The legislative conditions that the Congress placed on the use of fiscal year 2006 ACE appropriated funds have been either partially or fully satisfied by the latest expenditure plan and related program documentation and activities. However, more can be done to better address several aspects of these conditions. For example:

One legislative condition states that the plan should meet OMB’s capital planning and investment control review requirements, which include addressing security and privacy issues. However, a privacy impact assessment for ACE has been in draft for several months and is not yet approved. Another capital planning and investment control review requirement is that performance goals and measures be provided in the business case for ACE. Although CBP describes selected performance goals and measures, the goals (i.e., targets) are not always realistic (we provide further discussion of this issue later in this report).

According to another legislative condition, the expenditure plan must comply with DHS’s enterprise architecture. However, DHS does not have a documented methodology for evaluating programs for compliance with its enterprise architecture, other than relying on the professional expertise of its staff.

According to a third legislative condition, the DHS Chief Information Officer is to certify that an independent verification and validation (IV&V) agent is under contract. Although DHS satisfied this condition, the scope of the IV&V contractor’s activities is not consistent with the operative industry standard, which states that IV&V should extend to key system products and development processes.

CBP has addressed some recommendations, while progress has been slow on others. Each recommendation, along with the status of actions to address it, is summarized below.

Ensure that future expenditure plans are based on cost estimates that are reconciled with independent cost estimates. Complete.
In October 2005, CBP, with contractor support, compared the program plan cost estimate with the independent cost estimate. According to the analysis performed, the two estimates are consistent.

Develop and implement a rigorous and analytically verifiable cost estimating program that embodies the tenets of effective estimating, as defined in the institutional and project-specific estimating models developed by the Software Engineering Institute (SEI). In progress. CBP has taken steps such as (1) hiring a contractor to develop cost estimates (including contract task order cost estimates) that are independent of CBP’s estimates, and (2) tasking a support contractor with evaluating both the independent and CBP estimates against the criteria defined by SEI. According to the results of the support contractor’s evaluation, the independent estimates satisfied the SEI criteria; CBP’s estimates largely satisfied the criteria. However, according to the support contractor, CBP’s cost estimating had limitations. First, the CBP estimate did not adequately consider past projects in its cost and schedule estimates. In addition, the CBP estimate was an aggregation of estimates developed separately for three ACE components, each according to a different cost estimating methodology; the support contractor advised against this approach, recommending that component estimates be based on the same methodology.

Immediately develop and implement a human capital management strategy that provides both near- and long-term solutions to program office human capital capacity limitations, and report quarterly to the Appropriations Committees on the progress of efforts to do so. In progress. CBP has expanded its contractor and government workforce dedicated to the ACE program by merging staff assigned to trade-related legacy systems with the ACE program staff.
In addition, it is beginning to use subject matter experts from existing field operations advisory boards to help program officials define requirements for future releases. However, it does not have a documented human capital strategy covering its ACE program.

Have future ACE expenditure plans specifically address any proposals or plans, whether tentative or approved, for extending and using ACE infrastructure to support other homeland security applications, including any impact on ACE of such proposals and plans. In progress. The expenditure plan describes steps both planned and under way to ensure that ACE infrastructure supports both ACE and other homeland security applications. For example, it states that both ACE and the United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program should conform to the DHS enterprise architecture, which is to define standard shared services that the two systems can request. Such a service-oriented architecture is intended to promote reuse and to reduce overlap and duplication.

Define measures, and collect and use associated metrics, for determining whether prior and future program management improvements are successful. In progress. CBP continues to make changes that are intended to improve overall program management, but it has not consistently defined measures to determine whether the changes are successful. For example, CBP has reorganized its Office of Information Technology; this reorganization is intended to improve program management by providing (1) enhanced government oversight of ACE development, (2) better definition of requirements for future ACE releases, and (3) faster and cheaper delivery of ACE capabilities. However, program officials told us that they have not established measures or targets for determining whether the reorganization is providing these benefits.

Define and implement an ACE accountability framework that fulfills the following: 1.
The framework should cover all program commitment areas, including key expected or estimated system (a) capabilities, use, and quality; (b) benefits and mission value; (c) costs; and (d) milestones and schedules. In progress. CBP has prepared an initial version of an accountability framework that it intends to improve as it proceeds. The framework is built around measuring progress against costs, milestones, schedules, and risks for select releases; however, the benefit measurement has not been well defined, and the performance targets are not always realistic.

2. The framework should ensure currency, relevance, and completeness of all program commitments made to the Congress in expenditure plans. In progress. The fiscal year 2006 expenditure plan includes inaccurate, dated, and incomplete information and omits other relevant information. For example, the plan did not include information regarding CBP’s decision to eliminate the dependencies among the screening and targeting releases and the cargo releases, and to take advantage of the capabilities of its existing Automated Targeting System.

3. The framework should ensure reliable data that are relevant to measuring progress against commitments. In progress. The data that CBP uses to measure progress against commitments are not consistently reliable. For example, data in the defect tracking system show that defects in Release 4 (which is now operational) have not been closed; however, program officials told us that many of these defects have been resolved.

4. The framework should ensure that future expenditure plans report progress against commitments contained in prior expenditure plans. In progress. The current expenditure plan does not adequately describe progress against commitments made in previous plans.
For example, the plan provides a summary of the funding requested in each of the previous six expenditure plans, but it does not provide information on whether these funding amounts were actually expended or obligated as planned.

5. The framework should ensure that criteria for exiting key readiness milestones adequately consider indicators of system maturity, such as the severity of open defects. In progress. ACE milestone exit criteria provide for addressing the risk associated with severe defects that are unresolved. Using these criteria, CBP passed several release milestones with severe defects still open. However, CBP officials were unable to provide us with any documentation on how they assessed the inherent risks of passing these milestones with open severe defects.

6. The framework should ensure clear and unambiguous delineation of the respective roles and responsibilities of the government and the prime contractor. Complete. The current ACE program plan describes general roles and responsibilities for the government and the prime contractor. More detailed roles have been documented in a roles and responsibilities matrix that assigns primary responsibility for each activity.

Report quarterly to the House and Senate Appropriations Committees on efforts to address our open recommendations. In progress. CBP submitted quarterly reports to both Committees on its efforts to address our open recommendations; however, progress in addressing our recommendations was not always reported accurately.

We have several observations about the development of ACE releases, as well as several more concerning the performance of ACE releases that are deployed and operating.

ACE development: Steps have been taken to address a past pattern of ACE release shortfalls, but new release management weaknesses are emerging.
As we have previously observed, CBP established a pattern of borrowing resources from future releases to address problems with the quality of earlier releases; this led to schedule delays and cost overruns. This pattern has continued with the most recently deployed cargo release, which developed problems that caused delays with a subsequent screening and targeting release. CBP took steps to mitigate this problem by eliminating the dependencies between the cargo releases and the screening and targeting releases. However, CBP’s planned schedule for developing additional releases includes a significant level of concurrency, because of CBP’s interest in delivering ACE functionality sooner. Such concurrency between ACE release activities has led to cost overruns and schedule delays in the past. Thus, the revised ACE plans and actions are potentially reintroducing the same problems that produced past shortfalls.

We made several specific observations related to these weaknesses, including the following: On two recent releases, key milestones were passed despite unresolved severe defects. Officials told us that the risk of proceeding did not outweigh the need to get the releases to users, and thereby gain user acceptance and feedback. However, the risks were not documented and formally managed. Concurrency in developing early ACE releases caused schedule slips and cost overruns. Despite these experiences, CBP has established a risky plan that involves considerable overlap across the development schedules for three future releases. Although the use of earned value management (EVM) is an OMB requirement, it was not being used to manage the development of two recent releases. For example, CBP discontinued use of EVM on one release because this method was not familiar to staff who were transferred to work on the program.

ACE operations: Operational performance has been mixed, and mission impact is unclear. ACE releases one through four are in operation.
To date, these releases’ operational performance has been uneven. For example, ACE has largely been meeting its goals for being available and responsive in processing virtually all daily transactions, and has decreased truck processing times at some ports. However, ACE is not being used by as many CBP and trade personnel as was expected, and truck processing times at other ports have increased. Moreover, overall user satisfaction has been low. In addition, ACE goals, expected mission benefits, and performance measures are not fully defined and adequately aligned with each other. For example, not every goal has defined benefits; benefits are defined only in terms of efficiency gains; not every benefit has an associated business result; and not every benefit and business result has associated performance measures. Further, where performance measures have been defined, the associated targets are not always realistic. For example, the performance target in fiscal year 2005 for ACE usage was that 11 percent of all CBP employees would use ACE; however, many CBP employees will never need to use the system, and the target does not reflect this. Because performance measures are not always realistic or aligned with program goals and benefits, it is unclear whether ACE has realized—or will realize—the mission value that it was intended to bring to CBP’s and other agencies’ trade- and border security-related operations.

The legislative conditions that the Congress placed on the use of fiscal year 2006 ACE appropriated funds have been either partially or fully satisfied by the latest expenditure plan and related program documentation and activities. Nevertheless, more can be done to better address several aspects of these conditions, such as ensuring that the program’s privacy impact assessment is approved, measuring ACE performance and results, ensuring architectural alignment, and employing effective IV&V practices.
Given that the legislative conditions are collectively intended to promote accountability and increase the chances of program success, it is important that each receive DHS’s full attention. Also important to ACE’s success is the swift and complete implementation of the recommendations that we have previously made to complement the legislative conditions and improve program management, performance, and accountability. In this regard, some recommendations have been addressed, while progress has been slow on others, such as accurately reporting to the Appropriations Committees on CBP’s progress in implementing our prior recommendations; developing and implementing a strategic approach to meeting the program’s human capital needs; using criteria for exiting key milestones that adequately consider indicators of system maturity, such as severity of open defects and the associated risks; and developing and implementing a performance and accountability framework for ensuring that promised capabilities and benefits are delivered on time and within budget. To its credit, CBP has taken several steps to stem the pattern of cost, schedule, and performance shortfalls that it experienced on early ACE releases. However, future releases are unlikely to realize the impact of these steps because revised ACE plans and actions are reintroducing the same pattern that led to early release shortfalls. This pattern includes not formally and transparently considering, and proactively addressing, the risks associated with passing key release milestones with known severe defects; building considerable overlap and concurrency in the development schedules of releases that will contend for the same resources; and not performing EVM on all releases. If this pattern continues, the prospects for a successful program will be diminished.
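Earned value management, noted above as missing from some releases, works by comparing the budgeted value of work scheduled, the budgeted value of work actually performed, and the money actually spent. The sketch below is a generic illustration of the standard EVM measures, not CBP’s or its contractor’s implementation; all figures are hypothetical.

```python
# A generic sketch of the earned value management (EVM) measures referred to
# above. The function and figures are illustrative, not CBP's actual data.

def evm_metrics(pv: float, ev: float, ac: float) -> dict:
    """Derive standard EVM measures from planned value (PV, budgeted cost of
    work scheduled), earned value (EV, budgeted cost of work performed), and
    actual cost (AC, actual cost of work performed)."""
    return {
        "schedule_variance": ev - pv,  # negative means behind schedule
        "cost_variance": ev - ac,      # negative means over budget
        "spi": ev / pv,                # schedule performance index; < 1.0 is behind
        "cpi": ev / ac,                # cost performance index; < 1.0 is over budget
    }

# Hypothetical release: $50M of work planned to date, $40M worth completed,
# $48M actually spent -- behind schedule (SPI 0.8) and over budget (CPI ~0.83).
metrics = evm_metrics(pv=50.0, ev=40.0, ac=48.0)
print(metrics)
```

The value of tracking these indices continuously is that a slipping SPI or CPI surfaces the cost and schedule effects of overlapping release work early, rather than at a missed milestone.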
Although availability and responsiveness targets are largely being met and long-standing help desk limitations are being addressed, the prospects for a successful program nevertheless remain unclear. The true measure of ACE’s success is arguably the mission value that it brings to CBP’s and other agencies’ trade- and border security-related operations and users. Such value depends both on the operational performance of ACE and on CBP’s ability to demonstrate that this performance is achieving program goals, delivering expected benefits, and producing desired business results. At this juncture, however, neither the system’s performance nor its value is clear because of several factors: the operational performance of deployed releases has been mixed; users’ satisfaction has been low; the relationships among goals, benefits, and desired business outcomes are not evident; and the range of measures needed to create a complete and realistic picture of ACE’s performance is missing. In summary, a number of ACE activities have been and are being done well; these have contributed to the program’s progress to date and will go a long way in determining the program’s ultimate success. However, it will be important for CBP to effectively address long-standing ACE management challenges along with emerging problems. Until it does so, ACE will remain a risky program. To assist CBP in managing ACE—and increasing the chances that it will deliver required capabilities on time and within budget, demonstrating promised mission benefits and results—we recommend that the Secretary of Homeland Security direct the appropriate departmental officials to fully address those legislative conditions associated with having an approved privacy impact assessment and ensuring architectural alignment. 
We also recommend that the Secretary, through CBP’s Acting Commissioner, direct the Assistant Commissioner for Information and Technology to fully address those legislative conditions associated with measuring ACE performance and results and employing effective IV&V practices; accurately report to the appropriations committees on CBP’s progress in implementing our prior recommendations; include in the June 30, 2006, quarterly update report to the appropriations committees a strategy for managing ACE human capital needs and the ACE framework for managing performance and ensuring accountability; document key milestone decisions in a way that reflects the risks associated with proceeding with unresolved severe defects and provides for mitigating these risks; minimize the degree of overlap and concurrence across ongoing and future ACE releases, and capture and mitigate the associated risks of any residual concurrence; use EVM in the development of all existing and future releases; develop the range of realistic ACE performance measures and targets needed to support an outcome-based, results-oriented accountability framework, including user satisfaction with ACE; and explicitly align ACE program goals, benefits, desired business outcomes, and performance measures. In written comments on a draft of this report signed by the Director, Departmental GAO/OIG Liaison, DHS agreed with our findings concerning progress in addressing our prior recommendations, and it agreed with the recommendations in this report. DHS also described actions that it has under way and planned to address the recommendations. The department’s comments are reprinted in appendix II. We are sending copies of this report to the Chairmen and Ranking Minority Members of other Senate and House committees and subcommittees that have authorization and oversight responsibilities for homeland security. 
We are also sending copies to the DHS Secretary, the CBP Commissioner and, upon their request, to other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3459 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix III.

ACE is intended to:

Support border security by enhancing analysis and information sharing with other government agencies to deal with increasing security threats to our nation.

Provide CBP personnel with the technology and information needed to decide, before a shipment reaches the border, what should be targeted because it is a security threat, and what should be expedited because it complies with U.S. laws.

Provide an integrated, fully automated information system to enable the efficient collection, processing, and analysis of commercial import and export data.

Streamline time-consuming and labor-intensive tasks for CBP personnel and the trade community, through a single, Web-based interface, reducing costs for the government and the trade community.

Enable users to process, view, and manage their accounts nationally, and obtain historical information on cargo, conveyances, and crew, based on screening and targeting rules.

Enable CBP to comply with legislative mandates to improve efficiency/effectiveness and provide better customer service to U.S. citizens.

CBP was formed from the former U.S. Customs Service and other entities with border protection responsibility. OMB Circular A-11 establishes policy for planning, budgeting, acquisition, and management of federal capital assets.
On February 2, 2006, DHS submitted its fiscal year 2006 expenditure plan for $316.8 million to the House and Senate Appropriations Subcommittees on Homeland Security. DHS currently plans to acquire and deploy ACE in 11 increments, referred to as releases. The first three releases are fully deployed and operating, and the fourth release is being deployed. Other releases are in various stages of definition and development.

The purpose of the Investment Review Board is to integrate capital planning and investment control, budgeting, acquisition, and management of investments. It is also to ensure that spending on investments directly supports and furthers the mission and that this spending provides optimal benefits and capabilities to stakeholders and customers.

Our objectives were to (1) determine whether the ACE fiscal year 2006 expenditure plan satisfies the legislative conditions, (2) determine the status of our open recommendations on ACE, and (3) provide any other observations about the expenditure plan and DHS’s management of the ACE program. We conducted our work at CBP headquarters and contractor facilities in the Washington, D.C., metropolitan area, as well as at the port in Blaine, Washington, from July 2005 through March 2006, in accordance with generally accepted government auditing standards. Details of our scope and methodology are provided in attachment 1.

The legislative conditions require that the expenditure plan:

1. Meets the capital planning and investment control review requirements established by OMB, including OMB Circular A-11, part 7.

2. Complies with DHS’s enterprise architecture.

3. Complies with the acquisition rules, requirements, guidelines, and systems acquisition management practices of the federal government.

4. Includes a certification by the Chief Information Officer of DHS that an independent verification and validation agent is currently under contract.

5. Is reviewed and approved by the DHS Investment Review Board, Secretary of Homeland Security, and OMB.

6. Is reviewed by GAO.
accurately reporting to the Appropriations Committees on CBP’s progress in implementing our prior recommendations; developing and implementing a strategic approach to meeting the program’s human capital needs; using criteria for exiting key milestones that adequately consider indicators of system maturity, such as severity of open defects and the associated risks; and developing and implementing a performance and accountability framework for ensuring that promised capabilities and benefits are delivered on time and within budget.

The following table summarizes the status of each of the open recommendations.

1. Ensure that future expenditure plans are based on cost estimates that are reconciled with independent cost estimates.

2. Develop and implement a rigorous and analytically verifiable cost estimating program.

3. Immediately develop and implement a human capital management strategy that provides both near- and long-term solutions; develop and implement missing human capital practices.

4. Have future ACE expenditure plans specifically address any proposals or plans for extending and using ACE infrastructure to support other homeland security applications.

5. Define measures, and collect and use associated metrics, for determining whether prior and future program management improvements are successful.

6. Define and implement an ACE accountability framework that ensures

a. coverage of all program commitment areas, including key expected or estimated system (1) capabilities, use, and quality; (2) benefits and mission value; (3) costs; and (4) milestones and schedules.

b. currency, relevance, and completeness of all such commitments made to the Congress in expenditure plans.

c. reliable data relevant to measuring progress against commitments.

d. reporting in future expenditure plans progress against commitments contained in prior expenditure plans.

e. use of criteria for exiting key readiness milestones that adequately consider indicators of system maturity, such as severity of open defects.

f.
clear and unambiguous delineation of the respective roles and responsibilities of the government and the prime contractor.

7. Report quarterly to the House and Senate Appropriations Committees on efforts to address open GAO recommendations.

Release 4 pilot revealed performance problems that caused the pilot period to be extended and the pilot scope to be reduced.

Release 4 operational readiness review was passed despite unresolved severe defects, and Release 4 is now being deployed.

Release 4 quality problems and enhancement needs have led to changes in how ACE release requirements are defined.

Release 4 problems delayed Screening 1 and led to a revised strategy for delivering all screening and targeting releases.

Screening 1 key milestones were passed despite unresolved severe defects.

Experience with Screening 1 and Release 4 makes the new strategy of concurrently developing Releases 5, 6, and 7 risky.

Earned value management (EVM), a technique for measuring progress toward meeting deliverables, is not being used to manage Screening 1 and Release 5.

ACE’s operational performance has been mixed, and mission impact is unclear.

Availability and responsiveness performance targets are largely being met.

Processing times for trucks crossing the border at key ports vary.

Long-standing help desk limitations are being addressed.

Usage by CBP and the trade is lower than expected.

User satisfaction was reported as low.

Performance targets are not always realistic.

Goals, expected mission benefits, and performance measures are not adequately aligned.
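The misalignment described above (goals without defined benefits, benefits without business results or measures) amounts to a simple consistency check over the program’s goal hierarchy. The sketch below illustrates that check; the structure and the example entries are hypothetical and do not represent CBP’s actual goals or measures.

```python
# Illustrative sketch (hypothetical structure, not CBP's actual goal
# hierarchy): walk a goal -> benefit -> business result -> measure chain
# and report any element left dangling.

def alignment_gaps(goals: dict) -> list:
    """`goals` maps each goal to a list of benefits; each benefit is a dict
    that may carry a 'business_result' and a 'measures' list. Returns a
    description of every gap found."""
    gaps = []
    for goal, benefits in goals.items():
        if not benefits:
            gaps.append(f"goal '{goal}' has no defined benefits")
        for b in benefits:
            if not b.get("business_result"):
                gaps.append(f"benefit '{b['name']}' has no business result")
            if not b.get("measures"):
                gaps.append(f"benefit '{b['name']}' has no performance measures")
    return gaps

# Hypothetical example: one fully aligned goal, one goal with no benefits.
goals = {
    "expedite legitimate trade": [
        {"name": "faster truck processing",
         "business_result": "reduced border wait times",
         "measures": ["average processing minutes per truck"]},
    ],
    "improve targeting": [],   # dangling goal -> flagged
}
print(alignment_gaps(goals))   # flags only the goal with no benefits
```

Run against a complete goal hierarchy, a check of this kind makes the "not adequately aligned" finding concrete: every item it returns is a commitment with no way to demonstrate results.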
minimize the degree of overlap and concurrency across ongoing and future ACE releases, and capture and mitigate the associated risks of any residual concurrency; use EVM in the development of all existing and future releases; develop the range of realistic ACE performance measures and targets needed to support an outcome-based, results-oriented accountability framework, including user satisfaction with ACE; and explicitly align ACE program goals, benefits, desired business outcomes, and performance measures.

ACE is to support eight major CBP business areas.

1. Release Processing: Processing of cargo for import or export; tracking of conveyances, cargo, and crew; and processing of in-bond, warehouse, Foreign Trade Zone, and special import and export entries.

2. Entry Processing: Liquidation and closeout of entries and entry summaries related to imports, and processing of protests and decisions.

3. Finance: Recording of revenue, performance of fund accounting, and maintenance of the general ledger.

4. Account Relationships: Maintenance of trade accounts, their bonds and CBP-issued licenses, and their activity.

5. Legal and Policy: Management of import and export legal, regulatory, policy and procedural, and rulings issues.

A release is the act of CBP permitting imported merchandise to enter the United States. An entry is the documentation required to be submitted to CBP in order for it to permit imported merchandise to enter the United States. Screening is the method of determining high-risk people or shipments before their arrival at a port. Targeting is the risk-based determination of whether a shipment should undergo additional documentary review or physical inspection.

The Client Tier includes user workstations and external system interfaces. The Presentation Tier provides the mechanisms for the user workstations and external systems to access ACE.
The Integration Services Tier provides the middleware for integrating and routing information between ACE software applications and legacy systems. The Applications Tier includes the ACE software applications comprising commercial products (e.g., SAP) and custom-developed software that provide the functionality supporting CBP business processes. The Data Tier provides the data management and warehousing services for ACE, including database backup, restore, recovery, and space management. Security and data privacy are to be embedded in all five layers.

SAP is a commercial enterprise resource planning software product that has multiple modules, each performing separate but integrated business functions. ACE will use SAP to support many of its business processes and functions. CBP’s Modernization Office is also using SAP as part of a joint project with its Office of Finance to support financial management, procurement, property management, cost accounting, and general ledger processes.

The following presents the functionality provided by the 11 ACE releases, their status, and associated plans.

Release 1 (ACE Foundation): Provide information technology (IT) infrastructure—computer hardware and system software—to support subsequent system releases. This release was deployed in October 2003 and is operating.

Release 2 (Account Creation): Give initial group of CBP national account managers and importers access to account information, such as trade activity. This release was deployed in October 2003 and is operating.

Release 3 (Periodic Payment): Provide additional account managers and importers, as well as brokers and carriers, access to account information; provide initial financial transaction processing and CBP revenue collection capability, allowing importers and their brokers to make monthly payments of duties and fees. This release was deployed in July 2004 and is operating.

CBP national account managers work with the largest importers.
Brokers obtain licenses from CBP to conduct business on behalf of the importers by filling out paperwork and obtaining a bond; carriers are individuals or organizations engaged in transporting goods for hire.

Background Summary of ACE Releases

Release 4 (e-Manifest: Trucks): Provide electronic truck manifest processing and interfacing to legacy enforcement systems and databases. As discussed later, this release is operating at 39 truck border crossings as of March 8, 2006. Additional enhancement releases for Release 4 have been deployed since May 2005.

Screening 1 (Screening Foundation): Establish the foundation for screening cargo and conveyances by centralizing criteria and results into a single standard database; allow users to define and maintain data sources and business rules. This release is scheduled for deployment beginning in March 2006.

Screening 2 (Targeting Foundation): Establish the foundation for advanced targeting capabilities by enabling CBP’s National Targeting Center to search multiple databases for relevant facts and actionable intelligence. This release is scheduled for deployment in two drops:

Screening 2 Targeting Platform (TP): Provide a platform to collect and search relevant data and other information from multiple databases. This drop is scheduled for deployment beginning in June 2006.

Manifests are lists of passengers or invoices of cargo for a vehicle, such as a truck, ship, or plane.

add new data sources, enhance screening business rules, and provide reporting capabilities. This drop is scheduled for deployment beginning in October 2006; however, CBP deployed a prototype to the National Targeting Center as part of an effort to gather detailed requirements.

Release 5 (Entry Summary, Accounts, and Revenue): Leverage SAP technologies to enhance and expand accounts management, financial management, and entry summary functionality.
This release is being developed in two drops:

Master Data and Enhanced Accounts (Drop A1): Use SAP to deliver enhanced account creation and maintenance functionality and expand the types of accounts managed in ACE. This drop is scheduled for deployment beginning in May 2007.

Entry Summary and Revenue (Drop A2): Expand ACE to encompass entry summary, interfaces with participating government agencies, calculation of duties and fees, reconciliation processing, and refunds. This drop is scheduled for deployment beginning in July 2008.

Screening 3 (Advanced Targeting Capabilities): Provide enhanced screening for reconciliation, intermodal manifest, Food and Drug Administration data, and in-bond, warehouse, and Foreign Trade Zone authorized movements; integrate additional data sources into targeting capability; and provide risk management capability. This release is scheduled for deployment beginning in February 2007.

Screening 4 (Full Screening and Targeting): Provide full screening and targeting functionality supporting all modes of transportation and all transactions within the cargo management life cycle, including enhanced screening and targeting capability with additional technologies. This release is scheduled for deployment beginning in December 2008.

The multimodal manifest involves the processing and tracking of cargo as it transfers between different modes of transportation, such as cargo that arrives by ship, is transferred to a truck, and then is loaded onto an airplane.

E-Manifest: Rail and Sea (Drop M1): Extend electronic manifest functionality to rail and sea shipments; convert rail, sea, and truck electronic manifests into the multimodal manifest. Drop M1 is scheduled for deployment beginning in July 2008.

E-Manifest: Air (Drop M2): Provide the electronic manifest capability to air shipments, and bring all modes of transportation into the multimodal manifest. Drop M2 is scheduled for deployment beginning in October 2007.
E-Manifest: Enhanced Tracking (Drop M3): Provide the capability to track cargo, conveyances, individuals, and equipment, providing more timely and accurate shipment status information. Drop M3 is scheduled for deployment beginning in June 2009.

Release 7 (Exports and Cargo Control): Implement the remaining accounts management, revenue, manifest, and release and export functionality. This release is planned for development in two drops:

ESAR: Drawback, Protest, and IASS (Drop A3): Provide the import activity summary statement (IASS), drawback functionality, and enhanced protest; provide on-line processing for trade account applications. (An import activity summary statement is a summary of an importer's shipment activities over a specific period of time that is transmitted electronically to CBP on a periodic basis by importers and brokers.) Drop A3 is scheduled for deployment beginning in December 2009.

E-Manifest: Final Exports and Manifest (Drop M4): Extend the electronic manifest to mail, pipeline, and hand-carried shipments; provide for electronic export processing. Drop M4 is scheduled for deployment beginning in December 2009.

ACE Satisfaction of Modernization Act Requirements

ACE is intended to support CBP's satisfaction of the provisions of Title VI of the North American Free Trade Agreement Implementation Act, commonly known as the Modernization Act. Subtitle B of the Modernization Act contains the various automation provisions that were intended to enable the government to modernize international trade processes and permit CBP to adopt an informed compliance approach with industry. The following table illustrates how each ACE release is to fulfill the requirements of Subtitle B.

The ACE program has also been supported by a series of contract task orders, which include the following:
- Initial program and project management; continued by task 009.
- Initial enterprise architecture and system engineering; continued by task 010.
- Initial requirements development and program planning effort; continued by tasks for specific increments/releases.
- Design, development, testing, and deployment of Releases 1 and 2 (initially intended to build Increment 1, which was subsequently divided into four releases).
- Development of Release 5 project plan, documentation of ACE business processes, and development of an ACE implementation strategy (March 2004).
- Design, development, and testing of Releases 3 and 4, and deployment of Release 3.
- Follow-on to task 001 to continue program and project management activities.
- Follow-on to task 002 to continue enterprise architecture and system engineering activities; continued by task 017.
- Acquisition and setup of the necessary infrastructure and facilities for the contractor to design, develop, and test releases (March 2003).
- Establishment of the infrastructure to operate and maintain releases.
- Conversion of scripts for interfacing desktop applications (MS Word and Excel) and mainframe computer applications.
- Development, demonstration, and delivery of a prototype to provide CBP insight into whether knowledge-based risk management should be used in ACE (March 2004).
- Enterprise process improvement integration.
- International Trade Data System (ITDS): assistance for participating government agencies to define requirements for an integrated ACE/ITDS system.
- Development and demonstration of technology prototypes to provide CBP insight into whether the technologies should be used in ACE.
- Program management and support to organizational change management through activities such as impact assessments, end user training, communication, and outreach.
- Coordination of program activities and alignment of enterprise objectives and technical plans through architecture and engineering activities.
- Application of the CBP Enterprise Life Cycle Methodology to integrate multiple projects and other ongoing Customs operations into CBPMO.
- Follow-on to task 012: establishment, integration, configuration, and maintenance of the infrastructure to support Releases 2, 3, and 4.
- Design, develop, test, and deploy the Screening and Targeting (S&T) operational capability.
- Project definition and initial design for Release 5; initial project authorization and definition for Release 6.

Chronology of Seven ACE Expenditure Plans

Since March 2001, seven ACE expenditure plans have been submitted. (In March 2001, the appropriations committees also approved the use of $5 million in stopgap funding for program management office operations.) Collectively, the seven plans have identified a total of $1,698.1 million in funding.

On March 26, 2001, CBP submitted to its appropriations committees the first expenditure plan, seeking $45 million for the modernization contract to sustain Customs and Border Protection Modernization Office (CBPMO) operations, including contractor support. The appropriations committees subsequently approved the use of $45 million, bringing total ACE funding to $50 million.

On February 1, 2002, the second expenditure plan sought $206.9 million to sustain CBPMO operations; define, design, develop, and deploy Increment 1, Release 1 (now Releases 1 and 2); and identify requirements for Increment 2 (now part of Releases 5, 6, and 7 and Screenings 1 and 2). The appropriations committees subsequently approved the use of $188.6 million, bringing total ACE funding to $238.6 million.

On May 24, 2002, the third expenditure plan sought $190.2 million to define, design, develop, and implement Increment 1, Release 2 (now Releases 3 and 4). The appropriations committees subsequently approved the use of $190.2 million, bringing total ACE funding to $428.8 million.

On November 22, 2002, the fourth expenditure plan sought $314 million to operate and maintain Increment 1 (now Releases 1, 2, 3, and 4); to design and develop Increment 2, Release 1 (now part of Releases 5, 6, and 7 and Screening 1); and to define requirements and plan Increment 3 (now part of Releases 5, 6, and 7 and Screenings 2, 3, and 4).
The appropriations committees subsequently approved the use of $314 million, bringing total ACE funding to $742.8 million.

On January 21, 2004, the fifth expenditure plan sought $318.7 million to implement ACE infrastructure; to support, operate, and maintain ACE; and to define and design Release 6 (now part of Releases 5, 6, and 7) and Selectivity 2 (now Screenings 2 and 3). The appropriations committees subsequently approved the use of $316.8 million, bringing total ACE funding to $1,059.6 million.

On November 8, 2004, the sixth expenditure plan sought $321.7 million for design and development of Release 5 and Screening 2, definition of Screening 3, ACE program management, architecture and engineering, and operations and maintenance. The appropriations committees subsequently approved the use of $321.7 million, bringing total ACE funding to $1,381.3 million.

On February 2, 2006, CBP submitted its seventh expenditure plan, seeking $316.8 million for detailed design and development of Release 5, development of Release 6, deployment of Screening 2, development and deployment of Screening 3, program management and operations, and ACE operations, maintenance, and infrastructure implementation.

Background: ACE Testing and Related Milestones

Development of each ACE release includes system integration and system acceptance testing, followed by a pilot period that includes user acceptance testing. Generally, the purpose of these tests is to ensure that the system meets defined system requirements or satisfies user needs. The associated readiness reviews are to ensure that the system is ready to proceed to the next stage of testing or operation. Tests and their related milestones are described in the following table.

System integration testing: Verify that related system, subsystem, or module components are capable of integrating and interfacing with each other.
System acceptance testing: Verify that the developed system, subsystem, or module operates in accordance with requirements.

User acceptance testing: Verify that the functional scope of the release meets the business functions for the users.

Defects identified during testing are classified by severity:

Critical (Severity 1): Defect prevents or precludes the performance of an operational or mission-essential capability, jeopardizes safety or security, or causes the system, application, process, or function to fail to respond or to end abnormally.

Severe (Severity 2): Defect prevents or precludes the system from working as specified and/or produces an error that degrades or impacts system or user functionality.

Moderate (Severity 3): Defect prevents or precludes the system from working as specified and/or produces an error that degrades or impacts system or user functionality, but an acceptable (reasonable and effective) work-around is in place that rectifies the defect until a permanent fix can be made.

Minor (Severity 4): Defect is inconsequential, cosmetic, or inconvenient and does not prevent users from using the system to accomplish their tasks.

In addition to the person named above, Justin Booth, Barbara Collier, William Cook, Neil Doherty, Michael Marshlick, Shannin O'Neill, Tomas Ramirez, and Jennifer Vitalbo made key contributions to this report.
The Department of Homeland Security (DHS) is conducting a multiyear, multibillion-dollar acquisition of a new trade processing system, planned to support the movement of legitimate imports and exports and strengthen border security. By congressional mandate, plans for expenditure of appropriated funds on this system, the Automated Commercial Environment (ACE), must meet certain conditions, including GAO review. This study addresses whether the fiscal year 2006 plan satisfies these conditions; it also describes the status of DHS's efforts to implement prior GAO recommendations for improving ACE management, and provides observations about the plan and DHS's management of the program. The fiscal year 2006 ACE expenditure plan, including related program documentation and program officials' statements, either satisfied or partially satisfied the legislative conditions imposed by the Congress; however, more can be done to better address several aspects of these conditions. In addition, DHS has addressed some recommendations that GAO has previously made, but progress has been slow in addressing several recommendations aimed at strengthening ACE management. For example, DHS has more to do to implement the recommendation that it establish an ACE accountability framework that, among other things, ensures that expenditure plans report progress against commitments made in prior plans. Implementing a performance and accountability framework is important for ensuring that promised capabilities and benefits are delivered on time and within budget. In addition, describing progress against past commitments is essential to permit meaningful congressional oversight. Among GAO's observations about the ACE program and its management are several related to the need to effectively set and use performance goals and measures. Although the program set performance goals, these targets were not always realistic. 
For example, in fiscal year 2005, the program set a target that 11 percent of all Customs and Border Protection (CBP) employees would use ACE. However, this target does not reflect the fact that many CBP employees will never need to use the system. Additionally, the program has established 6 program goals, 11 business results, 23 benefits, and 17 performance measures, but the relationships among these are not fully defined or adequately aligned with each other. For example, not every goal has defined benefits, and not every benefit has an associated performance measure. Without realistic ACE performance measures and targets that are aligned with the overall program goals and desired results, DHS will be challenged in its efforts to establish an accountability framework for ACE that will help to ensure that the program delivers its expected benefits. In addition, DHS plans to develop several increments, referred to as "releases," concurrently; in the past, such concurrency has led to cost overruns and schedule delays because releases contended for the same resources, and resources that were to be used on later releases were diverted to earlier ones. Nevertheless, because DHS believes that concurrent development will allow it to deliver ACE functionality sooner, it is reintroducing the very conditions that produced those past shortfalls.
UN Secretariat procurement has tripled over the past decade as peacekeeping operations have grown. The UN procures a variety of goods and services, including freight services, motor vehicle parts, and telecommunications equipment. UN procurement is conducted by the Department of Management and by the Department of Peacekeeping Operations through its field missions. Internal controls can be used to manage an organization and to identify areas of weakness and strength in the procurement process. In 2004, the UN spent a total of $1.31 billion on procurement, with $276 million devoted to air transportation services alone. The UN procures a variety of goods and services, including air transportation, freight forwarding and delivery, motor vehicle parts and transportation equipment, and telecommunications equipment and services, as shown in figure 1. The Department of Management is responsible for procurement in the UN Secretariat. The department's Procurement Service develops policies and procedures for headquarters and field procurement based on the UN Financial Regulations and Rules, oversees training for all staff involved in procurement, and provides advisory support for field purchases within the authority of field missions. The Procurement Service also negotiates, prepares, and administers contracts for goods and services for UN headquarters and certain large contracts (such as air transportation services or multiyear systems contracts) for peacekeeping missions. The UN's procurement process at headquarters involves several steps (see fig. 7, app. II). The UN's field procurement process involves the Peacekeeping Department and includes additional steps (see fig. 6, app. II). Each field mission is required to establish a Local Committee on Contracts to review and recommend contract awards for approval by the chief administrative officer of the mission.
The Local Committee on Contracts is composed of four members from the mission: a legal advisor and the respective chiefs of the mission's finance, general services, and transport sections. Field missions may not award contracts worth more than $75,000 without approval by the chief administrative officer of the mission based on the advice of the Local Committee on Contracts. In addition, field missions may not award contracts worth more than $200,000 without approval first by the mission's chief administrative officer based on the advice of the Local Committee on Contracts and then by the Department of Management based on the advice of the Headquarters Committee on Contracts. The Headquarters Committee on Contracts evaluates the proposed contracts and advises the Department of Management as to whether the contracts are in accordance with UN Financial Regulations and Rules and other UN policies. The Procurement Service employs about 70 individuals to procure items for the UN's headquarters operations, peacekeeping missions, international criminal tribunals, regional commissions, and upon request, other UN agencies and subsidiary organs. The Peacekeeping Department employs about 270 field procurement staff at its field missions. The members of the Headquarters Committee on Contracts are drawn from the UN Office of Central Support Services; Office of Programme Planning, Budget and Accounts; Office of Legal Affairs; and the Department of Economic and Social Affairs. The committee relies on support staff to assist it in coordinating its activities and drafting minutes detailing the results of its meetings. UN procurement has more than tripled over the past decade as UN peacekeeping operations have grown. UN procurement grew from $430 million in 1997 to $1.6 billion in 2005. This growth has been due primarily to a rapid expansion of UN peacekeeping operations. Eight of the UN's 15 current peacekeeping missions were established in 1999 or later.
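The tiered contract-approval rules described earlier amount to a simple decision procedure. A minimal sketch follows; only the $75,000 and $200,000 thresholds come from this report, while the function name and labels are hypothetical illustrations:

```python
def required_approvals(contract_value: float) -> list:
    """Return the approval chain for a field-mission contract award.

    Hypothetical illustration: only the $75,000 and $200,000 thresholds
    are taken from the report; the function and labels are ours.
    """
    approvals = []
    if contract_value > 75_000:
        # Above $75,000, the mission's chief administrative officer must
        # approve, based on the advice of the Local Committee on Contracts.
        approvals.append("chief administrative officer "
                         "(advice of Local Committee on Contracts)")
    if contract_value > 200_000:
        # Above $200,000, the Department of Management must also approve,
        # based on the advice of the Headquarters Committee on Contracts.
        approvals.append("Department of Management "
                         "(advice of Headquarters Committee on Contracts)")
    return approvals

print(required_approvals(50_000))   # within mission authority
print(required_approvals(150_000))  # chief administrative officer only
print(required_approvals(250_000))  # both approval tiers
```

The sketch makes the cumulative nature of the tiers explicit: a contract over $200,000 must clear both the mission-level and the headquarters-level review.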
Peacekeeping expenditures more than quadrupled between 1999 and 2005, as shown in figure 2. In addition, the number of military personnel in peacekeeping missions has increased fivefold, from about 14,000 in 1999 to about 73,000 as of February 2006. Although peacekeeping missions were originally conceived as short term in nature, many missions are long-standing and have continued for more than 5 years. As shown in figure 3, 85 percent of UN procurement in 2004 was conducted in support of peacekeeping operations. Peacekeeping field missions alone accounted for 35 percent of all UN procurement in 2004. Internal control is an integral part of managing an organization and can be used to identify areas of weakness and strength in the UN's procurement process. Internal control guidance provides a framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. Internal control comprises the plans, methods, and procedures used to meet missions, goals, and objectives. There are five interrelated components of internal control: (1) control environment, (2) control activities, (3) risk assessment, (4) information and communications, and (5) monitoring. UN funds are unnecessarily vulnerable to fraud, waste, and abuse because the UN lacks an effective organizational structure for managing procurement, has not demonstrated a commitment to improving its professional procurement workforce, and has failed to adopt specific ethics guidance for procurement officials. These conditions have weakened the UN control environment for procurement. The UN has not established a single organizational entity or mechanism capable of comprehensively managing procurement. As a result, it is unclear which department is accountable for addressing problems in the UN's field procurement process.
While the Department of Management is ultimately responsible for all UN procurement, neither it nor the UN Procurement Service has the organizational authority to supervise peacekeeping field procurement staff to provide reasonable assurance that they comply with UN regulations (see fig. 4). Procurement field staff, including the chief procurement officers, instead report to the Peacekeeping Department at headquarters through each mission’s chief administrative officer. Although the Department of Management has delegated authority for procurement of goods and services to the Peacekeeping Department’s Office of Mission Support at headquarters, we found that the Peacekeeping Department lacks the expertise, procedures, and capabilities needed to provide reasonable assurance that its field procurement staff are complying with UN regulations in executing this authority. The UN’s Office of Internal Oversight Services (OIOS) has concluded that neither department has taken reasonable care to safeguard UN assets. The UN previously considered giving the Peacekeeping Department the headquarters positions it needs to oversee field procurement activities. In 2000, a UN panel recommended that the Department of Management transfer procurement positions to the Peacekeeping Department’s headquarters. The Department of Management subsequently drafted a delegation of procurement authority that would have required the Peacekeeping Department to provide trained procurement staff at each peacekeeping mission with procurement authority and to establish headquarters systems to administer and monitor field procurement activities. However, the Department of Management’s delegation of authority did not provide for a transfer of procurement staff to the Peacekeeping Department. 
According to UN officials, the Peacekeeping Department rejected the proposed delegation of authority because (1) the department lacked resources to oversee field procurement activities, (2) the proposed delegation did not include a transfer of procurement staff to the Peacekeeping Department, and (3) the proposed delegation was more restrictive than previous delegations to the missions themselves. In 2005, the Department of Management delegated procurement authority to the headquarters of the Peacekeeping Department. Although the headquarters of the Peacekeeping Department accepted the delegation, it did so without the resources needed to oversee the execution of the delegation by its field procurement activities. The Peacekeeping Department is seeking to hire an individual to manage its field procurement activities. The individual’s responsibilities would include supervising field procurement activities, developing centralized procurement reporting procedures, monitoring field procurement trends, implementing improved management control tools, and working with the Department of Management to review existing procurement rules and regulations. Given the scope of these responsibilities, it is unclear how one individual would be able to fulfill the responsibilities of the proposed position. As of March 2006, Peacekeeping Department headquarters officials were interviewing candidates for this position. While the Peacekeeping Department field procurement staff may seek guidance on UN procurement policies from the Procurement Service, we found that they do so infrequently. Of the 19 field procurement chiefs or acting chiefs that we spoke with, 12 stated that they contacted Procurement Service staff as often as once a month or once a quarter. Four others stated that they contacted the Procurement Service once or twice a year, and three others stated that they had never contacted the Procurement Service for guidance during the past year. 
The lack of more frequent contact between the Procurement Service and Peacekeeping Department field staff is significant because field staff face the challenge of complying with the procurement policies established by the Procurement Service while securing needed goods and services under often difficult conditions to meet peacekeeping deadlines. The Peacekeeping Department has characterized its operational needs as being too great and its human resource demands as too intense for the UN’s existing procurement regulations and procedures. While most field officials that we spoke with reported that the Procurement Service had been at least somewhat helpful to them, several stated that they do not believe the Procurement Service understands the difficult field conditions in which they work. The UN has not demonstrated a commitment to improving its professional procurement staff in the form of training, a career development path, and other key human capital practices critical to attracting, developing, and retaining a qualified professional workforce. Due to significant control weaknesses in the UN’s procurement process, the UN has relied disproportionately on the actions of its staff to safeguard its resources. Given this reliance on staff and their substantial fiduciary responsibilities, management’s commitment to maintaining a competent, ethical procurement workforce is a particularly critical element of the UN’s internal control environment. Recent studies indicate that Procurement Service staff and peacekeeping procurement staff lack knowledge of UN procurement policies. For example, a November 2005 consultant report stated that Procurement Service staff did not appear to have a clear understanding of procurement policies and procedures, resulting in inconsistent application of procurement policies. In addition, OIOS found that field procurement staff in one of the largest peacekeeping missions lack sufficient knowledge of basic procurement policies and procedures. 
OIOS also found that, in another mission, requisitioning units that lacked procurement authority had directly purchased $9 million in goods and services during the start-up phase, violating the principle of segregation of duties between requisitioners and procurement staff. Moreover, most procurement staff lack professional certifications attesting to their procurement education, training, and experience. We found that 16 of 19 field procurement chiefs did not possess any professional certification of their procurement qualifications. A June 2005 consultant survey also found that only 3 of 41 Procurement Service officers and assistants had been certified by a recognized procurement certification body. The UN has not established requirements for headquarters and peacekeeping staff to obtain continuous training, resulting in inconsistent levels of training across the procurement workforce. Most field procurement chiefs stated that the training they had received from the UN was at least generally useful to their procurement roles, but 11 of 19 field procurement chiefs stated that they had received no procurement training over the last year. While most field procurement chiefs stated that their staff are adequately trained, all of them said that their staff would benefit from additional training, citing areas such as contract drafting and management, negotiation, business writing, and bid evaluation. Recently, a UN interagency group developed common certification standards for procurement staff in all UN agencies. However, the decision for a UN agency to participate in the certification program is voluntary. Each UN agency would be responsible for developing its own training materials to prepare staff for certification. As of January 2006, Procurement Service officials had not yet determined whether this common UN procurement certification would be a requirement for existing staff or for hiring new staff. 
Furthermore, UN officials acknowledged that the UN has not committed sufficient resources to a comprehensive training and certification program for its procurement staff. UN officials stated that Procurement Service training resources are insufficient to deliver adequate training. The Procurement Service’s annual training budget was $20,000 for approximately 70 staff as of March 2006. Similarly, 11 of 19 peacekeeping chief procurement officers stated that their training opportunities and resources are inadequate. Some indicated that a single field individual’s attendance at an annual conference of chief procurement officers consumed the mission’s annual allotment for procurement training. Peacekeeping Department officials noted that the lack of resources to train field procurement officers is symptomatic of the general lack of professional development funding for all peacekeeping mission personnel. The UN has not established a career path for professional advancement for Procurement Service and peacekeeping procurement staff. Career paths, if well designed by management, can actively encourage and support staff to undertake progressive training and work experiences in order to prepare them for increased levels of responsibility. A procurement career path also would serve to recognize procurement as a specialized profession in the UN. A November 2005 consultant report recommended that the Secretariat establish and define career paths for Procurement Service management and staff. In field missions, most procurement chiefs told us that the absence of a career path is detrimental to the professional caliber of procurement staff. In addition, 14 of 19 field procurement officials told us that they do not have professional development plans for all of their staff. UN auditors also have expressed concern that UN managers do not give proper care to helping ensure the qualifications and integrity of procurement staff. 
Difficult staffing conditions and practices in both the Procurement Service and peacekeeping missions continue to expose the UN to significant risks. The Procurement Service reported that it lacks the staffing resources to implement recommended controls such as team-based rather than individual procurement, peer review of major bidding, and periodic rotation of staff. Peacekeeping staff said that staff turnover at the Procurement Service has also hurt the continuity of their operations. In the field, the Peacekeeping Department faces challenges in deploying qualified, experienced staff to missions, especially during the critical start-up phase. For example, OIOS noted that a staff member involved in leaking confidential information to a UN supplier had been appointed to a post on a short-term temporary duty assignment and was later competitively selected for the post. The Peacekeeping Department has had difficulties in retaining high-quality procurement staff for sustained periods in peacekeeping missions. For example, a peacekeeping official informed us that about 23 percent of procurement staff positions in peacekeeping missions were vacant as of December 2005. Peacekeeping officials informed us that a chronic lack of field procurement staff has helped to undermine existing control mechanisms. The peacekeeping procurement workforce is adversely affected by considerable staff turnover, especially in peacekeeping missions where UN staff must operate in demanding, unpredictable, and dangerous conditions. The current conditions of service in peacekeeping missions are not competitive with those of other UN entities or similar organizations, according to UN officials. 
For example, peacekeeping officials stated that peacekeeping mission staff are not able to have their families at their duty station and that the benefits and short-term nature of employment contracts for peacekeeping mission staff are disadvantageous compared with those at UN headquarters and other UN funds and programs. In March 2006, the Secretary-General issued proposals to revise the UN's human resource policies to address the unfavorable conditions of service in peacekeeping missions. However, the UN has not provided specific facts regarding the substance and time frames for these revisions, and the revisions would require the action and support of member states in the General Assembly to implement. The UN has failed to adopt the full range of ethics guidance for procurement officials despite repeated directives from the General Assembly in 1998 and 2005. Such guidance would include a declaration of ethics responsibilities for procurement staff and a code of conduct for vendors. Ethical principles are key elements of an internal control environment, and management plays an important role in providing guidance for proper behavior. Because of the fiduciary responsibilities held by procurement staff, the lack of specific ethics guidance increases the risk of fraud, waste, and abuse. In 2005, a former UN procurement officer pleaded guilty to charges of accepting money from vendors seeking contracts. The UN has been considering the development of specific ethics policies for procurement officers for almost a decade.
For example, on September 8, 1998, the General Assembly asked the Secretary-General to prepare additional ethics guidance for procurement officers “as a matter of priority.” In 2002, the Secretary-General acknowledged that separate procurement ethics guidance should be developed “pursuant to the request of the General Assembly.” In 2005, the General Assembly again asked the Secretary-General to issue ethical guidelines for those involved in the procurement process and called for “the early adoption of a code of conduct for vendors and a declaration of ethical responsibilities for all staff involved in the procurement process.” Despite other efforts to bolster ethics policies at the UN, the UN has yet to fully enact new ethics guidance for procurement officials. As we reported in October 2005, the Procurement Service has drafted such guidance. It would include a declaration of ethics responsibilities for procurement staff and embody a code of conduct for vendors. It would also outline current rules and regulations relating to procurement staff and address ethics standards for procurement staff on conflict of interest and acceptance of gifts. In addition, the draft ethics guidance would outline UN rules, regulations, and procedures for suppliers of goods and services to the UN. Since October 2005, the UN has made only limited progress toward addressing the directives of the General Assembly. UN officials stated that the UN has not established ethics guidance for procurement personnel due to resource constraints and the need for extensive consultations within the UN. In November 2005, a consultant review concluded that the Procurement Service lacked an effective and well-coordinated ethics program that makes ethics a fundamental element of the Procurement Service culture, including a single recognized and well-understood code of conduct governing ethics and integrity expectations and requirements for UN procurement staff. 
The UN has subsequently adopted new procedures that outline UN rules, regulations, and procedures for suppliers of goods and services to the UN. The UN also now requires all procurement officers to file financial disclosure statements. However, most of the other draft regulations continue to be reviewed within the UN. The UN has not set firm dates for their adoption. Department of Management officials informed us in April 2006 that rules for governing the conduct of staff engaged in procurement activities, including a declaration of ethical responsibilities, would be promulgated “shortly.” We found weaknesses in key procurement control activities that are intended to provide reasonable assurance that management’s directives are followed. UN procurement control activities include processes for (1) reviewing high-value procurement contracts, (2) considering vendor protests, (3) updating the procurement manual, and (4) maintaining qualified vendor rosters. The persistence of weaknesses in these areas indicates that the UN does not have reasonable assurance that its staff are complying with management policies and directives. We found that although UN procurement has increased sharply in recent years, the size of the Headquarters Committee on Contracts and its support staff remained relatively stable. The committee’s Chairman and members stated that the committee does not have the resources to keep up with its expanding workload. The number of contracts reviewed by the committee has increased by almost 60 percent since 2003. The committee members stated that the committee’s increasing workload was the result of the growth of UN peacekeeping operations, the complexity of many new contracts, and increased scrutiny of proposals in response to recent UN procurement scandals. Committee data from 2005 indicate that the average time taken to report a committee decision is within the time frames allowed by UN regulations. 
However, a senior committee official stated in March 2006 that this time had increased significantly in recent months due to the committee's workload. He indicated that the average time to report a committee decision is now approaching 20 business days instead of 10. In addition, 8 of 19 peacekeeping field procurement officials stated that the committee reviewed their cases in only a moderately timely manner, while 5 stated that the reviews were rarely or never timely. Concerns regarding the committee's structure and workload have led UN auditors to conclude that the committee cannot properly review contract proposals and may thus recommend for approval contracts that are inappropriate or that have not met UN regulations. In a 2006 report, OIOS stated that the committee cannot determine if procurement officials had complied with regulations or if they had been unduly influenced by vendors. OIOS cited a report it had prepared in 2001 that concluded that the committee had not developed the tools needed to provide reasonable assurance that it could thoroughly and objectively evaluate contract proposals and add value to the procurement process. OIOS also reiterated its 2001 recommendation that the UN reduce the committee's caseload and restructure the committee "to allow competent review of the cases." Without an effective contract review process, the UN cannot provide reasonable assurance that high-value contracts are undertaken in accordance with the Financial Regulations and Rules of the UN. The UN's plans to address these issues are unclear. In January 2006, the Under-Secretary-General for Management stated that the committee should be restructured in some form in response to consultant concerns. However, OIOS reported in January 2006 that the Department of Management had been "unresponsive" to its recommendations regarding the committee. 
The committee Chairman stated that the committee’s four existing posts are all in support of the regular budget, and it needs additional support for reviewing contracts under the peacekeeping budget. The Chairman told us that the committee has requested that its support staff be increased from 4 to 7, but does not yet know if this will be approved. The Chairman also stated that raising the $200,000 threshold for contracts that require review by the committee would reduce the committee’s workload. The UN has not established an independent process to consider vendor protests, despite the 1994 recommendation of a high-level panel of international procurement experts that it do so as soon as possible. Such a process would provide reasonable assurance that vendors are treated fairly when bidding and would also help alert senior UN management to situations involving questions about UN compliance. An independent bid protest process is a widely endorsed control mechanism that permits vendors to file complaints with an office or official who is independent of the procurement process. The General Assembly endorsed independent bid protest in 1994 when it recommended for adoption by member states a model procurement law drafted by the UN Commission on International Trade Law. Several nations, including the United States, provide vendors with an independent process to handle complaints. At present, UN vendors cannot protest the handling of their bids to an independent official or office. The Procurement Service directs its vendors to file protests through a complaints ladder process that begins with the Procurement Service chief and moves up to his immediate supervisor. The majority of peacekeeping field procurement officers also stated that vendor protests were not reviewed by an independent body at their mission. Procurement Service and peacekeeping field staff told us that, in their opinion, there is no vendor demand for a more independent process. 
The lack of an independent bid protest process, as endorsed by the General Assembly and used by the United States and other nations, limits the transparency of the UN procurement process by not providing a means for a vendor to protest the outcome of a contract decision to an independent official or office. If handled through an independent process, vendor complaints could alert senior UN officials and UN auditors to the failure of UN procurement staff to comply with stated procedures. As a result of recent findings of impropriety involving the Procurement Service, the United Nations hired a consultant to evaluate the internal controls of its procurement operations. One of the consultant's conclusions was that the UN needs to establish an independent bid protest process for suspected wrongdoing that would include independent third-party evaluation, arbitration, due process, and formal resolution for all reports. The UN has not updated its procurement manual since January 2004 to reflect current UN procurement policy. As a result, UN procurement staff may not be aware of changes to UN procurement procedures that have been adopted over the past 2 years. An organization's control activities include taking action to make staff aware of its policies and procedures. The Procurement Service's procurement manual is intended to guide UN staff in their conduct of procurement actions worldwide. All 19 Peacekeeping Department field procurement officials informed us that they have access to the manual at their mission, and all are somewhat to very familiar with it. The Procurement Service revised the manual in 2004 to address several problems. In 1999, we noted that the manual did not provide detailed discussions of procedures and policies. As revised by the Procurement Service, the manual now has detailed step-by-step instructions on the procurement process for both field and headquarters staff, including illustrative flow charts. 
It also includes more guidance that addresses headquarters and field procurement concerns, such as more specific descriptions of short- and long-term acquisition planning and the evaluation of requests for proposals valued at more than $200,000. In a decentralized organization with geographically dispersed field missions, it is essential that staff have access to the most current procurement policy so that they can be informed of and consistently comply with UN regulations. While 16 of 19 peacekeeping procurement officers acknowledged that the Procurement Service or the Peacekeeping Department provided them with timely updates on the most current procurement policies and procedures to a great or moderate extent, the procurement manual still does not contain the latest information concerning procurement policies and procedures. For example, the manual does not reflect the June 2005 delegation of procurement authority from the Department of Management to the Peacekeeping Department. The new delegation of authority allows peacekeeping field missions to purchase up to $1 million in goods or services in support of their core requirements (e.g., waste disposal services, fresh food, janitorial services, and building materials) without obtaining the prior approval of the Department of Management and the Headquarters Committee on Contracts. When we spoke with peacekeeping procurement officers in the field, one officer stated that his mission staff were not using this new delegation of authority because they were not sure whether it was official UN policy. Another peacekeeping procurement officer was unclear on some of the details for implementation of the new authority. Also missing from the procurement manual is a section regarding procurement for construction. In June 2005, a UN consultant recommended that the UN develop separate guidelines in the manual for the planning and execution of construction projects. 
These guidelines could be useful in planning the UN’s future renovation of its headquarters building. A Procurement Service official who took part in the manual’s 2004 revision stated that the Procurement Service had been unable to allocate the time and resources needed to update the manual since that time. In addition, in January 2006, OIOS found that changes in procurement policies were not always effectively disseminated, leading to possible inconsistency in the awarding of contracts. A UN consultant also has recommended that the Department of Management address the lack of a standard process for updating the procurement manual. Department of Management staff informed us that they plan to follow these recommendations. The UN does not consistently implement its process for helping to ensure that it is conducting business with qualified vendors. As a result, the UN may be vulnerable to favoring certain vendors or dealing with unqualified vendors. Approved vendor rosters are an important procurement control activity for limiting procurement to vendors that meet stated standards. Vendors apply for registration to the list and are then evaluated by the agency before placement on the approved list. Effective rosters are impartial and provide sufficient control over vendor qualifications. The UN has long had difficulties in maintaining effective rosters of qualified vendors. In 1994, a high-level group of international procurement experts concluded that the UN’s vendor roster was outdated, inaccurate, and inconsistent across all locations. It warned that inconsistent and nonstandardized rosters could expose the procurement system to abuse and fraud. In 2003, an OIOS report found that the Procurement Service’s roster contained questionable vendors. For example, one vendor appeared to have been awarded approximately $36 million in contracts without completing its registration to the list. 
Another vendor appeared to not have applied for registration to the list but was nonetheless awarded a $16.6 million contract. Although the Procurement Service corrected some of the noted weaknesses, OIOS concluded that, as of 2005, the roster was not fully reliable as a tool for identifying qualified vendors that could bid on contracts. To address such concerns, the Procurement Service became a partner in an interagency procurement vendor roster in 2004, which was intended to serve as a single roster for potential suppliers to apply for registration with participating organizations within the UN system and to be used by peacekeeping field missions. It was seen as a common system that would allow sharing of vendor performance information within the UN. The UN’s use of the interagency procurement vendor roster, however, has not fully addressed concerns regarding UN vendor lists. OIOS found that many vendors that have applied through the interagency procurement vendor roster have not submitted additional documents requested by the Procurement Service to become accredited vendors. In addition to the audited financial statements and income tax returns that a vendor must submit to be registered in the UN interagency vendor roster, the Procurement Service also requires the vendors to submit a copy of a certificate of incorporation, copies of quality certification standards for the goods and services to be registered, and letters of reference from at least three clients to which the vendor has provided its goods or services over the past 12 months. OIOS found that, as of December 2005, about 2,800 vendor applicants had not submitted the necessary documents for the Procurement Service to review. OIOS expressed concern that the Procurement Service vendor registration procedure remains vulnerable to procurement officials who might manipulate it to favor certain vendors. 
In addition, most Peacekeeping Department field procurement officials we spoke with stated that they prefer to use their own locally developed rosters instead of the interagency vendor roster because the local lists were shorter and contained more specific information regarding the goods and services available in the area near their field missions. Moreover, some field mission procurement staff stated that they were unable to comply with Procurement Service regulations for their vendor rosters due to the lack of reliable vendor information in underdeveloped countries in which there were conflicts, as is common in peacekeeping missions. One procurement officer in the field stated that many of the local vendors could not prepare financial statements required by the Procurement Service and the interagency database because the country was just starting to recover from a war. Other field procurement officers noted that issues such as low levels of education and literacy and smaller vendors not being in a position to deliver internationally prevented vendors from registering on the interagency vendor list. OIOS reported in 2006 that peacekeeping operations were vulnerable to substantial abuse in procurement because of inadequate or irregular registration of vendors, insufficient control over vendor qualifications, and dependence on a limited number of vendors. The UN lacks a comprehensive risk assessment framework for identifying, evaluating, and managing the procurement activities most vulnerable to fraud, waste, and abuse. UN officials and auditors have identified significant risks associated with headquarters and peacekeeping procurement operations. However, without a comprehensive risk assessment framework, the United Nations cannot have reasonable assurance that it allocates proper attention to procurement activities that could be most prone to fraud, waste, and abuse. 
OIOS has repeatedly identified procurement as one of the areas in the UN with significant potential for inefficiency, corruption, fraud, waste, and abuse. For example, OIOS issued recommendations relating to collusion and conflicts of interest between UN staff and vendors, accountability for theft of UN property, and misuse of UN equipment. In addition, senior UN officials identified several major risks impacting peacekeeping procurement. First, they stated that field procurement staff currently operate under regulations that do not always reflect differences inherent in operating in field locations. According to Peacekeeping Department officials, present UN procurement rules and processes are difficult to apply in peacekeeping missions during start-up. Second, field procurement staff work in demanding operating environments, especially during the first few months of a mission’s start-up, and must expediently resolve issues such as uncertainties regarding the free movement of goods, customs clearances, and taxation. Finally, the Peacekeeping Department stated that its missions do not have enough field procurement staff willing to serve under current conditions of service. An environment laden with such considerable risks demonstrates the need for management to establish a comprehensive framework for continuously identifying, analyzing, and managing risks in procurement operations. While Peacekeeping Department officials informed us that they have developed strategies to mitigate certain risks, they acknowledged that the UN does not have a formal risk management framework. Procurement Service officials stated that they do not systematically assess procurement risks. In addition, 13 out of 19 peacekeeping field procurement chiefs stated that they do not prepare formal or informal risk assessments for their missions. 
A November 2005 consultant report found that the Procurement Service does not have a formal and regular risk management process to assess compliance, evaluate risks, implement controls, and bring serious issues to the attention of management. Similarly, OIOS reported in January 2006 that UN management had failed to establish and implement a formal procurement risk management strategy. An OIOS official also identified several risk areas in the UN’s procurement operations relating to individual span of control and supervision, segregation of functions, access to bids, and the use of brokers and subcontractors. Without a risk management strategy for procurement, the UN would not be fully equipped to identify areas that would require stronger oversight. For example, start-up peacekeeping missions constitute a high-risk area because they require a significant volume of procurement activity under demanding time pressures in challenging geographic regions. Similarly, UN efforts to adjust procurement rules to suit field conditions or to reduce the caseload of the Headquarters Committee on Contracts would be hampered by the absence of a risk assessment process. The UN lacks an integrated information management system, which adversely impacts its ability to oversee field and headquarters procurement operations. A sound internal control system requires that relevant, reliable information be recorded and communicated in a timely manner to equip management to carry out their responsibilities. The UN is exposed to risks arising from fragmented procurement and financial systems in both headquarters and field procurement operations. We found that Procurement Service staff had limited access to data on procurement activities conducted locally in field missions. No single procurement database supports both the Procurement Service and peacekeeping missions; they utilize two separate procurement systems. 
As a result, no single UN entity was able to readily provide us with data on the total value of purchase orders and contract awards issued by each peacekeeping mission, including actions procured both locally and by the Procurement Service at headquarters. These issues raise concerns about the ability of headquarters managers to comprehensively oversee procurement by the various field missions. Moreover, the Peacekeeping Department has established only limited reporting requirements for field missions and lacks a full-time manager at headquarters to review the reports of all peacekeeping missions. A November 2005 consultant study also found that the fragmented information systems limit the Procurement Service’s ability to generate comprehensive and timely management reports. Furthermore, the study found that the current information systems did not use fraud detection tools and exception reports. Such reports, if used, would serve to alert management of procurement actions requiring additional monitoring. For example, a report of cumulative awards to vendors would help management identify instances of inadequate competition or bid rigging by vendors. Procurement officials lack a systematic and standardized approach for conducting management oversight of procurement activity. Instead, to monitor procurement activity, they rely heavily on UN oversight entities. Although these entities have identified numerous weaknesses in the procurement process, the UN has not always implemented their recommendations to correct the weaknesses. In many cases, the oversight entities found similar problems in different field missions and issued similar recommendations. Yet, UN management does not review the audit reports systematically to determine whether changes to policy or processes are warranted to address recurring problems. In addition, the UN does not have a mechanism that provides reasonable assurance that corrective actions, once taken, are institutionalized. 
Further, the UN does not have an effective process for consistently holding procurement staff accountable for their actions, and it is too early to determine whether a newly established management performance board will do so. Internal control standards state that while audits and other reviews of controls by internal and external auditors can be useful, the responsibility for establishing mechanisms to monitor and evaluate program activities rests with managers. Ongoing monitoring occurs during normal operations and includes regular management and supervisory activities, such as helping to ensure that staff follow UN procurement policies. As noted earlier, the UN lacks an effective organizational structure for managing procurement at headquarters and in the field. In addition, UN procurement managers do not have a standardized and systematic process to monitor headquarters or field procurement during normal operations as part of their regular management duties. In 2006, OIOS reported that the UN does not have sufficient controls for providing reasonable assurance of compliance with the Financial Regulations and Rules of the UN and that important controls were lacking, existing controls were often bypassed, and senior officials did not take proper care to design controls for overseeing peacekeeping procurement operations. Senior procurement officials at headquarters stated that they rely heavily on the auditors to monitor procurement activity. Also, in 2005 a consultant reported that the Procurement Service cannot adequately monitor procurement activities due to (1) inadequate reporting of procurement activity, and (2) a lack of system support for management reporting and financial transaction information. Field procurement managers also lack a standardized and systematic approach for monitoring procurement activity to provide reasonable assurance that the Financial Regulations and Rules of the UN are followed. 
Although UN chief procurement officers reported taking a variety of actions to determine whether their staff comply with the Financial Regulations and Rules of the UN, there were considerable differences in the steps reported, both in what the officers were doing and in the degree of their involvement. For example, while one chief procurement officer reported that he reviewed all procurement files, another stated that he did not have the time or resources to provide any oversight. Most field procurement officials told us that the local committee on contracts at their mission is generally helpful and timely in reviewing contract proposals. However, UN auditors have found that noncompliance with the UN's Financial Regulations and Rules has resulted in financial losses to the UN. For example, in its audit of peacekeeping operations in 2004, the Board of Auditors found that one contractor was declared bankrupt in May 2002 and could not fulfill contract requirements. A performance bond for $1.4 million should have been provided, as required by the contract, but it could not be located by UN officials. In addition, OIOS found instances where procurement staff split requisitions to come in under review thresholds; requisitioners sometimes procured items directly; vendors were not properly registered; and contracts were awarded without sufficient justification. Also, most UN chief procurement officers reported taking a variety of actions to determine whether vendors comply with the Financial Regulations and Rules of the UN; however, there were considerable differences in the steps the officers were taking and in the degree of their involvement. Most chief procurement officers stated that they inform vendors of the UN regulations that need to be followed and that the regulations were also in the contracts. 
Some officers stated that they had an independent unit for contracts management in their mission that would monitor vendors for contract performance and hold meetings with the vendors to discuss their performance, while others said that they used vendor performance reports to monitor vendors. However, UN auditors have noted that in many instances, procurement staff did not prepare vendor performance reports. For example, in 2005, the Board of Auditors reported that for seven of a sample of nine contracts worth nearly $163 million, no supplier evaluations could be found for $160.7 million of those purchases. Internal controls guidance states that monitoring should provide reasonable assurance that the findings of audits and other reviews are promptly resolved. OIOS and the Board of Auditors conduct regular examinations of procurement. In many cases, the oversight entities have found similar problems and issued similar recommendations in multiple field missions. UN management, however, does not systematically review the audit reports to determine whether changes to policies or processes are warranted to address systemic problems. OIOS conducts smaller-scope audits on specific procurement policies and procedures at one or more missions or offices. Over the past 5 years, OIOS has identified numerous weaknesses in areas such as management accountability, planning and needs assessments, vendor qualifications and rosters, and contract reviews. OIOS’s recommendations-tracking database indicates that the Procurement Service and the Peacekeeping Department have implemented about 88 percent of about 330 procurement recommendations and about 85 percent of approximately 170 critical procurement recommendations. However, a recent study by a UN consultant concluded that while OIOS provides spot audit coverage, OIOS has lacked the resources to provide coverage that would be sufficient to prevent breakdowns in internal controls. 
The Board of Auditors serves as the UN's external auditor and is charged with auditing the UN financial statements biennially, including those of the Procurement Service. The board also audits peacekeeping field operations annually, including field procurement. According to board staff, the board devotes about half of its operations to procurement oversight, primarily peacekeeping oversight. Over the past 4 years, the board identified weaknesses concerning procurement ethics, vendor performance reports, and the absence of a comprehensive internal antifraud plan. According to some board officials, the UN could further benefit from procurement audits of system contracts, the bid-tendering process, and air operations. The board also pointed to the lack of an adequately trained and professional procurement workforce and stated that the UN should create a professional cadre of procurement officers. In addition, the board reported that the UN has not fully implemented key audit recommendations, such as having peacekeeping missions identify training needs for procurement officers and establishing a time frame for implementing ethical guidelines for procurement staff. According to board officials, based on the board's audit of peacekeeping operations for the period ended June 30, 2003, of 69 recommendations, 26 were implemented; 33 were in progress; and 10 had not yet started. The Joint Inspection Unit (JIU) conducts UN-wide evaluations, inspections, and investigations and prepares reports and notes identifying best practices and opportunities for cost savings within the UN system. JIU has called attention to weaknesses in UN procurement training. In 2004, JIU reported that training funds for procurement staff were small relative to procurement volumes and amounted to about 0.01 percent of aggregate procurement expenditures, with the majority of UN entities allocating no resources for that purpose. 
The UN has not fully addressed JIU's call for additional funding for procurement training and certification for procurement officers. However, JIU does not separately track and report on the implementation rate of its procurement recommendations. In many cases, the oversight entities have found similar problems in different field missions and issued similar recommendations. Yet, UN management does not review the audit reports in a methodical manner to determine whether changes to policy or processes are warranted to address systemic problems, nor does it have a mechanism to help ensure that corrective actions, once taken, are institutionalized. According to UN officials, the UN currently does not systematically track all of the information from the oversight entities and does not perform any systematic analysis of what the three oversight entities are reporting to identify systemic weaknesses or lessons learned. The UN had begun exploring the possibility of establishing a high-level follow-up mechanism to help ensure proper and systematic implementation of oversight recommendations and share audit-related information and lessons learned, where appropriate; however, such a mechanism has yet to be implemented. At the request of the General Assembly, in 2005 the Secretary-General called for a high-level mechanism to be established to help ensure proper implementation of all oversight recommendations. As proposed, this oversight committee would (1) provide independent advice to the Secretary-General on all Secretariat activities relating to oversight and investigations, (2) advise the Secretary-General on the response of management to the recommendations made by oversight bodies and on the manner in which the implementation of those recommendations can have the greatest impact, and (3) help ensure systematic implementation of recommendations that have been approved by the General Assembly or accepted by the Secretariat. 
However, the UN has yet to implement such a mechanism, and plans for doing so are uncertain. The UN does not have an effective mechanism in place for holding procurement staff accountable for their actions. Internal control standards state that management should maintain the organization's ethical tone, provide guidance for proper behavior, remove temptations for unethical behavior, and provide discipline when appropriate. OIOS recently reported that the UN's lack of enforcement of accountability and reluctance to investigate the mismanagement of resources, fraud, and abuse of delegated authority has increased the risk of corrupt practices in UN procurement. In addition, peacekeeping officials agree that a lack of enforcement of accountability in some locations has led to a pattern of poor control over procurement practices. Both OIOS and peacekeeping officials agree that there is evidence that some senior managers in the peacekeeping and management departments may not have been reasonably diligent in discharging their duties or helping to ensure that adequate internal controls and procedures are in place to safeguard the organization's assets. In May 2005, the Secretary-General abolished an accountability panel established in 2000 because it was ineffective and replaced it with a management performance board. The abolished panel had been intended to help ensure that the UN addressed findings of its oversight review bodies from a systemic perspective and to reinforce existing accountability mechanisms. It consisted of the Deputy Secretary-General, acting as the chairperson, and four members at the under-secretary-general level, appointed by the Secretary-General. The new management performance board will focus primarily on the performance and accountability of senior managers and advise the Secretary-General, according to a UN official. 
Also chaired by the Deputy Secretary-General, this board is intended to help ensure that the UN addresses serious managerial issues identified by its oversight bodies in a timely manner, monitors senior managers in their exercise of their authority, and reviews UN justice proceedings for management accountability purposes. The board also consists of two members at the under-secretary level and an external expert in public sector management. A UN official stated that the board had met twice as of February 2006. The board has been operational for too short a time to allow a determination of its effectiveness. The United States has recently taken steps to advocate procurement reform at the UN as part of its efforts to advance overall UN management reforms. The U.S. Mission has publicly stated that accountable, cost-effective, and transparent procurement practices at the UN are vital to UN operations and are therefore an ongoing management priority. The Permanent U.S. Representative to the UN has recognized management weaknesses in the procurement area, particularly with regard to the unclear lines of authority and responsibility between the Departments of Management and Peacekeeping. He has indicated that clarifying these lines of authority would need to be part of overall UN management reform. The U.S. Mission also has pressed to have peacekeeping procurement concerns brought to the attention of the Security Council, despite the objections of some other member states. As a result, a senior UN official briefed the Security Council in February 2006 on OIOS’s audit of procurement practices in Security Council-mandated peacekeeping missions. A staffing vacancy at the U.S. Mission to the UN in New York may have limited U.S. influence in encouraging and formulating proposals for procurement reform. The position of ambassador to the UN for management and reform was vacant from February 2005 until late March 2006. 
According to State officials, statements from ambassador-level representatives in UN bodies have more impact and influence than statements from other U.S. representatives. This key vacancy represents a missed opportunity for the United States in the formation of the UN management reform agenda agreed to at the World Summit in 2005. As a result, officials said, the U.S. Mission had limited capacity to influence reform. Long-standing weaknesses in the UN’s internal controls over procurement have left UN procurement funds highly vulnerable to fraud, waste, and abuse. Many of these weaknesses have been known and documented by outside experts and the UN’s own auditors for more than a decade. The UN, however, has not demonstrated the sustained leadership needed to correct these weaknesses. It has instead undertaken piecemeal reforms while failing to clearly establish management accountability for correcting procurement weaknesses. The negative effects of this lack of UN leadership in procurement have been compounded by a peacekeeping program that has more than quadrupled in size since 1999 and may expand even further. 
We recommend that the Secretary of State and the Permanent Representative of the United States to the UN work with other member states to encourage the Secretary-General to take the following eight actions:

- establish clear and effective lines of authority and responsibility between headquarters and the field for UN procurement;
- enhance the professionalism of the UN procurement workforce by establishing a comprehensive procurement training program and a formal career path;
- provide the Headquarters Committee on Contracts with an adequate structure and manageable workload for contract review needs;
- establish an independent bid protest process for UN vendors;
- take action to keep the UN procurement manual complete and updated on a timely basis and complete the ethics guidance;
- develop a consistent process for providing reasonable assurance that the UN is conducting business with only qualified vendors;
- develop a strategic risk assessment process that provides reasonable assurance of systematic and comprehensive examination of headquarters and field procurement; and
- standardize and strengthen monitoring of procurement activities by procurement managers, including actions aimed at helping to ensure that oversight agencies’ recommendations are implemented and that officials are held accountable for their actions.

We also recommend that the Secretary of State report to the Congress annually regarding UN progress in reforming its procurement process, with particular attention to the status of UN progress in addressing the above recommendations. The Department of State provided written comments on a draft of this report that are reproduced in appendix III. The department stated that it welcomed our report and endorsed its recommendations. The UN did not provide us with written comments. State and UN officials provided us with a number of technical suggestions and clarifications that we have addressed, as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested Members of Congress, the Secretary of State, and the U.S. Permanent Representative to the United Nations. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To assess internal controls in the United Nations (UN) procurement process, we used a framework that is widely accepted in the international audit community and has been adopted by leading accountability organizations, including the International Organization of Supreme Audit Institutions, the U.S. Office of Management and Budget, and GAO. Specifically, we assessed five key elements of internal control: (1) the control environment, (2) control activities, (3) risk assessment, (4) information and communications, and (5) monitoring. Our review focused primarily on the Department of Management, the United Nations Procurement Service (which reports to the Department of Management), and the Department of Peacekeeping Operations (Peacekeeping Department), including 19 field missions. To assess the control environment for UN procurement, we reviewed the UN’s procurement manual, organizational structure, financial regulations and rules, and information obtained from the Headquarters Committee on Contracts. 
We also examined training and staffing data, information on ethics guidance, and the UN Office of Internal Oversight Services’ (OIOS) 2006 report on peacekeeping procurement activities. To assess the control activities, risk assessments, and information and communications for UN procurement, we reviewed certain policies, procedures, and mechanisms that the UN has in place to provide reasonable assurance that staff comply with directives. Specifically, we reviewed (1) the mechanism that the UN has established to review procurement contracts; (2) approaches to addressing vendor grievances, updating the procurement manual, and the maintenance of qualifying vendor rosters; and (3) approaches to assessing procurement risk to the UN, as well as the UN’s information management system for procurement. To assess procurement monitoring, we reviewed the structure and mandates of OIOS, the UN Board of Auditors, and the UN Joint Inspection Unit (JIU). We also reviewed their roles in monitoring UN procurement activities. We also reviewed the board’s audited financial statements and reports for Secretariat and peacekeeping operations, which include procurement findings and recommendations; OIOS and JIU reports containing procurement findings and recommendations; annual reports of the monitoring entities covering the status of implementation of their recommendations, as well as the process they have in place to track their recommendations; and resolutions and reports on the UN Secretariat Accountability Panel, the Management Performance Board, and the Oversight Committee. We discussed OIOS’s process for monitoring and reporting on its recommendations with relevant officials. While the evidence these officials have obtained indicates that the data are generally reliable, the officials said they would like to perform more testing of recommendations that have been implemented but lack the resources to do so. 
We obtained the recommendations data set from OIOS and performed basic reliability checks. Based on our interviews and checks, we determined the data were sufficiently reliable for the purposes of this report. In addition, to assess the control environment and other standards, we prepared an interview instrument and conducted a series of structured interviews with the principal procurement officers at each of the Peacekeeping Department’s 19 missions in Africa, Asia, Europe, and the Americas (see fig. 5). We developed the interview questions based on our audit work, discussions with UN officials, and review of questionnaires conducted of headquarters staff. The structured interview instrument included open- and closed-ended questions, and covered various topics, including the background of procurement staff at each mission, such as experience, training, and certification; information on the types and volume of procurement at each mission; and questions regarding integrity, openness, competitiveness, and accountability. We conducted pretests of the instrument to provide reasonable assurance that our questions were clear and would be interpreted correctly. We addressed possible interviewer bias by ensuring that all respondents received copies of the instrument ahead of time and had them available when we conducted our interviews by telephone. To examine U.S. government efforts to support reform of UN procurement, we reviewed position statements and various program documents. We also met with senior Department of State officials in Washington, D.C., and senior officials with the U.S. Permanent Mission to the UN in New York. In addition, we reviewed reports prepared in 2005 by consulting firms hired by the UN to assess its procurement process. In addition, we reviewed reports by the Oil for Food Independent Inquiry Committee, the UN Panel on United Nations Peace Operations, and the UN High-Level Expert Procurement Group. 
We interviewed officials of the Department of Management, the Department of Peacekeeping Operations, the Headquarters Committee on Contracts, the UN Procurement Service, OIOS, and the Board of Auditors in New York. We performed our work from April 2005 through March 2006 in accordance with generally accepted government auditing standards. In addition to the person named above, Phyllis Anderson, Assistant Director; Pierre Toureille; Kristy Kennedy; Clarette Kim; Barbara Shields; and Lynn Cothern made key contributions to this report. Jaime Allentuck, Martin De Alteriis, Bonnie Derby, Timothy DiNapoli, Mark Dowling, Etana Finkler, John Krump, and James Michels also provided technical assistance. United Nations: Funding Arrangements Impede Independence of Internal Auditors. GAO-06-575. Washington, D.C.: April 25, 2006. United Nations: Lessons Learned from Oil for Food Program Indicate the Need to Strengthen UN Internal Controls and Oversight. GAO-06-330. Washington, D.C.: April 25, 2006. Peacekeeping: Cost Comparison of Actual UN and Hypothetical U.S. Operations in Haiti. GAO-06-331. Washington, D.C.: February 21, 2006. United Nations: Preliminary Observations on Internal Oversight and Procurement Practices. GAO-06-226T. Washington, D.C.: October 31, 2005. United Nations: Sustained Oversight Is Needed for Reforms to Achieve Lasting Results. GAO-05-392T. Washington, D.C.: March 2, 2005. United Nations: Oil for Food Program Audits. GAO-05-346T. Washington, D.C.: February 15, 2005. United Nations: Reforms Progressing, but Comprehensive Assessments Needed to Measure Impact. GAO-04-339. Washington, D.C.: February 13, 2004. United Nations: Progress of Procurement Reforms. GAO/NSIAD-99-71. Washington, D.C.: April 15, 1999.
For more than a decade, experts have called on the United Nations (UN) Secretariat to correct serious deficiencies in its procurement process. Recent evidence of corruption and mismanagement in procurement suggests that millions of dollars contributed to the UN by the United States and other member states are at risk of fraud, waste and abuse. During the last decade, UN procurement has more than tripled to more than $1.6 billion in 2005, largely due to expanding UN peacekeeping operations. More than a third of that amount is procured by UN peacekeeping field missions. To review the UN's internal controls over procurement, GAO assessed key control elements, including (1) the overall control environment and (2) specific control activities aimed at providing reasonable assurance that staff are complying with directives. Weak internal controls over UN headquarters and peacekeeping procurement operations expose UN resources to significant risk of waste, fraud, and abuse. The UN's overall control environment for procurement is weakened by the absence of (1) an effective organizational structure, (2) a commitment to a professional workforce, and (3) specific ethics guidance for procurement staff. GAO found that leadership responsibilities for UN procurement are highly diffused. While the UN Department of Management is responsible for UN procurement, field procurement staff are instead supervised by the UN Department of Peacekeeping Operations, which currently lacks the expertise and capacities needed to manage field procurement activities. Also, the UN has not demonstrated a commitment to maintaining a qualified, professional procurement workforce. It has not established training requirements or a procurement career path. In addition, the UN has yet to establish specific ethics guidance for procurement staff in response to long-standing mandates by the UN General Assembly, despite recent findings of unethical behavior. GAO also found weaknesses in key control activities. 
For example, the UN has not addressed workload and resource problems that are impeding the ability of a key committee to review high-value contracts. Also, the UN has yet to establish an independent process to review vendor complaints, despite long-standing recommendations that it do so. In addition, the UN has not updated its procurement manual since 2004. As a result of these and other weaknesses, many millions of dollars in U.S. and other member state contributions could be vulnerable to fraud, waste, and abuse.
When developing its annual budget submission, BOP uses three general steps to estimate costs for its two budget accounts—the Salaries and Expenses account (known as its operational budget) and its Buildings and Facilities account. First, BOP estimates cost increases for maintaining the current level of services for operations as provided in the prior year’s enacted budget. These include costs to address mandatory staff pay raises and benefit increases, inmate medical care, and utilities. BOP primarily analyzes historical obligations from the past five years to identify average annual operating cost increases. BOP also considers economic indicator information to estimate general inflationary cost increases, using data from the Bureau of Labor Statistics Consumer Price Index, among other sources. Second, BOP projects inmate population changes for the budget year and for several years into the future. BOP uses a modeling program that identifies each inmate as a unique record tied to variables such as conviction year, sentence term, and conviction type, with data obtained from a variety of sources, including the Administrative Office of the U.S. Courts, the U.S. Sentencing Commission, and the Executive Office for U.S. Attorneys. The model identifies the number of inmates currently in BOP’s system and the length of those inmates’ sentences, as well as the number of inmates estimated to enter the BOP system and the length of their sentences. For example, for the fiscal year 2010 annual budget submission, BOP projected a net growth in its inmate population of 4,500 inmates. Third, BOP estimates costs to both house the projected number of new inmates, including building and facility requirements, and fund any new initiatives. According to BOP, a rising inmate population is the primary driver of new service costs (see figure 1 for graph showing federal inmate population growth from fiscal years 2000 through 2009). 
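The first step described above — projecting current-services cost growth from historical obligations and an inflation indicator — can be sketched roughly as follows. This is a simplified illustration only, not BOP's actual model; the dollar figures, the simple averaging rule, and the 2 percent CPI assumption are all hypothetical.

```python
# Simplified sketch of a current-services estimate: average the annual
# growth rate observed across five years of historical obligations, then
# apply that rate plus a CPI-based inflation assumption to the prior
# year's enacted level. All figures are hypothetical.

def avg_growth_rate(obligations):
    """Mean year-over-year growth rate across a series of annual obligations."""
    rates = [(b - a) / a for a, b in zip(obligations, obligations[1:])]
    return sum(rates) / len(rates)

def current_services_estimate(prior_year_enacted, obligations, cpi_rate):
    """Prior year's enacted level, grown by historical trend plus inflation."""
    return prior_year_enacted * (1 + avg_growth_rate(obligations) + cpi_rate)

# Five years of hypothetical obligations (in $ millions) and a 2% CPI assumption
history = [4800, 4990, 5180, 5400, 5610]
estimate = current_services_estimate(5610, history, cpi_rate=0.02)
```

In practice an agency's rule for combining trend growth and inflation would be more nuanced (for example, inflating only non-pay categories), but the basic shape — historical average plus an economic-indicator adjustment — matches the process the report describes.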
Thus, for any budget year, BOP uses inmate population projections to determine the necessary bedspace to house additional inmates. BOP estimates these associated incarceration costs by (1) determining how to distribute the incoming prisoners across newly activated facilities, existing facilities, or contract facilities; and (2) calculating staffing and other operational costs to manage the additional inmates at its facilities. BOP also identifies and estimates costs for new initiatives, such as the activation of a new BOP facility, by reviewing the proposals submitted by its divisions and regional offices, as well as historical data on costs for implementing such initiatives. For its Buildings and Facilities account, BOP identifies new program costs associated with new construction and maintenance and repair of existing facilities. Using its long-term inmate population projections, BOP considers new construction proposals based on need, funding, and the anticipated speed of construction. BOP estimates construction costs largely by using analogous building costs for similar security level facilities, as well as considering assumptions, such as the rate of inflation and when potential construction would begin. BOP ranks maintenance and repair proposals by assigning safety the highest priority and estimates costs based on information it obtains from a construction cost estimation company. BOP’s methods for estimating costs in its annual budget requests to DOJ largely reflect the best practices outlined in GAO’s Cost Estimating and Assessment Guide. Specifically, BOP followed a well-defined process for developing a mostly comprehensive, well documented, accurate, and credible cost estimate for fiscal year 2008. For example, BOP used relevant historical cost data and considered adjustments for general inflation when estimating costs for its budget request to DOJ. Moreover, BOP’s methods for projecting inmate population changes have been largely accurate. 
For example, we found BOP’s projections were accurate, on average, to within 1 percent of the actual inmate population growth from fiscal year 1999 through August 20, 2009. We identified two areas where BOP could strengthen its methods for estimating costs in its annual budget submission. First, according to best practices described in GAO’s Cost Estimating and Assessment Guide, it is better for decision makers to know the range of potential costs that surround an estimate and the reasons behind what drives that range rather than just having a point estimate from which to make their decision. An uncertainty analysis provides a range of costs that span a best and worst case spread. While not required by OMB or DOJ in annual budget development guidance, conducting an uncertainty analysis of this kind is a best practice. BOP has not conducted an uncertainty analysis, and therefore has not quantified the level of confidence associated with its cost estimate. By providing the results of such analysis to DOJ, BOP officials could share advance information on the probability and associated risks of operating expenses exceeding enacted funding levels— a situation BOP faced in fiscal year 2008. Second, during our review of documentation for BOP’s fiscal year 2008 cost estimate, we sometimes required the guidance of BOP budget analysts to identify backup support. This was because the documentation BOP provided was insufficient to allow someone unfamiliar with the budget to locate detailed corroborating data. For example, in reviewing BOP’s fiscal year 2008 cost estimate for a health service initiative related to expanding kidney dialysis treatment for inmates, we required a budget official’s assistance in locating supporting formulas used to calculate the estimate. Best practices for cost estimation include providing enough detail so that the documentation serves as an audit trail that allows for clear tracking of cost estimates over time. 
By documenting all steps for developing its budget cost estimate, BOP would be better positioned to recreate its estimates in the event of attrition within its budget office among those who developed initial budget cost estimates. In providing feedback on our initial findings, BOP budget officials indicated that taking these steps would strengthen their methods for estimating costs in their annual budget submission to DOJ. BOP’s costs for key operations to maintain basic services, such as those for inmate medical care and utilities, exceeded the funding levels requested in the President’s budget from fiscal years 2004 through 2008, limiting BOP’s ability to manage its growing inmate population. During this period, BOP’s annual non-salary inmate medical care and utilities costs exceeded funding levels in the President’s budget request by a total of about $131 million and $55 million, respectively, largely due to inflation and inmate population growth. According to BOP, from fiscal years 2004 through 2008, BOP’s annual non-salary inmate medical care costs increased by a total of about $146.5 million. In contrast, during this period, the President’s budget requested funding increases for non-salary inmate medical care totaling approximately $15.4 million. According to BOP, from fiscal years 2004 through 2008, BOP’s annual utilities costs increased by a total of $87 million. In contrast, during this period, the President’s budget requested funding increases for utilities totaling approximately $31.6 million. Table 1 compares BOP’s rates of annual cost growth due to inflation and inmate population growth with the President’s budget requests for funding for non-salary inmate medical care and utilities from fiscal years 2004 through 2008. When BOP has not received funding to cover the operational cost increases it has incurred, in some years it has used Salaries and Expenses funding planned for other areas to cover these costs. 
For example, one of BOP’s highest priorities is to increase staffing levels of corrections officers. However, BOP officials reported using Salaries and Expenses account funds initially planned for hiring additional corrections officers in fiscal years 2008 and 2009 to instead cover base operations cost increases related to inmate medical care, utilities, and personnel salary and benefit adjustments that were unfunded in the President’s budget requests. As with any other DOJ component, BOP’s budget requests are governed by DOJ and OMB budget development guidance. For example, DOJ budget development guidance for fiscal years 2008 and 2009 required components to limit cost growth for current services to no more than 4 percent greater than prior year levels. DOJ reported that this guidance was a general instruction given to all components, but recognized that BOP is different because its costs are less discretionary since BOP does not control the number of inmates for which it must care. In this way, DOJ reported that it did not automatically reject budget submissions from BOP that exceeded the cap, but instead required BOP to submit substantive information to justify need. DOJ also reported that OMB does not automatically provide funds for inflationary cost increases. DOJ cited OMB policy stating that inflationary adjustments for discretionary costs (such as utilities) can include some, all, or no allowance for inflation. DOJ officials reported that OMB typically does not include general inflationary adjustments that DOJ submits on behalf of BOP. Nonetheless, DOJ has reported to OMB that other DOJ components could reduce operations, implement across-the-board hiring freezes, and implement policy changes that would reduce costs if faced with funding shortfalls similar to what BOP has faced in its operations budget. 
However, DOJ reported that BOP has already implemented significant reductions to programs and streamlined and centralized administrative functions to eliminate 2,300 positions. DOJ also reported that BOP has limited flexibility because almost all of BOP’s operational costs are devoted to staff salaries and provision of services. According to BOP data, in fiscal years 2007 and 2008, 99.5 percent of BOP’s Salaries and Expenses budget was fixed for its operations for paying staff salaries and providing services to house and care for the inmate population. In each of the last 2 fiscal years, BOP has needed additional funding to meet its operating costs for managing its growing inmate population. However, we found that BOP’s cost estimation methods largely reflect GAO’s cost estimating best practices. Furthermore, BOP officials reported, and DOJ officials acknowledged, that BOP has already implemented significant reductions in operations costs, such as by eliminating positions and centralizing administrative functions. Given BOP’s unique responsibility for managing this population, and its limited discretion when costs for key operations exceed funding levels, it is especially important for BOP to develop accurate cost estimates and clearly convey to decision makers the potential risk of costs exceeding funding levels. In light of these circumstances, BOP’s budget cost estimation practices could be strengthened in two ways. First, although BOP is not required to report in its annual budget submission the extent to which actual costs may be expected to vary from cost estimates, we have identified the provision of an uncertainty analysis as a best practice. If BOP identified its level of cost estimation confidence and provided this information to DOJ, DOJ could more fully understand the range of potential costs—and the potential need for more funding—if estimating assumptions for key cost drivers, such as inmate population growth, do not hold true. 
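An uncertainty analysis of the kind GAO's guide describes can be illustrated with a simple Monte Carlo simulation: vary the key cost-driver assumptions across plausible ranges and report the resulting spread of total costs rather than a single point estimate. The sketch below is only an illustration of the technique, not BOP's method; the population figure, per-inmate cost, and assumption ranges are invented.

```python
import random

def simulate_cost_range(base_population, per_inmate_cost, trials=10_000, seed=1):
    """Monte Carlo sketch of a cost uncertainty analysis: draw population
    growth and cost-inflation assumptions from plausible ranges, collect
    the distribution of total operating costs, and report the spread."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        growth = rng.uniform(0.01, 0.04)       # 1-4% population growth
        inflation = rng.uniform(0.015, 0.05)   # 1.5-5% cost inflation
        population = base_population * (1 + growth)
        outcomes.append(population * per_inmate_cost * (1 + inflation))
    outcomes.sort()
    return {
        "low_5pct": outcomes[int(0.05 * trials)],    # 5th percentile
        "point": outcomes[trials // 2],              # median estimate
        "high_95pct": outcomes[int(0.95 * trials)],  # 95th percentile
    }

# Hypothetical figures: 205,000 inmates at $25,000 per inmate per year
spread = simulate_cost_range(205_000, 25_000)
```

Reporting the 5th-to-95th percentile spread alongside the point estimate is what would let a decision maker see the probability and magnitude of costs exceeding an enacted funding level.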
Second, by improving documentation of all steps for developing its cost estimate, BOP would be better positioned to re-create its estimates in the event of attrition within its budget office among those who developed initial cost estimates. To improve transparency in BOP’s cost estimation process, as well as DOJ’s annual budget formulation and justification process, and to provide DOJ with more detailed information to consider when deliberating its budget proposal for BOP, we recommend that the Attorney General take the following two actions:

1. instruct the BOP Director to require the BOP budget staff to conduct an uncertainty analysis quantifying the extent to which operations costs could vary due to changes in key cost assumptions, and submit the results along with budget documentation to DOJ so that DOJ can be aware of the range of likely costs and BOP’s associated confidence levels; and

2. instruct the BOP Director to require the BOP budget staff to improve documentation of the calculations used to estimate its costs.

We provided a draft of this report to DOJ for its review and comment. The BOP Director provided written comments on this draft and concurred with our findings and recommendations. BOP stated that including the results of an uncertainty analysis in the budget document would provide DOJ, OMB, and Congress better context for decision making and stated that it would include such analysis in preparation of its 2012 budget submission. BOP also stated that if time permits, it would work with DOJ and OMB to incorporate an uncertainty analysis into the President’s 2011 budget. BOP’s comments are reproduced in appendix II. We are sending copies of this report to the Attorney General and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9627 or by e-mail at [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To assess BOP’s methods for estimating costs for its annual budget submission, we compared the documents BOP provided supporting its budget estimates for fiscal years 2004 to 2008 with information contained in the President's budget request for BOP for those years to determine the consistency of the information. We also interviewed agency officials knowledgeable about controls in place to maintain the integrity of (1) inmate population and sentenced offender data BOP used to populate its inmate population projection model for fiscal years 1999 to 2009 and (2) data on annual operations costs BOP reported between 2004 and 2008, including inmate medical care and utilities. As a result, we determined that the data were sufficiently reliable for the purposes of this report. 1. instruct the BOP Director to require BOP budget staff to conduct an uncertainty analysis quantifying the extent to which operational costs could vary due to changes in key cost assumptions—and submit the results, along with budget documentation, to DOJ so that DOJ can be aware of the range of possible costs and BOP's confidence levels associated with each point along the range; 2. instruct the BOP Director to require the budget staff to improve documentation of calculations used to estimate its costs. DOJ and BOP generally agreed with our findings and provided technical comments, which we integrated into our findings as appropriate. The S&E account—known as BOP’s operations budget—includes sub-accounts covering costs for staffing; medical care; food; and utilities, such as water and gas. In fiscal year 2009, staffing costs for employee salaries comprised about 60 percent of this account. 
The B&F account has sub-accounts covering costs for design and construction of new facilities and modernization and repair (M&R) of existing facilities. S&E expenses have accounted for the vast majority of BOP’s annual enacted budget from fiscal years 1999 through 2008—averaging about 90 percent. The President’s fiscal year 2010 budget request for BOP’s S&E and B&F accounts totals $6.1 billion, which is 23 percent of DOJ’s $26.7 billion budget. Figure 2 compares the President’s request for BOP to its enacted funding levels from fiscal year 1999 through 2009, and figure 3 shows the composition of the President’s request (S&E versus B&F) for BOP over the same period. BOP research has found that an increase in inmate-to-staff ratios leads to an increase in serious violence among inmates (Department of Justice, Federal Bureau of Prisons, The Effects of Crowding and Staffing Levels in Federal Prisons on Inmate Violence Rates, Washington, D.C., 2005). As of August 1, 2009, BOP reported being staffed at 34,829—about 88 percent of its authorized Full Time Equivalent (FTE) staffing level of 39,692. About half of its authorized positions are for corrections officers. Figure 4 compares BOP S&E staffing levels to its inmate population beginning in fiscal year 2000. These costs include adjustments to address mandatory staff pay raises and benefit increases, inmate medical care, and utilities across BOP’s 115 facilities. BOP primarily analyzes historical obligations from the last 5 years to identify average annual operating cost increases. BOP also considers economic indicator information to estimate general inflationary cost increases, using data from the Bureau of Labor Statistics Consumer Price Index and other sources. BOP uses a modeling program that identifies each inmate as a unique record tied to variables such as conviction year, sentence term, and conviction type. BOP’s model uses data and information from a variety of sources, including the Administrative Office of the U.S. Courts, the U.S. 
Sentencing Commission, and the Executive Office for U.S. Attorneys to identify

- the number of inmates currently in prison and the length of their sentences, and
- the number of inmates estimated to enter prison and the length of their sentences.

For the fiscal year 2010 annual budget submission, BOP projected a net growth of 4,500 inmates. According to BOP, a rising inmate population is the primary driver of new service costs. For the budget year, BOP uses inmate projections to determine necessary bedspace to house additional inmates. For future years, BOP uses inmate projections to plan for long-term capacity needs, including new construction and arrangements for contract confinement. BOP estimates the associated incarceration costs by (1) determining how to distribute incoming prisoners across newly activated facilities, existing facilities, or privately operated facilities, and (2) calculating staff and other operational costs at each facility type.

BOP identifies and estimates costs for new initiatives/program increases, such as activation of a new BOP prison facility, by reviewing the proposals submitted by its divisions and regional offices, and historical data. BOP prioritizes proposals based on need, funding, and the anticipated speed of construction. BOP estimates construction costs (for new prisons or expansions to existing facilities) largely by using analogous building costs for similar security level facilities. Cost estimates are also based on assumptions, including the rate of inflation and when construction will begin. M&R project proposals are ranked by assigning safety the highest priority, with lesser importance given to improving accessibility and updating facilities more than 50 years old. BOP estimates costs for replacement values through information it obtains from a construction cost estimation company.
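The record-based projection approach described above can be illustrated with a simplified sketch. Everything below is a hypothetical simplification: BOP's actual model incorporates many more variables and draws on external sentencing data sources; this only shows the mechanics of counting which per-inmate records are still active in a target year.

```python
from dataclasses import dataclass

@dataclass
class InmateRecord:
    conviction_year: int   # year the sentence began
    sentence_years: float  # sentence term in years

def project_population(current, projected_intakes, target_year):
    """Count records whose sentences are still being served in target_year."""
    records = current + projected_intakes
    return sum(
        1 for r in records
        if r.conviction_year <= target_year < r.conviction_year + r.sentence_years
    )

# Hypothetical records: two inmates already in custody, one projected admission.
current = [InmateRecord(2005, 10), InmateRecord(2008, 2)]
intakes = [InmateRecord(2010, 5)]
print(project_population(current, intakes, 2010))  # prints 2: the 2-year sentence has ended
```

In a real projection, the intake records would themselves be estimated from court and sentencing data, and the resulting head count would drive the bedspace and facility-cost calculations described above.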
Since fiscal year 2005, OMB has placed a moratorium on new BOP prison construction because OMB has focused on contracting with private prisons to address bedspace needs. However, BOP has identified new construction plans and included proposals for new construction as part of its capacity plan.

Overall, BOP met one and substantially met three of these four best practice characteristics. The following explains the definitions we used in assessing BOP's methods for estimating costs in its annual budget submission to DOJ:

- Met – BOP provided complete evidence that satisfies the entire criterion;
- Substantially Met – BOP provided evidence that satisfies a large portion of the criterion;
- Partially Met – BOP provided evidence that satisfies about half of the criterion;
- Minimally Met – BOP provided evidence that satisfies a small portion of the criterion;
- Not Met – BOP provided no evidence that satisfies any of the criterion.

DOJ officials reported being satisfied with BOP's cost estimation methods, noting that they could not identify any area needing improvement.

To be credible, the cost estimates should discuss any limitations in the analysis performed due to uncertainty surrounding data or assumptions. Further, the estimates' derivation should provide for varying any major assumptions and recalculating outcomes based on sensitivity analyses, and their associated risks/uncertainty should be disclosed. Also, the estimates should be verified based on cross-checks using other estimating methods and by comparing the results with independent cost estimates. To be comprehensive, the cost estimates should include both government and contractor costs over the program's full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance to retirement. They should also provide an appropriate level of detail to ensure that cost elements are neither omitted nor double counted and include documentation of all cost-influencing ground rules and assumptions.
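The sensitivity-analysis element of the credibility criterion (varying major assumptions one at a time and recalculating outcomes) can be sketched as follows. The toy cost model and every number here are hypothetical stand-ins, not BOP figures; the point is only the mechanic of perturbing one assumption while holding the others fixed.

```python
def estimate_annual_cost(inmates, cost_per_inmate, inflation):
    """Toy model: next-year cost is per-inmate cost grown by one year of inflation."""
    return inmates * cost_per_inmate * (1 + inflation)

# Hypothetical baseline assumptions.
baseline = {"inmates": 210_000, "cost_per_inmate": 25_000.0, "inflation": 0.03}
print(f"baseline: ${estimate_annual_cost(**baseline) / 1e9:.2f}B")

# Vary each major assumption by +/-10 percent, one at a time, and recalculate.
for name in baseline:
    for factor in (0.9, 1.1):
        varied = dict(baseline, **{name: baseline[name] * factor})
        print(f"{name} x{factor}: ${estimate_annual_cost(**varied) / 1e9:.2f}B")
```

The spread of recalculated outcomes shows which assumption the estimate is most sensitive to, which is the disclosure the credibility criterion asks for.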
Non-salary inmate medical care costs refer to the amount BOP spent on pharmaceuticals, medical supplies, and outside medical care (community hospital services, a portion of guard escort services, and a portion of salaries (overtime)).

We reviewed BOP's supporting documentation and found budget officials had documented the formulas they used to calculate cost elements for new initiatives, such as activation-related costs for a new prison facility planned to open in the budget year. In some cases, however, we required the guidance of BOP budget analysts to identify backup support because the documentation was insufficient to allow someone unfamiliar with the budget to locate detailed corroborating data. For example, in reviewing BOP's fiscal year 2008 cost estimate for a health service initiative related to expanding kidney dialysis, we required a budget official's assistance in locating supporting formulas used to calculate the estimate. Best practices include providing enough detail so that the documentation serves as an audit trail to allow for clear tracking of cost estimates over time. Documenting all steps for developing its cost estimate would better position BOP to recreate its estimates in the event of attrition within its budget office among those who have developed initial cost estimates.

BOP performed cross-checks by benchmarking new estimates against historical data, such as by estimating medical care costs based on cost obligations in recent years, and developed numerous risk analyses and impact scenarios of funding cuts. However, although not required to do so by OMB or DOJ annual budget development guidelines, BOP did not perform an uncertainty analysis consistent with best practices to quantify the risk associated with changes to various assumptions that drive its cost estimates. Major assumptions include the inmate population projection; inflation indices for medical care and utilities; and annual salary increases.
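An uncertainty analysis of the kind best practices call for can be sketched with a simple Monte Carlo simulation over the major assumptions just listed. Everything below is illustrative: the distributions, growth rates, marginal cost, and requested funding level are hypothetical stand-ins (only the fiscal year 2008 medical and utilities base amounts come from this report).

```python
import random

random.seed(0)  # reproducible draws

def one_draw():
    """One Monte Carlo draw of next-year operations cost (distributions are hypothetical)."""
    pop_growth = random.gauss(4500, 1500)    # net new inmates
    medical_rate = random.gauss(0.11, 0.03)  # medical cost growth rate
    utility_rate = random.gauss(0.08, 0.04)  # utilities cost growth rate
    base_medical, base_utilities = 430.5e6, 234.0e6  # FY2008 levels from this report
    marginal_cost_per_inmate = 25_000.0              # hypothetical
    return (base_medical * (1 + medical_rate)
            + base_utilities * (1 + utility_rate)
            + pop_growth * marginal_cost_per_inmate)

draws = sorted(one_draw() for _ in range(10_000))
requested = 800.0e6  # hypothetical requested funding level
exceed_prob = sum(d > requested for d in draws) / len(draws)
print(f"median cost: ${draws[len(draws) // 2] / 1e6:.0f}M")
print(f"probability cost exceeds request: {exceed_prob:.0%}")
```

Reporting the resulting distribution, rather than a single point estimate, is what lets a reviewer attach a confidence level to any given funding request.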
Such an analysis would help provide DOJ, Congress, and other stakeholders with information to determine the probability that costs for key operations, such as inmate medical care and utilities, may exceed funding levels requested in the President's budget.

Consistent with best practices, BOP detailed pertinent costs related to its S&E and B&F accounts across sub-accounts. This level of detail helped ensure that no cost elements were omitted or double counted in its budget request submission to DOJ, and that BOP's calculations and results substantially met characteristics for comprehensiveness. BOP relied on ground rules and assumptions, such as using inmate population projections to drive cost estimates for capacity needs and using historical obligation trends to estimate growth for utilities and inmate medical care costs. However, as noted earlier, BOP did not determine risk distributions for all assumptions, which would enable it to perform an uncertainty analysis for key cost elements.

To what extent have BOP's costs for key operations exceeded requested funding levels in the President's budget in the last five fiscal years, and how has this affected BOP's ability to manage its growing inmate population? Costs for key operations to maintain basic services, such as those for inmate medical care and utilities, have exceeded the funding levels requested in the President's budget over the past five fiscal years, and this has limited BOP's ability to manage its growing inmate population. From fiscal years 2004 through 2008, the funding levels requested in the President's budget for BOP have been insufficient to cover annual cost growth for maintaining existing services, including inmate medical care and utilities. Moreover, population adjustment funding, necessary to cover expenses associated with housing a growing inmate population in BOP-operated facilities, has not consistently been included in the President's budget, with BOP receiving no funding adjustments in some years.
As a result, BOP has faced funding gaps in its operations account that have left it with limited flexibility to manage its continually growing inmate population.

Medical care costs: From fiscal year 2004 through 2008, BOP's annual non-salary inmate medical care costs increased by about $146.5 million. In contrast, during this period, the President's budget requested funding cost adjustments for non-salary inmate medical care totaling about $15.4 million. In fiscal year 2008, non-salary inmate medical care and utilities costs were $430.5 million and $234 million, respectively.

Utilities costs: From fiscal year 2004 through 2008, BOP's annual utilities costs increased by a total of $87 million. In contrast, during this period, the President's budget requested funding cost adjustments for utilities costs totaling about $31.6 million.

Table 3 compares the rates of BOP's average annual cost growth for non-salary inmate medical care and utilities to average rates of annual funding adjustments requested in the President's budget, from fiscal year 2004 through 2008. This cost growth has exceeded limits in standard DOJ and OMB budget development guidance. For example, DOJ budget development guidance for fiscal years 2008 and 2009 instructed components to limit cost growth for current services to no more than 4 percent greater than prior year levels. DOJ reported that the 4 percent cap guidance is a general instruction given to all components but recognizes that BOP is different because its costs are less discretionary. Furthermore, DOJ reported that it did not automatically reject budget submissions from components that exceeded the cap, but instead required components to submit substantive information to justify need. However, we found that BOP's cost estimation methods either met or substantially met GAO's cost estimating best practices.
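The medical-care figures above imply an average annual growth rate that can be reproduced with a quick calculation. This is a back-of-the-envelope sketch using only the dollar amounts reported above; Table 3's rates are the authoritative comparison.

```python
fy2008_medical = 430.5e6       # FY2008 non-salary inmate medical costs
growth_2004_to_2008 = 146.5e6  # reported increase over the period
fy2004_medical = fy2008_medical - growth_2004_to_2008  # implied FY2004 level (~$284M)

# Compound average annual growth rate over the four year-to-year steps.
cagr = (fy2008_medical / fy2004_medical) ** (1 / 4) - 1
print(f"average annual cost growth: {cagr:.1%}")  # roughly 11% per year

# Requested adjustments over the same period totaled about $15.4M.
requested_adjustments = 15.4e6
shortfall = growth_2004_to_2008 - requested_adjustments
print(f"cost growth not covered by requested adjustments: ${shortfall / 1e6:.1f}M")
```

The $131.1 million shortfall this yields matches the roughly $131 million gap for non-salary medical care cited elsewhere in this report.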
Further, BOP officials report, and DOJ officials acknowledged, that BOP has already implemented significant reductions in programs by eliminating positions and centralizing administrative functions. In addition, the current level of overcrowding within BOP facilities presents an already serious safety challenge. Given BOP's unique responsibility for managing this population, it has limited discretion when costs for key operations exceed funding levels.

By documenting all steps for developing its cost estimate, BOP would be better positioned to recreate its estimates in the event of attrition within its budget office among those who have developed initial cost estimates.

To strengthen BOP's annual budget formulation and justification process, and to provide DOJ with more detailed information to consider when deliberating its budget proposal for BOP, we recommend that the Attorney General of the United States take the following two actions:

1. instruct the BOP Director to require the BOP budget staff to conduct an uncertainty analysis quantifying the extent to which operations costs could vary due to key cost assumptions changing and submit the results along with budget documentation to DOJ so that DOJ could be aware of the range of likely costs and associated confidence levels; and

2. instruct the BOP Director to require the BOP budget staff to improve documentation of calculations used to estimate its costs.

DOJ and BOP generally agreed with the findings. DOJ and BOP will formally review our recommendations when we submit our final product in fall 2009. We made several requests to meet with OMB, but we were unable to schedule a meeting during our review.

In addition to the contact named above, Joy Gambino, Assistant Director, and Jay Berman, Analyst-in-Charge, managed this assignment. Pedro Almoguera, Tisha Derricotte, Geoffrey Hamilton, Marvin McGill, Karen Richey, Adam Vogt, and Melissa Wolf made key contributions to this report.
The Department of Justice's (DOJ) Federal Bureau of Prisons (BOP) is responsible for the custody and care of about 209,000 federal inmates--a population that has grown by 44 percent over the last decade. In fiscal years 2008 and 2009, the President requested additional funding for BOP because costs for key operations were at risk of exceeding appropriated funding levels. The Government Accountability Office (GAO) was congressionally directed to examine (1) how BOP estimates costs when developing its annual budget request to DOJ; (2) the extent to which BOP's methods for estimating costs follow established best practices; and (3) the extent to which BOP's costs for key operations exceeded requested funding levels identified in the President's budget in recent years, and how this has affected BOP's ability to manage its growing inmate population. In conducting this work, GAO analyzed BOP budget documents, interviewed BOP and DOJ officials, and compared BOP's cost estimation documentation to criteria in GAO's Cost Estimating and Assessment Guide.

BOP uses three general steps to estimate costs for its annual budget submission: (1) estimating cost increases to maintain service levels, such as inmate medical care and utilities; (2) projecting inmate population changes for the budget year and for several years into the future using a modeling program that incorporates data on the current inmate population and estimated incoming population and associated sentences; and (3) estimating costs to both provide additional capacity to house projected inmate population growth and implement new programs, such as activating new prisons. BOP's methods for cost estimation largely reflect best practices outlined in GAO's Cost Estimating and Assessment Guide. BOP followed a well-defined process for developing a mostly comprehensive, well documented, accurate, and credible cost estimate for fiscal year 2008.
For example, BOP used relevant historical cost data and considered adjustments for general inflation when estimating costs for its budget request to DOJ. Moreover, BOP's methods for projecting inmate population changes were accurate, on average, to within 1 percent of the actual inmate population growth from fiscal year 1999 to August 2009. Still, BOP could strengthen its methods in two ways. First, BOP has not quantified the level of confidence associated with its cost estimate. While not required by the Office of Management and Budget or DOJ, conducting an uncertainty analysis of this kind is a best practice. By providing the results of such analysis to DOJ, BOP officials could share advance information on the probability and associated risks of operating expenses exceeding enacted funding levels. Second, during our review of documentation for BOP's fiscal year 2008 cost estimate, in some cases we required the guidance of BOP budget analysts to identify backup support because the documentation was insufficient to allow someone unfamiliar with the budget to locate detailed corroborating data. By documenting all steps, BOP would be better positioned to recreate its budget cost estimates in the event of attrition among those who initially developed them. According to BOP, from fiscal years 2004 through 2008, costs for non-salary inmate medical care and utilities exceeded funding levels in the President's budget request by about $131 million and $55 million, respectively. As a result, BOP has faced funding gaps in its operations account that have left it with limited flexibility to manage its continually growing inmate population.
Since its founding in 1718, the city of New Orleans and its surrounding areas have been subject to numerous floods from the Mississippi River and hurricanes. The greater New Orleans metropolitan area, composed of Orleans, Jefferson, St. Charles, St. Bernard, and St. Tammany parishes, sits in the tidal lowlands of Lake Pontchartrain and is bordered generally on its southern side by the Mississippi River. Lake Pontchartrain is a tidal basin about 640 square miles in area that connects with the Gulf of Mexico through Lake Borgne and the Mississippi Sound. While the area has historically experienced many river floods, a series of levees and other flood control structures built over the years were expected to greatly reduce that threat. The greatest natural threat posed to the New Orleans area continues to be from hurricane-induced storm surges, waves, and rainfalls. Several hurricanes have struck the area over the years including Hurricane Betsy in 1965, Hurricane Camille in 1969, and Hurricane Lili in 2002. The hurricane surge that can inundate coastal lowlands is the most destructive characteristic of hurricanes and accounts for most of the lives lost from hurricanes. Hurricane surge heights along the Gulf and Atlantic coasts can range up to 20 feet or more and there is growing concern that continuing losses of coastal wetlands and settlement of land in New Orleans has made the area more vulnerable to such storms. Because of such threats, a series of control structures, concrete floodwalls, and levees, was proposed for the area along Lake Pontchartrain in the 1960s. Congress first authorized construction of the Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project in the Flood Control Act of 1965 to provide hurricane protection to areas around the lake in the parishes of Orleans, Jefferson, St. Bernard, and St. Charles. 
Although federally authorized, it was a joint federal, state, and local effort with the federal government paying 70 percent of the costs and the state and local interests paying 30 percent. The Corps was responsible for project design and construction and local interests were responsible for maintenance of levees and flood controls. The original project design, known as the barrier plan, included a series of levees along the lakefront, concrete floodwalls along the Inner Harbor Navigation Canal, and control structures, including barriers and flood control gates located at the Rigolets and Chef Menteur Pass areas. These structures were intended to prevent storm surges from entering Lake Pontchartrain and overflowing the levees along the lakefront. The original lakefront levees were planned to be from 9.3 feet to 13.5 feet high depending on the topography of the area directly in front of the levees. This project plan was selected over another alternative, known as the high-level plan, which excluded the barriers and flood control gates at the Rigolets and Chef Menteur Pass complexes and instead employed higher levees ranging from 16 feet to 18.5 feet high along the lakefront to prevent storm surges from inundating the protected areas. In the 1960s, the barrier plan was favored because it was believed to be less expensive and quicker to construct. As explained later in my statement, this decision was reversed in the mid-1980s. The cost estimate for the original project was $85 million (in 1961 dollars) and the estimated completion date was 1978. The original project designs were developed to combat a hurricane that might strike the coastal Louisiana region once in 200-300 years. The basis for this was the standard project hurricane developed by the Corps with the assistance of the United States Weather Bureau (now the National Weather Service). 
The model was intended to represent the most severe meteorological conditions considered reasonably characteristic for that region. The model projected a storm roughly equivalent to a fast-moving Category 3 hurricane. A Category 3 hurricane has winds of 111-130 miles per hour and can be expected to cause some structural damage from winds and flooding near the coast from the storm surge and inland from rains. Even before construction began on the project, it became evident that some changes to the project plan were needed. Based on updated Weather Bureau data on the severity of hurricanes, the Corps determined that the levees along the three main drainage canals, that drain water from New Orleans into Lake Pontchartrain, would need to be raised to protect against storm surges from the lake. The need for this additional work became apparent when Hurricane Betsy flooded portions of the city in September 1965. During the first 17 years of construction on the barrier plan, the Corps continued to face project delays and cost increases due to design changes caused by technical issues, environmental concerns, legal challenges, and local opposition to various aspects of the project. For example, foundation problems were encountered during construction of levees and floodwalls which increased construction time; delays were also encountered in obtaining rights-of-ways from local interests who did not agree with all portions of the plan. By 1981, cost estimates had grown to $757 million for the barrier plan, not including the cost of any needed work along the drainage canals, and project completion had slipped to 2008. At that time, about $171 million had been made available to the project and the project was considered about 50 percent complete, mostly for the lakefront levees which were at least partially constructed in all areas and capable of providing some flood protection although from a smaller hurricane than that envisioned in the plan. 
More importantly, during the 1970s, some features of the barrier plan were facing significant opposition from environmentalists and local groups who were concerned about environmental damages to the lake as well as inadequate protection from some aspects of the project. The threat of litigation by environmentalists delayed the project and local opposition to building the control complexes at Rigolets and Chef Menteur had the potential to seriously reduce the overall protection provided by the project. This opposition culminated in a December 1977 court decision that enjoined the Corps from constructing the barrier complexes, and certain other parts of the project until a revised environmental impact statement was prepared and accepted. After the court order, the Corps decided to change course and completed a project reevaluation report and prepared a draft revised Environmental Impact Statement in the mid-1980s that recommended abandoning the barrier plan and shifting to the high-level plan originally considered in the early 1960s. Local sponsors executed new agreements to assure their share of the non-federal contribution to the revised project. These changes are not believed to have had any role in the levee breaches recently experienced as the high-level design selected was expected to provide the same level of protection as the original barrier design. In fact, Corps staff believe that flooding would have been worse if the original proposed design had been built because the storm surge would likely have gone over the top of the barrier and floodgates, flooded Lake Pontchartrain, and gone over the original lower levees planned for the lakefront area as part of the barrier plan.

As of 2005, the project as constructed or being constructed included about 125 miles of levees and the following major features:

- New levee north of Highway U.S. 61 from the Bonnet Carré Spillway East Guide Levee to the Jefferson-St. Charles Parish boundary
- Floodwall along the Jefferson-St. Charles Parish boundary
- Enlarged levee along the Jefferson Parish lakefront
- Enlarged levee along the Orleans Parish lakefront
- Levees, floodwalls, and flood proofed bridges along the 17th Street, Orleans Avenue and London Avenue drainage canals
- Levees from the New Orleans lakefront to the Gulf Intracoastal Waterway
- Enlarged levees along the Gulf Intracoastal Waterway and the Mississippi
- New levee around the Chalmette area

The project also includes a mitigation dike on the west shore of Lake Pontchartrain. The current estimated cost of construction for the completed project is $738 million with the federal share being $528 million and the local share $210 million. The estimated completion date as of May 2005 for the whole project was 2015. The project was estimated to be from 60-90 percent complete in different areas. The work in Orleans Parish was estimated to be 90 percent complete with some work remaining for bridge replacement along the Orleans Avenue and London Avenue drainage canals. The floodwalls along the canals, where the recent breaches occurred, were complete. Jefferson Parish work was estimated to be 70 percent complete with work continuing on flood proofing the Hammond Highway bridge over 17th Street and two lakefront levee enlargements. Estimated completion for that work was 2010. In the Chalmette area work was estimated to be 90 percent complete with some levee enlargement work and floodwall work remaining. In St. Charles Parish work was 60 percent complete with some gaps still remaining in the levees. Closure of these gaps was scheduled by September 2005. Federal allocations for the project totaled $458 million as of the enactment of the fiscal year 2005 federal appropriation. This represents 87 percent of the Federal government's responsibility of $528 million with about $70 million remaining to complete the project in 2015.
Over the last 10 fiscal years (1996-2005), federal appropriations have totaled about $128.6 million and Corps reprogramming actions resulted in another $13 million being made available to the project. During that time, appropriations have generally declined from about $15-20 million annually in the earlier years to about $5-7 million in the last three fiscal years. While this may not be unusual given the state of completion of the project, the Corps' project fact sheet from May 2005 noted that the President's Budget request for fiscal years 2005 and 2006 and the appropriated amount for fiscal year 2005 were insufficient to fund new construction contracts. Among the construction efforts that could not be funded, according to the Corps, were the following:

- Levee enlargements in all four parishes
- Pumping station flood protection in Orleans Parish
- Floodgates and a floodwall in St. Charles Parish
- Bridge replacement in Orleans Parish

The Corps had also stated that it could spend $20 million in fiscal year 2006 on the project if the funds were available. The Corps noted that several levees had settled and needed to be raised to provide the design-level of protection. For the last few years, the project generally received the amount of funds appropriated to it and was not adversely affected by any Corps reprogramming actions. In recent years, questions have been raised about the ability of the project to withstand larger hurricanes than it was designed for, such as a Category 4 or 5, or even a slow-moving Category 3 hurricane that lingered over the area and produced higher levels of rainfall. Along this line, the Corps completed in 2002 a reconnaissance or pre-feasibility study on whether to strengthen hurricane protection along the Louisiana coast. A full feasibility study was estimated to take at least five years to complete and cost about $8 million.
In March 2005, the Corps reported that it was allocating $79,000 to complete a management plan for the feasibility study and a cost-share agreement with local sponsors. The President’s fiscal year 2006 budget request did not include any funds for the feasibility project. In closing, the Lake Pontchartrain hurricane project has been under construction for nearly 40 years, much longer than originally envisioned and at much greater cost, although much of that can be attributed to inflation over these years, and the project is still not complete. Whether the state of completion of the project played a role in the flooding of New Orleans in the wake of Hurricane Katrina in August 2005 is still to be determined as are issues related to whether a project designed to protect against Category 4 or 5 hurricanes would or could have prevented this catastrophe. Mr. Chairman, this concludes my prepared testimony. We would be happy to respond to any questions that you or Members of the Subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The greatest natural threat posed to the New Orleans area is from hurricane-induced storm surges, waves, and rainfalls. A hurricane surge that can inundate coastal lowlands is the most destructive characteristic of hurricanes and accounts for most of the lives lost from hurricanes. Hurricane surge heights along the Gulf and Atlantic coasts can exceed 20 feet. Hurricane Katrina flooded a large part of New Orleans and breached the levees that are part of the U.S. Army Corps of Engineers (Corps) Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project. This project, first authorized in 1965, was designed to protect the lowlands in the Lake Pontchartrain tidal basin from flooding by hurricane-induced sea surges and rainfall. GAO was asked to provide information on (1) the purpose and history of the Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project and (2) funding of the project. GAO is not making any recommendations in this testimony. Congress first authorized the Lake Pontchartrain and Vicinity, Louisiana Hurricane Protection Project in the Flood Control Act of 1965. The project was to construct a series of control structures, concrete floodwalls, and levees to provide hurricane protection to areas around Lake Pontchartrain. The project, when designed, was expected to take about 13 years to complete and cost about $85 million. Although federally authorized, it was a joint federal, state, and local effort. The original project designs were developed based on the equivalent of what is now called a fast-moving Category 3 hurricane that might strike the coastal Louisiana region once in 200-300 years. As GAO reported in 1976 and 1982, since the beginning of the project, the Corps has encountered project delays and cost increases due to design changes caused by technical issues, environmental concerns, legal challenges, and local opposition to portions of the project.
As a result, in 1982, project costs had grown to $757 million and the expected completion date had slipped to 2008. None of the changes made to the project, however, are believed to have had any role in the levee breaches recently experienced as the alternative design selected was expected to provide the same level of protection. In fact, Corps officials believe that flooding would have been worse if the original proposed design had been built. When Hurricane Katrina struck, the project, including about 125 miles of levees, was estimated to be from 60-90 percent complete in different areas with an estimated completion date for the whole project of 2015. The floodwalls along the drainage canals that were breached were complete when the hurricane hit. The current estimated cost of construction for the completed project is $738 million with the federal share being $528 million and the local share $210 million. Federal allocations for the project were $458 million as of the enactment of the fiscal year 2005 federal appropriation. This represents 87 percent of the federal government's responsibility of $528 million with about $70 million remaining to complete the project. Over the last 10 fiscal years (1996-2005), federal appropriations have totaled about $128.6 million and Corps reprogramming actions resulted in another $13 million being made available to the project. During that time, appropriations have generally declined from about $15-20 million annually in the earlier years to about $5-7 million in the last three fiscal years. While this may not be unusual given the state of completion of the project, the Corps' project fact sheet from May 2005 noted that the President's budget request for fiscal years 2005 and 2006, and the appropriated amount for fiscal year 2005 were insufficient to fund new construction contracts. The Corps had also stated that it could spend $20 million in fiscal year 2006 on the project if the funds were available. 
The Corps noted that several levees had settled and needed to be raised to provide the level of protection intended by the design.
EMS systems are designed to provide a quick, coordinated response of emergency medical care resources for traumatic incidents and medical emergencies. Persons who need such a response may need help for a variety of medical conditions, such as cardiac arrest, diabetes, seizures, or behavioral disorders, or they may have injuries such as burns, wounds, or severe head or spinal damage. The major components of an emergency medical system often include the following: A public access system. This is generally a 911 emergency telephone line used to contact and dispatch emergency medical personnel. Emergency medical response. The goal for the initial response is to have medically trained personnel available to the patient as quickly as possible and to provide early stabilizing care. The level of care provided can be either basic life support or advanced life support. Because most EMS agencies operate independently of other medical facilities and have relatively few physicians among their providers, the ability of field personnel to talk with a physician is important in ensuring appropriate medical care. Such a link to “medical oversight” ensures that field personnel at the scene or during transport have immediately available expert direction that can authorize and guide the care of their patients. Emergency medical transport or transfer. This involves getting the patient to a hospital or other medical facility. Although an important component of the system, emergency transport does not apply in all cases. Officials responding to a recent survey of urban EMS systems indicated, for example, that an average of 37 percent of emergency requests do not result in emergency transport. EMS systems are typically managed and operated by local communities and jurisdictions, such as counties or fire districts. 
Entities involved in providing EMS for a particular community may include fire departments with paid or volunteer personnel trained in both fire suppression and EMS or EMS alone, for-profit or not-for-profit ambulance companies, volunteer ambulance services or rescue squads, hospitals, and government-based EMS organizations. The extent of involvement of each type of entity in local EMS systems nationwide is not fully known. While some systems provide both emergency response and emergency transportation within the same agency or organization, others may use multiple organizations. For example, a fire department may provide the first emergency response while a private ambulance company provides most emergency transport. Varied sources of EMS funding also exist, such as local taxes, billing for services provided, private-sector donations, subscription services, and government grants. At the state level, EMS agencies generally do not provide direct services but rather regulate and oversee local and regional EMS systems and EMS personnel. In most states, state laws and regulations govern the scope, authority, and operations of local EMS systems. While the state agencies’ authority and role vary from state to state, they typically license and certify EMS personnel and ambulance providers and establish testing and training requirements. Some establish standard protocols for treatment, triage, and transfer of patients. State EMS agencies may also be responsible for approving statewide EMS plans, allocating federal EMS resources, and monitoring performance. At the local level, the needs reported by EMS systems are wide-ranging and diverse, reflecting the different environments in which they operate. However, the available data allow a better understanding of the kinds of problems reported than of their effects. 
At the state level, the reported needs centered on the lack of information and systems for evaluating the performance of EMS systems and deciding how best to make improvements. At the local level, the challenges faced by individual systems are often associated with variations in such factors as the characteristics of the population served and the geography of the area. The area served by an EMS system can range from isolated rural settings in mountainous terrain to sprawling and densely populated urban settings with high-rise buildings and traffic gridlock. Such differences tend to be reflected in certain aspects of the EMS system itself. For example, according to officials, rural areas are less likely than urban areas to have 911 emergency dialing (requiring callers to use a 7- or 10-digit number instead), and their communications between dispatchers or medical facilities and emergency vehicles are more likely to suffer from “dead spots”—areas where messages cannot be heard. Rural areas are also more likely to rely on volunteers rather than paid staff, and these volunteers may have fewer opportunities to maintain or upgrade their skills through training. These differing characteristics affect what officials perceive and report as key needs. For example, officials from national associations representing EMS physicians have indicated that long distances and potentially harsh weather conditions in rural areas can accelerate vehicle wear and put vehicles out of service more often. By contrast, an urban area may be less concerned with vehicle wear and more concerned with traffic problems. A 1994 study, for example, compared New York City’s EMS response time for cardiac arrest patients with response times reported from other locations. In New York City, the time interval from patient collapse to arrival of EMS personnel at the patient’s side was about 11.4 minutes, nearly half of which (5.5 minutes) was spent negotiating city traffic. 
This interval was similar to ambulance driving time reported in another large city, Chicago, but was significantly longer than the 3.3 minutes of driving time required in a suburban county in the state of Washington. The variety of EMS needs can be seen in the various categories of needs reported by EMS officials. Far-reaching needs were identified in a March 2000 national survey on rural EMS needs, in our own fieldwork involving urban and rural EMS systems, in our review of the professional literature, and in our conversations with EMS experts.

Recruitment and retention of EMS personnel. In rural systems, personnel needs reflected these systems’ heavy dependence on volunteers. Rural systems reported that it was getting more difficult to recruit volunteers, especially for daytime shifts, and that inadequate staffing was a major problem affecting the ability to respond quickly to emergencies. For example, one predominantly volunteer EMS squad reported having difficulty responding to early-morning calls because most of its volunteers also had full-time jobs. Officials reported that in the past year, the service had been unable to respond immediately to two early-morning calls involving critically ill patients. Rural EMS systems also reported encountering problems with staff attrition due to increased demands on personal time for training and calls, stress from treating relatives and neighbors, and poor working conditions. For example, in one instance, closure of a local hospital increased demands on staff by doubling the amount of time personnel had to spend transporting patients. In another example, a state reported concerns about the ability to retain volunteer staff because they had to use antiquated and unreliable equipment, such as ambulances that frequently stranded them in remote areas or that had unreliable lighting, requiring them to provide care by flashlight. 
In urban systems, where there is less reliance on volunteers, experts report that job stresses may involve very different concerns, such as a higher possibility of encountering violent situations.

Training and education. Rural systems reported training and education needs that focus on retention of infrequently used medical skills, as well as training in management, budgeting, personnel, and organizational issues. EMS officials said that in rural areas, the sparsity of staff and the distances involved were major impediments to providing in-person training. One local system reported that some personnel certified to provide advanced care had never performed certain advanced procedures, such as airway intubation. This system is currently trying to partner with a local hospital to provide the necessary clinical experience. By contrast, some urban systems we consulted reported needing specially trained staff to respond to patients with mental disorders and personnel trained in different languages so they could better communicate with the diverse populations they serve.

Equipment. In the March 2000 survey, a wide range of equipment needs was reported for rural systems, including communication equipment (73 percent of respondents), medical equipment (68 percent of respondents), ambulances (54 percent of respondents), and buildings (34 percent of respondents). For example, one survey respondent cited a rural county that had one operational ambulance for 6,500 residents (the state average was 1 per 4,600 residents) and only three hand-held portable radios for the six medical personnel on call. Asked to estimate the costs of addressing the capital needs for rural EMS systems in their states, only 28 of the 41 state EMS directors responding to the survey said they had enough information to provide an estimate. The average state cost, based on the figures from 27 of these states, was $12.2 million. For urban systems, no similar survey or set of estimates is available. 
Officials we spoke with indicated that urban systems also face equipment needs.

Financing. Both urban and rural systems reported examples of tenuous financing. In rural areas, officials reported that it is difficult to fully support the high fixed cost of operating around-the-clock EMS services because the number of calls is generally smaller in sparsely populated areas, limiting the opportunities to bill for services. This difficulty has resulted in some communities going without local EMS coverage. For example, one county reported going without the services of a dedicated EMS provider for the past several years, relying instead on ambulance response from other communities that may be located as far as 20 miles away. According to officials, this county—with a population of less than 3,000, no industry, and a relatively small number of businesses—has an insufficient tax base to support such services. Other states have reported increased response times in their rural areas due to lack of funds to maintain greater capacity. Urban systems reported financing problems caused by a growing demand for services combined with tight community budgets. Officials of systems that relied heavily on local government funds and levies to support their operations said they were considering billing health insurers to supplement the income of their EMS services. At the same time, some systems that were relying on income from billing health insurers reported concerns about declining reimbursement levels from these sources due to possible changes in reimbursement rules.

Medical oversight. Both rural and urban EMS officials we spoke with expressed a need for improved medical oversight, but this need took different forms. Officials from two urban systems pointed to the need to centralize and standardize medical direction. 
One official said his system was trying to provide consistent medical direction to EMS providers in the field by centralizing the medical direction in one location, rather than having it provided by six different hospitals. Systems in other locations may face different challenges. For example, a rural state reported that in most communities, physicians providing medical direction were as far as 100 miles away. In addition, they were not always available. While surveys and assessments give some indication of EMS needs, the full picture remains incomplete. For instance, no comparable survey of urban EMS needs has been conducted. In addition, the extent and impact of these reported needs and problems in particular locations, relative to other local and state systems, are unknown because systems are localized and thus there is little standard, quantifiable information that can be used to compare systems. The Institute of Medicine has noted that without reliable information, it is hard for emergency care providers, administrators, and policymakers to determine in a systematic way (1) the extent to which systems are providing appropriate, timely care or (2) what they ought to do to improve performance and patient outcomes. At the state level, reported needs tend to revolve around basic components for coordinating EMS programs, such as information about the activities of local EMS systems and methods to evaluate the care being provided. These reported needs come mainly from state-level assessments conducted by NHTSA. This agency has a program that allows states to request federal assistance in assessing the effectiveness of their EMS systems. In this process, NHTSA assembles a team that evaluates states—based on in-depth briefings from, for example, state EMS officials, public and private sector partners, and members of the medical community—on 10 standard components such as medical direction, human resources, training, and evaluation systems. 
A 1999 compilation summarizing the findings of a decade of NHTSA assessments in 46 states showed that most states were missing important management components. For example, at the time of assessment none of the 46 states had established EMS performance standards (such as the percentage of response times that should fall within an established time frame), 91 percent did not have a functional system for collecting and analyzing data from EMS providers, and 89 percent did not have a statewide system to evaluate patient care. Table 1 documents 10 areas identified by the assessments that were in need of greatest improvement. All of these areas were cited then as a need in at least 80 percent of the 46 states evaluated. These assessments are subject to some limitations in that time has elapsed since they were conducted, they reflect the views of many different assessment teams, and there are no data showing the negative effects that resulted from the reported deficiencies. There are indications that some improvement has occurred—but also that many problems continue. For example, a preliminary update conducted by NHTSA in 2001 found that because enough states had implemented a statewide quality assurance program and a state EMS plan, the percentage of states still in need of improvement in these areas was less than 50 percent. However, a NHTSA official provided information that showed that most states still have significant needs in most of these areas. For areas of improvement other than the quality assurance programs and state EMS plans, the preliminary assessment found that 50 percent or more of states remained in need of improvement. While no single federal agency has lead responsibility for EMS activities, four federal agencies help support and promote EMS improvements, acting primarily as facilitators through activities such as technical assistance. 
In 1995, two of these agencies facilitated an effort to gain EMS stakeholder consensus on a comprehensive national strategy to improve EMS, called the “EMS Agenda for the Future.” While progress in implementing the Agenda has been made, federal EMS officials told us that a 1999 effort to revisit the Agenda goals and set major priorities for achieving them highlighted a need for improved EMS information and information systems. While this need had been a longstanding issue within the EMS community, officials told us that the process of setting priorities resulted in a growing focus on this gap. This information gap was further highlighted when HCFA changed the manner in which it reimbursed EMS providers for ambulance services. Federal officials said progress in implementing the Agenda has been affected by the lack of consistent information about EMS systems, and as part of their attempts to act as facilitators, they have all attempted to collect EMS data or promote consistency in the data. Several local agencies we contacted also reported needing improved EMS data and information to monitor and improve performance, but they recognized that data collection and reporting are sometimes a low priority and an administrative burden in the face of competing demands on EMS providers’ time. Federal agencies, in different ways, are working to collect and promote improvement of EMS data with available resources. Four different federal agencies are involved in supporting and promoting EMS improvements. None imposes standards or enforces requirements on how EMS systems should operate. Instead, the agencies undertake activities such as providing technical support and guidance, providing funding for EMS initiatives through various grant programs to states, and exploring avenues for developing a consensus among EMS providers on policy needs and changes. The agencies and their major activities are as follows:

National Highway Traffic Safety Administration. 
NHTSA’s EMS division, with a budget of $1.4 million in fiscal year 2000, has several activities that support the development and improvement of EMS care. A core goal is to enhance the quality of EMS services, in part by developing national curricula for training and certifying EMS responders. Other activities include providing technical guidance to state EMS agencies through such venues as seminars on designing and implementing information systems and state assessments to identify system development needs and strategies; conducting training for medical directors and administrators of EMS systems; publishing educational and instructional materials on how to improve EMS; and funding research and demonstration projects to promote EMS improvement. According to NHTSA officials, the EMS division became involved in standardizing emergency medical services in the 1960s after recognition at the federal level of a need to improve and monitor the quality of EMS. NHTSA also provides grants to states and territories for highway traffic safety. In fiscal year 2000, about $4.9 million of this money was used for EMS improvements. Health Resources and Services Administration. Two components of HRSA are involved in EMS: the Maternal and Child Health Bureau’s EMS for Children program and the Office of Rural Health Policy. The EMS for Children program provides strategic planning to enhance the pediatric capabilities of EMS systems, provides financial support to NHTSA for EMS projects and conferences, and funds resource centers that provide technical assistance to state EMS agencies. In fiscal year 2000, the EMS for Children program provided approximately $9.8 million to states in the form of grants. The Office of Rural Health Policy sponsored grants to states to strengthen rural health and grants to rural health providers to expand access, coordinate services, control the costs of care, and improve the quality of essential health care services. 
Each of these grant types can be used to support emergency services. HRSA officials estimate that states and providers received $4.2 million in fiscal year 2000 to promote the development of EMS systems in rural areas. For example, one project established a partnership between a trauma foundation, a university telecommunication center, and the state department of health to provide distance learning opportunities for rural EMS providers, helping them obtain new knowledge, skills, and clinical competency. HRSA is also a leading and coordinating agency for national objectives related to access to quality health services, including emergency services, developed in the Healthy People 2010 initiative for improving the nation’s health. One such objective is to increase the proportion of people who can be reached by EMS rapidly, in particular the proportion who can be reached by EMS within 5 minutes in urban areas and within 10 minutes in rural areas. Centers for Disease Control and Prevention. CDC administers the Preventive Health and Health Services Block Grant program that provides funds to states for preventive health programs and projects, including projects to plan, establish, expand, or improve EMS systems. In fiscal year 2000, 20 states elected to use $11.1 million from their allocated grants to fund EMS activities. CDC is also a leading agency for HHS’ Healthy People 2010 objectives related to heart disease and EMS, such as increasing the proportion of adults who are aware of the early warning signs of a heart attack and the importance of accessing emergency care by calling 911. U.S. Fire Administration. USFA supports EMS systems operated by fire departments. Approximately 80 percent of fire departments in the United States provide some EMS services. USFA publishes guidance for EMS administrators and provides training for managers and personnel through the agency’s National Fire Academy. 
This agency also maintains a voluntary database that captures fire and some EMS information, such as the amount of time spent at the emergency scene, the types of medical conditions seen, and the procedures performed. Beginning in fiscal year 2001, USFA has administered a grant program for fire departments, which could include some funding for EMS. Federal funding through these four agencies for local and state EMS needs totaled about $30 million in fiscal year 2000. However, half of these funds are subject to federal restrictions that limit the amount that can be spent on equipment or other capital needs. Many states use federal grant moneys to fund their basic regulatory functions. For example, several states used Preventive Health and Health Services block grants from CDC to pay for improvements to basic state administrative processes, such as licensing, certifying, and inspecting ambulance operators and EMS personnel. As part of their work as facilitators, federal agencies have assumed a significant role in identifying and highlighting strategies for improving EMS systems. A major effort in this regard occurred in 1995, when NHTSA and HRSA facilitated a multidisciplinary group to create an overall strategic plan for improving EMS systems. This group comprised more than 100 EMS stakeholders, including representatives of federal agencies, 19 national organizations, and state and local EMS providers. The resulting strategic plan, known as the EMS Agenda for the Future, identified 14 areas requiring continued development for EMS systems to be maximally effective. These areas encompass such matters as the need for continuous and comprehensive EMS program evaluation, communication systems that result in the most effective course of action, qualified medical direction for all EMS providers and activities, a prepared work force, and a finance system that supports EMS systems so they are prepared to meet the demands placed on them. 
In 1999, NHTSA and HRSA issued a second key document after reconvening local, state, and national EMS agencies and stakeholders to develop a list of priorities for implementing the Agenda, which had been published in 1996. This document, the EMS Agenda for the Future: Implementation Guide, identified over 90 objectives for implementing the Agenda’s goals. Ten of these objectives, shown in table 2, were highlighted as priorities because, among other things, they addressed major pressing problems and had the potential to improve EMS systems and patient outcomes. Officials at NHTSA and HRSA told us that some progress in these areas has been achieved. For example, federal agencies had convened a workgroup to develop an EMS research agenda and worked with the American College of Emergency Physicians and the National Association of EMS Physicians on a 2-year process to develop a new set of guidelines on medical direction. These agencies also had other activities designed to identify and address EMS needs for specific concerns. For example, HRSA and NHTSA have joined with EMS experts to develop a 5-year strategic plan to address the many gaps in emergency services available to children, most recently to cover 2001 through 2005. This national blueprint serves as a road map for many states and organizations and addresses issues parallel to those identified in the Agenda, such as the need to include a pediatric component in the development of EMS information systems. Another area in which federal agencies have acted as facilitators has been in developing a framework for promoting EMS information systems. In 1993, HHS, NHTSA, and USFA sponsored a comprehensive project to address the need for more consistently collected EMS data. This effort produced a model set of EMS data elements and definitions that states and local systems could use as the basis for creating their own information systems. 
Data elements—including the location of the medical emergency, the patient’s vital signs, treatments provided, and information on EMS response times—were selected based on their usefulness for several purposes, including documenting the medical care provided; billing for services; evaluating, monitoring, and improving the delivery of EMS care; operating EMS systems; and allocating resources locally. Gaining consensus on what these data elements should be has not translated into substantial progress in putting them in place. Federal officials told us that gaps in EMS data have been a longstanding concern and problem area that emerged as a major priority when objectives for implementing the Agenda for the Future were discussed in 1999. In part, gaps in data grew as a focus of concern because such data are an underpinning for other Agenda for the Future goals, such as determining the costs and benefits of EMS to the community and improving research on EMS. The need for more and better data on EMS services was also highlighted, they said, in HCFA’s development of a new Medicare fee schedule for ambulance services in 1999 and 2000. During this process, HCFA had difficulties determining how to target payments so that EMS providers serving isolated areas could be appropriately reimbursed. In part because of the limited data available on rural ambulance services, such as the number of ambulance trips made, the agency had difficulty developing a payment adjuster for ambulance providers that serve isolated areas. Such an adjuster was needed to reflect potential differences in the volume of services and unit service costs. Our work examining this process also found problems with the adequacy of data reported on ambulance claims. Claims for reimbursement were being denied at varying rates across payers because providers were not completing forms correctly and because of gaps in information on the beneficiaries’ health conditions linked to the appropriate level of EMS service. 
Along with their federal counterparts, state and local EMS officials we contacted reiterated an interest in and need for improved EMS data collection. They said better, more consistent information was needed for such purposes as the following:

Improving EMS performance at the local level. Local EMS agencies and providers often lack data to justify budget requests, answer questions about patient outcomes, or support ongoing quality improvement and surveillance. All nine local and six state systems we consulted indicated that information and information systems were needed to monitor performance and to justify and quantify needs at the local level for the public and for decisionmakers. At the state level—where resource allocation decisions are often made—officials reiterated the need for basic EMS data collected statewide to help them determine how to set priorities for allocating scarce resources. For example, one state is trying to identify different funding scenarios and sources to reinvigorate its EMS agencies. In doing so, the state is using data to quantify equipment needs to more accurately estimate potential costs.

Setting and monitoring national policy. In addition to data needs for determining a Medicare ambulance fee schedule, the absence of national EMS data is considered a major impediment to monitoring national health priorities. Two goals under the national Healthy People 2010 initiative involve improving response times and access to EMS services. However, HHS officials told us that sources have not been identified or developed to provide data for measuring the status of and progress toward achieving these goals. The lack of uniform definitions for data elements across data sources compounds the difficulty of monitoring these goals. For example, while many systems collect data on their response times, they often collect the data differently or use different definitions, making comparisons between systems impossible. 
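To make the definitional problem concrete, the short Python sketch below computes a response time for a single hypothetical incident under two common clock-start definitions; the timestamps, variable names, and function are illustrative assumptions, not data or definitions drawn from any actual EMS system.

```python
from datetime import datetime

# Hypothetical timestamps for one incident (illustrative only).
call_received = datetime(2001, 6, 1, 14, 0, 0)     # 911 call answered
unit_dispatched = datetime(2001, 6, 1, 14, 1, 30)  # vehicle dispatched
arrived_on_scene = datetime(2001, 6, 1, 14, 7, 0)  # personnel at scene

def response_minutes(clock_start, arrival):
    """Response time in minutes under a given clock-start definition."""
    return (arrival - clock_start).total_seconds() / 60

# Definition A: clock starts when the 911 call is received.
print(response_minutes(call_received, arrived_on_scene))    # 7.0

# Definition B: clock starts when the vehicle is dispatched.
print(response_minutes(unit_dispatched, arrived_on_scene))  # 5.5
```

The same incident yields a response time of 7.0 or 5.5 minutes depending solely on which definition is used, which is why comparisons across systems require uniform definitions.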
A survey of EMS systems conducted in 2000 involving the largest 200 cities across the country found that 45 percent of the cities started the response-time clock when the EMS vehicle was dispatched to the scene, while about one-third started the clock when the 911 call for help was received. In addition, researchers found that the systems defined “dispatch” differently.

Improving researchers’ ability to assess EMS outcomes. Officials from state and local EMS systems told us that the best-documented example of EMS treatments affecting outcomes is for cardiac arrests, in which the speed of treatment is critical to the survival of the victim. Research has documented wide variation in cardiac arrest survival rates across locations, but determining the reasons for these variations is hampered by inconsistent collection methods for EMS data on response times, treatments, and other variables. For example, 1990 research on the survival rates (patients discharged alive from the hospital) for out-of-hospital cardiac arrest showed rates ranging from 2 percent to 25 percent in 29 separate EMS service areas. The researchers, however, were unable to determine whether these differences were actual differences in outcomes or the result of inconsistencies in data collection. In addition to the 1993 effort to gain consensus on EMS data elements, federal agencies, in their role as facilitators, have in different ways acted to promote the collection of uniform EMS data. For example, since 1995 HRSA’s EMS for Children program has promoted EMS data collection by funding a data analysis resource center. Staffed with three full-time employees, the center provides technical assistance to states on EMS data collection and systems development. Also, USFA expanded its voluntary National Fire Incident Reporting database in 1999 to include the full range of fire department activities, including EMS. 
Despite these efforts, a survey performed in 2000 indicates that few states are currently able to collect statewide data uniformly and consistently. Recognizing the increasing need for such data, the National Association of State EMS Directors, with support from HRSA, conducted this survey to assess the collection of information at the state and local levels. State EMS directors were asked whether they collected EMS data statewide and whether their systems collected data in line with the model data set definitions. Eighteen of the 46 states responding did not collect any data statewide. Of the 28 states that collected some EMS data at the state level, 18 said their data were compliant with this uniform data set, but 9 of those 18 states reported that they had not received information from all EMS systems in the state. According to state EMS officials, data improvement efforts are limited because, in the face of constrained resources and competing demands for staff time, local systems have little incentive to collect and report electronic data or to adopt a uniform data format that may differ from their own. EMS officials told us that it is very challenging for state agencies to convince local EMS providers, particularly volunteer agencies, to contribute to the state EMS data pool. Officials said that an important component of improving data collection is for local providers to see value in the data they are collecting for improving their services. Officials told us that creating information systems that allow providers to access the data would help providers see this value and would be important to enhancing the ability to collect data and to aggregate it at a national level. 
Surveys and assessments of EMS systems have identified broad categories of limitations and needs, showing that basic issues in such areas as staffing, training and equipment, and financing are considered to be day-to-day challenges of local EMS systems and state efforts to coordinate these systems. Determining the magnitude of these problems and how to resolve them, however, is itself a challenge because of the lack of information on which to base an understanding of how these systems perform. Federal agencies have played a significant role in gaining consensus on the long-term national strategic goals and priorities for EMS. With available resources, they are attempting to develop strategies for addressing information needs. Progress in this area, however, is likely to remain slow because EMS systems and providers have many competing demands and few incentives to devote limited resources to data collection efforts. We provided a copy of the draft report to HHS, the Federal Emergency Management Agency, and the Department of Transportation for review and comment. In its written comments, HHS stated that the report accurately reflected its programs and activities. (See appendix II.) Similarly, in oral comments, the agency liaison at the Federal Emergency Management Agency told us that the report accurately reflected the agency’s programs and activities. The Department of Transportation said it had no comments. In its comments, HHS also stressed that, given the terrorist attacks of September 11, the key themes and findings of the report were even more relevant. We agree that EMS systems are a critical part of the public health safety net, both in responding to the day-to-day emergencies of citizens and in responding to disasters. We have modified our report to clarify that our scope was to capture information on the stated needs of EMS systems apart from issues related to disaster preparedness. 
HHS also stated that its Emergency Medical Services for Children 5-year strategic plan should be mentioned in the report. We believe the EMS consensus plan supported by HHS, NHTSA, and others—the EMS Agenda for the Future—better represents the EMS needs for the general population, but we have added information about HHS’ latest strategic plan for children. HHS also provided technical or clarifying comments related to its grant programs and other areas, which we incorporated as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the Secretary of Health and Human Services, the Director of the Federal Emergency Management Agency, the Secretary of Transportation, appropriate congressional committees, and other interested parties. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or Katherine Iritani at (206) 287-4820. Other major contributors to this report were Tim Bushfield, Leslie Spangler, and Stan Stenersen. In conducting our work, we consulted officials from national and state organizations and other experts to obtain their views on EMS systems and care. We also consulted officials from six state EMS agencies and nine local EMS systems to obtain more detailed information. We selected these agencies to obtain information from EMS systems with differing system characteristics such as population (rural/urban), level of EMS service (state/county/local), type of staffing (paid/volunteer), and service organization (fire department/private ambulance services/contracted).
Local emergency medical services (EMS) systems have reported substantial needs in such areas as personnel, training, equipment, and the availability of doctors to advise emergency personnel in the field. Federal agencies have supported EMS improvements by acting as facilitators rather than by establishing requirements or providing significant funding. The agencies provide technical assistance, set voluntary standards for licensing EMS providers, and administer limited grant funding. The four federal agencies GAO studied have separately begun to collect EMS data or promote data consistency. However, progress in developing this information has been slow. State and local EMS officials attributed the lack of progress to the many competing demands on their time and said that EMS providers and local systems have few incentives to collect and report EMS information.
The Clean Air Act, as amended, provides the basic statutory framework for the role of the federal government and the states in managing air quality in the United States. Among other things, the Act authorizes EPA to set and enforce standards, referred to as National Ambient Air Quality Standards (NAAQS), for pollutants. EPA has subsequently set standards for six pollutants—ozone, particulate matter, carbon monoxide, nitrogen dioxide, sulfur dioxide, and lead. While carbon monoxide is directly emitted when various fuels are burned, ground-level ozone is formed when VOCs and NOx mix in the presence of heat and sunlight. As a result, emissions of VOCs and NOx are considered by EPA and the states in their efforts to reduce concentrations of ground-level ozone. Because heat and sunlight act as catalysts in the formation of ground-level ozone, high ozone levels are most prevalent in spring and summer. EPA sets and enforces the NAAQS to, among other things, reduce the negative health effects of air pollution. Each of the six pollutants covered by the NAAQS is known to cause a variety of adverse health and other consequences. For example, at certain concentrations ground-level ozone and carbon monoxide can, among other things, cause lung damage, eye irritation, asthma attacks, chest pain, nausea, headaches, and premature death. To enforce the standards, EPA evaluates monitoring data on air quality to determine whether local air quality meets federal standards—designating areas as in either attainment (if they meet the federal standards) or nonattainment (if they do not meet the federal standards) with each of the NAAQS. Under the Act, states that contain areas in nonattainment with the NAAQS are required to identify how they will reduce emissions and improve air quality to meet them. For each pollutant, states are required to prepare a state implementation plan (SIP) and have the plan approved by EPA. 
States have choices in how they reduce emissions and meet air quality standards, deciding, among other things, how much to reduce emissions from mobile sources such as automobiles compared with other sources of similar emissions such as power plants. Because use of gasoline in automobiles emits several chemicals, including carbon monoxide, nitrogen oxides, and VOCs, and because emissions from automobiles are often an important contributor to local air quality problems, the federal government and the states often focus on reducing automobile emissions. Whatever the planned reductions, states must identify an inventory of air emissions and demonstrate in their SIPs how they will achieve attainment in a specific time frame. States typically demonstrate this through modeling analysis that estimates how the various efforts in their SIPs will reduce emissions and improve air quality. The Act also provides authority to set standards and establish requirements for some programs specifically designed to reduce vehicle emissions. For example, using authority provided under the Act, EPA has required newer cars to meet more stringent emissions standards, and vehicle manufacturers have incorporated emissions-control devices such as catalytic converters and oxygen sensors to meet them. Further, the Act requires cars to have under-the-hood systems and dashboard warning lights that check whether emissions control devices are working properly. In addition, the Act requires that some areas—generally highly populated metropolitan areas—have programs for periodic inspection and maintenance of vehicles. These programs identify high-emitting vehicles, which sometimes have malfunctioning emissions control devices, and require vehicle owners to make repairs before the vehicles can be registered. The Act gives the federal government, through the EPA, primary authority for regulating the environmental impacts of gasoline use. 
For example, the Act sets minimum national standards for conventional gasoline, as well as requiring that certain gasoline blends formulated to reduce emissions be used in some areas with especially poor air quality. Specifically, for certain areas with long-standing and especially poor air quality, the federal government requires the use of special reformulated gasoline, commonly referred to as RFG. The amendments also require other areas to use special gasoline blends designed to reduce summertime ozone pollution and wintertime carbon monoxide pollution. The Act allows states or regions not required to use RFG to seek EPA approval to require use of other special gasoline blends to aid in improving air quality, provided that they do not violate minimum federal standards. In 2001, EPA studied the proliferation of gasoline blends and reported that several states had chosen special blends other than RFG for one or more of three reasons: (1) the states were not eligible to require RFG because their air quality was not bad enough, (2) the states wanted to avoid the RFG requirement to use an oxygenate and its added cost, or (3) fuel suppliers and states believed that the other special blend would be less costly than RFG while meeting their need to reduce emissions. States seeking to use a special gasoline blend must obtain formal approval from EPA, generally the regional office with authority to review their SIPs. Specifically, under the Clean Air Act, section 211(c)(4)(C), EPA may approve applications by states to use special gasoline blends if the states demonstrate that the fuel is needed to reach attainment with federal air quality standards. 
In guidance issued in August 1997—after several of the special gasoline blends were approved—EPA clarified that it can approve a state gasoline requirement only if “no other measures that would bring about timely attainment exist,” or if other measures are “unreasonable or impracticable.” The guidance requires that states do four things in their application for approval of a new or revised SIP: (1) quantify the estimated emissions reductions required to reach attainment with the federal NAAQS for ozone; (2) identify possible control measures that could be used in place of special gasoline blends and provide emissions reduction estimates for those measures; (3) explain why those measures are “unreasonable or impracticable”; and (4) show that, even with use of all “reasonable and practicable” measures, additional emissions reductions are needed. As is the case with other new or revised SIPs, these applications are open for public comment, and EPA must consider those comments before making a decision. Once approved, states’ special gasoline blends become federally enforceable requirements. Under some circumstances, EPA may temporarily waive special gasoline blend requirements, referred to as granting enforcement discretion, if, for example, the required special gasoline blend is not available due to a supply disruption. Over the past several years, EPA has waived the requirement to use these special gasoline blends on several occasions when it determined that overall supplies might become tight. We found that EPA has granted enforcement discretion on at least 23 occasions, allowing gasoline that did not comply with local requirements to be sold in the affected areas. The causes of these supply disruptions included the 2003 blackout in the Northeast, the series of hurricanes in Florida and the Gulf Coast in 2004, as well as refinery fires, pipeline breaks, and other infrastructure problems. 
Although there was one short waiver that applied nationwide following the terrorist attacks of September 11, 2001, several of the other waivers were provided to local areas with particularly stringent gasoline formulations, including St. Louis, Chicago/Milwaukee, Atlanta, Las Vegas, and Phoenix, when there were supply shortages in these areas. All gasoline is a blend of different components that are predominantly produced in refineries. The simplest refineries primarily separate the components already present in crude oil. More complex refineries also have the ability to chemically change less valuable components of crude oil into more valuable ones. Because of their ability to chemically alter components, complex refineries can increase the amount of gasoline yielded from a given amount of crude oil and reduce the amount of less valuable products. Although most refineries can process many types of crude oil, refineries are generally configured to run most efficiently when refining a specific type of crude oil into a specific group of products. Absent specific regulatory requirements, refiners blend several components derived from crude oil to produce a gasoline that achieves acceptable engine performance at the lowest cost. Two key aspects of gasoline affect engine performance: Reid vapor pressure (RVP) is a measure of gasoline’s tendency to evaporate and also reflects the ease with which it ignites when the spark plug fires in a cold engine. To maintain engine performance, RVP must vary by season and region. Higher RVP is required in colder climates and seasons to allow an engine to start. Octane number is a measurement of gasoline’s tendency to ignite without a spark, commonly known as “knocking” in a running engine. Some high-performance and other vehicles require gasoline with a higher octane number. To satisfy these requirements and consumer demand, retailers in the United States typically sell three different octane grades of gasoline. 
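The arithmetic behind blending two octane grades can be illustrated with a short sketch. This example is not from the report; it assumes the common simplification that octane number blends linearly by volume, and the pump grades (87, 89, 93) are typical U.S. values used only for illustration.

```python
# Illustrative sketch: the fraction of premium needed in a
# regular/premium blend to hit a target octane number, assuming
# linear volumetric blending of octane (a common simplification).

def premium_fraction(regular_octane: float, premium_octane: float,
                     target_octane: float) -> float:
    """Volume fraction of premium gasoline in a regular/premium blend
    that yields the target octane under linear volumetric blending."""
    if not regular_octane <= target_octane <= premium_octane:
        raise ValueError("target must lie between the two blend stocks")
    return (target_octane - regular_octane) / (premium_octane - regular_octane)

# Typical U.S. pump grades: 87 (regular), 93 (premium), 89 (midgrade).
frac = premium_fraction(87, 93, 89)
print(f"premium share: {frac:.1%}")  # prints "premium share: 33.3%"
```

This is the same "lever rule" a terminal would apply when two grades are mixed to make a third; in practice, octane blending is not perfectly linear, so refiners use empirically adjusted blending values.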
Special gasoline blends developed to reduce pollution are generally adjusted in at least one of the following ways: RVP is reduced during the summertime to reduce VOC emissions. Reducing the RVP of gasoline requires reducing the amount of very light compounds, such as butanes and pentanes, blended into the gasoline. Toxics, their precursors, or other chemicals are limited so they are not released into the air when the gasoline is burned. Some of these, such as sulfur, naturally occur in crude oil while others, such as benzene, result from gasoline refining. Oxygenates, chemical compounds containing oxygen to aid in combustion, are added to gasoline to improve environmental performance when the gasoline is burned, including reducing carbon monoxide (CO) emissions. The most commonly used oxygenates are MTBE and ethanol. Several states have banned MTBE as a result of concerns about groundwater pollution and have switched to using ethanol as an oxygenate where required. Gasoline is shipped from U.S. refineries to consumers by some combination of pipelines, water barges, rail, and trucks to retail gasoline stations. Most of the country’s refining capacity is located in the Gulf Coast, West Coast, East Coast, or Midwest with only a small amount in the Rocky Mountain states. As shown in figure 1, the Gulf Coast region supplies gasoline to all the other regions—of these, the Midwest and the East Coast are the most dependent on gasoline from the Gulf Coast. The East and West Coast markets have also imported gasoline from other parts of the world such as Canada, Europe, and the Caribbean. Several large pipelines travel inland from refineries in the Gulf Coast, East Coast, and West Coast, connecting these key supply centers to areas where gasoline is used. In general, these large pipelines provide the cheapest method for transporting large volumes of gasoline, and pipelines account for more than half of the gasoline shipments in the United States. Several of the major U.S. 
pipeline systems, such as the ones serving the Midwest and the East Coast, deliver gasoline and other fuels used in multiple states. Figure 2 shows the pipeline system and the major refineries in the continental United States. The largest concentration of pipeline capacity links the Gulf Coast refining region to the large consumer markets in the Midwest and East Coast, while fewer and smaller pipelines connect refining regions to the more sparsely populated states in the Rockies and parts of the West Coast region. At various points between refining and final retail consumption, gasoline is stored in large tanks, some holding hundreds of millions of gallons of fuel. In many cases, gasoline is stored in tanks at the refinery itself while awaiting shipping. In other cases, fuel is stored at terminal stations located along the pipeline that generally include multiple large tanks. A terminal station serves as a storage facility for gasoline and other petroleum products at places throughout the petroleum refining and transportation process. Some terminals are affiliated with pipelines and used as part of pipeline operations, such as for withdrawals or when pipelines converge. Other terminals are used to allow gasoline and other products to be loaded or off-loaded from barges or tankers. Still other terminals are used to hold gasoline before it is distributed, generally by trucks, to retail gasoline stations. In all of these locations, different gasoline blends must be stored separately, with only one fuel per tank at any given time. Ethanol that is added to gasoline cannot be shipped in pipelines with other petroleum products because of ethanol’s tendency to absorb water. Instead, ethanol is shipped primarily by rail or trucks to terminal stations where it is “splash” blended—mixed in specific proportions as the fuel is added to the storage tank or tanker truck. 
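The "specific proportions" used in splash blending are simple volume arithmetic. The sketch below is not from the report; it assumes a 10 percent ethanol-by-volume blend (E10, the proportion cited for Minnesota's mandate) and that blend volumes add linearly, and the 8,000-gallon tanker load is a hypothetical figure for illustration.

```python
# Illustrative sketch: gallons of ethanol to splash-blend into a load
# of base gasoline so that ethanol makes up a given share of the final
# blend volume (volumes assumed to add linearly).

def ethanol_to_add(base_gallons: float, ethanol_share: float = 0.10) -> float:
    """Gallons of ethanol to add to `base_gallons` of base gasoline
    so the final blend is `ethanol_share` ethanol by volume."""
    if not 0 <= ethanol_share < 1:
        raise ValueError("ethanol share must be in [0, 1)")
    # final volume = base + ethanol; ethanol = share * (base + ethanol)
    return base_gallons * ethanol_share / (1 - ethanol_share)

# A hypothetical 8,000-gallon tanker load of base gasoline blended to E10:
print(round(ethanol_to_add(8000), 1))  # prints 888.9
```

Note that the divisor is (1 - share), not 1: the ethanol itself adds to the final volume, so blending to E10 takes slightly more than 10 percent of the base volume.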
The federal government and some states have considered requiring or expanding the use of ethanol to reduce consumption of oil and increase demand for agricultural products used to produce it, such as corn. There were 12 distinct gasoline blends in use in the United States during the summer of 2004: 11 special gasoline blends and the conventional gasoline used everywhere a special blend is not used. When different grades of gasoline, special blends used in winter, and other factors are considered, the number of gasoline blends rises to at least 45. New ozone standards and other factors may further increase the number or the use of special gasoline blends in the future, in part because EPA must approve any state’s application to require use of a special gasoline blend as long as the proposed fuel meets EPA’s environmental standards. Eleven special gasoline blends were used in the United States during the summer of 2004 in addition to conventional gasoline. The use of special gasoline blends is most prominent during the summer because special fuels are used predominantly to reduce summer ozone levels, and gasoline use is generally the highest during the summer. The requirement to use these fuels means that all the fuel sold at terminals must meet certain specifications by a certain date, which generally requires terminal operators to draw down their inventory of non-summer fuels before filling their tanks with summer fuels. Special gasoline blends are primarily used in highly populated urban areas, and 34 states use a special gasoline blend in one or more areas. The 11 special gasoline blends in use during the summer of 2004 fell into the following categories: Three different types of RFG used year-round, the federally required fuel used in areas with the worst air quality. RFG has very low RVP; reduced levels of benzene and other toxics; and contains an oxygenate. 
The type of RFG blend depends on the area of the country where the gasoline is used and the oxygenate selected. These blends are identified in figure 3 as “RFG North,” “RFG North with ethanol,” and “RFG South.” Two types of California Cleaner Burning Gasoline (CBG) used year-round, also referred to as CARB. California CBG is formulated to meet the most stringent gasoline standard in the United States, including very low RVP and reduced levels of sulfur, benzene, and other chemicals. In general, the state of California does not require the addition of an oxygenate in areas not subject to federal RFG standards—identified in figure 3 as “CA CBG.” Gasoline sold in areas also subject to the federal RFG standard must contain an oxygenate, identified as “RFG/CA CBG.” In the summer, Arizona allows the use of either a gasoline blend very similar to RFG or a blend similar to CBG. The blend required in Arizona is identified as “AZ CBG.” Three summer blends with various reductions in RVP. The federal government requires some areas to use 7.8 RVP gasoline and, in other areas, states have mandated the use of this blend. The other two low-RVP blends are state requirements. These blends are identified in figure 3 as “7.8 RVP,” “7.2 RVP,” and “7.0 RVP.” One blend with reduced RVP and reduced sulfur content. The state of Georgia requires this blend for use in the Atlanta area, and it is identified in figure 3 as “7.0 RVP, 30 ppm sulfur.” One blend of conventional gasoline with a minimum of 10 percent ethanol by volume, used year-round. The state of Minnesota requires this blend, which is identified in figure 3 as “Ethanol Mandate.” As figure 3 shows, many areas using special gasoline blends are surrounded by regions that use conventional gasoline. In some cases, these areas are relatively large, as is the case for the state of California, where nearly all of the state uses the same fuel—RFG/CA CBG. 
In other cases, “islands” of special gasoline use can divide otherwise regional gasoline markets. For example, the St. Louis metropolitan area, which includes parts of two states—Missouri and Illinois—uses three different fuels: one special gasoline blend required on the Missouri side, a different special gasoline blend required on the Illinois side, and conventional gasoline allowed in the surrounding area. In some cases, special gasoline blends are used in only one area of the country. For example, California CBG, Arizona CBG, and the special blend used in Atlanta, Georgia, are not used anywhere else in the United States. Even relatively common special gasoline blends can create isolated markets if they are not used in nearby areas. For example, although 7.8 RVP is a relatively widely used blend, Pittsburgh, Pennsylvania, is the only city in its region that uses it. Similarly, the Chicago/Milwaukee area uses RFG North with ethanol, a gasoline blend used in the Northeast but not used elsewhere in the Midwest. Special gasoline blends accounted for more than half the gasoline consumed in the United States during the summer of 2001—the last year for which we had complete data. Figure 4 shows the relative consumption of the different gasoline blends then in use. Of the special fuel blends, RFG and 7.8 RVP blends together accounted for about 33 percent of the national gasoline market. California CBG and Arizona gasoline blends accounted for roughly 13 percent of total U.S. gasoline consumption. The remaining 6 percent of gasoline use was divided among four separate blends. (Figure 4 breaks these shares out as follows. California gasoline blends: AZ CBG, 3 percent; CA CBG, 3 percent; RFG/CA CBG, 7 percent. RFG blends: RFG North with ethanol, 3 percent; RFG North, 8 percent; RFG South, 10 percent.) While we have reported that there are 11 special blends used or handled during the summer of 2004, additional factors increase the total number of gasoline blends sold in the United States throughout the year to at least 45. 
First, although this report focuses on summer gasoline blends, at least three special winter-only gasoline blends are required to be used in areas of eight states. Use of these fuels requires that fuel terminals in these areas transition from the fuel that they use in the non-winter season to the required winter fuel. These blends contain an oxygenate to address winter carbon monoxide pollution. Second, because of consumer demand, many gasoline stations sell gasoline in three octane grades—both premium and regular grades are refined and shipped to terminals, where they are blended together to make a midgrade gasoline. Therefore, each gasoline blend is effectively two fuels from the perspective of pipelines and terminals. As a result, pipelines, fuel terminals, and retail gasoline stations carry multiple variations of the gasoline blends previously discussed. Third, gasoline blends differ regionally and seasonally because differences in outside temperatures require different blends to maintain vehicle performance. The primary difference among these blends is RVP. Refiners produce gasoline with higher RVP in cold conditions to allow cars to start and gasoline with lower RVP during warm conditions to improve vehicle operation, even in areas that use conventional gasoline. As a result of these differences, refiners routinely ship different fuels to different regions and also ship different gasoline blends seasonally, but special blends tend to compound these variations. One official with a major petroleum company reported that there were at least 45 different grades of gasoline used in the United States. A new ozone standard and deteriorating air quality may lead to an increased number of special gasoline blends and/or more use of these blends in the future. In 2004, EPA issued a final rule implementing a new, more stringent federal air quality standard for ozone that led to the identification of 138 additional counties in nonattainment or maintenance, as shown in figure 5. 
EPA officials we spoke with had no indication that states were planning to submit applications to use special blends in these areas, but they acknowledged that gasoline is viewed as an effective emissions control strategy and said that they expect some states to consider doing so. Oil company officials told us that officials from some states had approached them to discuss using special gasoline blends. Because states must begin preparing SIPs for the recently designated nonattainment areas, and because several of those states already have chosen to use special gasoline blends, it appears likely that states may seek approval to use such blends in more areas. Several other factors could also affect the number or use of special gasoline blends. State MTBE bans could force more areas of the country to shift from their current blend to an ethanol blend. In June 2004, EPA identified 19 states that had bans on the use of MTBE either in place or scheduled to phase in, though some of these states did not use MTBE. Worsening air quality in areas such as Atlanta and Baton Rouge may require the gasoline used in these cities to shift from a special blend to RFG, reducing the number of fuels. In addition, a new federal standard mandating reduced sulfur in all gasoline—including special blends—promises to improve the effectiveness of catalytic converters already present in most vehicles and could aid some areas in meeting federal air quality standards, potentially reducing the need for these fuels in some areas. During the course of our work, staff from EPA’s Office of the General Counsel stated that EPA could not deny an application to require the use of a special gasoline blend that addressed the four elements outlined in EPA’s 1997 guidance. 
They explained that EPA’s determinations often deferred to states’ evaluations in their applications that, under the Clean Air Act, section 211(c)(4)(C), no other measures that would bring about timely attainment exist, or that existing measures, such as vehicle inspection and maintenance programs, are unreasonable or impracticable. Further, staff with EPA’s Office of the General Counsel told us EPA could not reject an application on the basis of the potential impacts on gasoline supply or other regional effects on the gasoline market because such a rejection would be outside of EPA’s current authority. Several of the special fuels in use during 2004 were approved prior to the issuance of the 1997 guidance, and EPA officials reported that a variety of standards were used to evaluate applications. EPA’s most recent effort to examine special gasoline blends is consistent with EPA’s view that the agency does not have authority to reject a state’s application based on regional supply impacts or costs. In 2001, EPA released a staff white paper, in response to a presidential directive, examining whether there were options to maintain or improve environmental benefits while also improving the supply of fuels, such as gasoline. In that report, EPA examined a number of options to reduce the number of fuels available for states to choose from—similar to a gasoline menu. That report concluded that these options were beyond EPA’s statutory authority and would require legislative action to implement. The white paper also noted that it represented a first step in EPA’s response to the directive, but that significant additional analysis and study were required. EPA staff told us that there had been congressional debate regarding EPA’s authority during consideration of recent energy legislation, but that its authority had not changed as of May 2005. 
In the study, EPA identified a number of changes that it would make to ease the seasonal transition between gasoline blends used during different parts of the year. Staff also said that little, if any, additional work had been done since the 2001 study, in part because of EPA’s lack of authority to implement some of the actions outlined in the study. Special gasoline blends reduce emissions—particularly those involved in the formation of harmful ground-level ozone—by varying degrees, depending on the blend. The extent of reductions remains unclear, however, because the estimates have not been comprehensively validated through testing on current vehicles and emissions controls. According to EPA and others, these special gasoline blends have contributed to improvements in air quality seen in some parts of the country. The extent of their contribution to improvements relative to that of other contributing factors, such as reductions in power plant emissions, is somewhat uncertain because of the difficulties in isolating the effects of individual emissions reduction efforts, such as special gasoline blends, from other factors that may affect air quality. Over the past 15 years, a wide range of studies by EPA and others have concluded that changes to the properties of gasoline can substantially reduce emissions from automobiles. For example, in 1996, EPA concluded that RFG and low-RVP blends can both significantly reduce VOCs but that RFG offers greater promise in reducing NOx, CO, and toxics. The Air Quality Improvement Research Program (AQIRP), funded by the auto and oil industries, analyzed gasoline properties in detail and comprehensively tested a variety of gasoline blends in a range of vehicles between 1989 and 1992. 
This effort produced data regarding how the use of various gasoline blends affects emissions from then-current vehicles and concluded that changing certain properties of gasoline, in particular reducing RVP and sulfur, was effective in reducing emissions of pollutants such as NOx, CO, and hydrocarbons such as unburned fuel. According to EPA officials, using special gasoline blends is attractive to states because the blends can offer immediate emissions reductions from vehicles already on the road. EPA and others have used the results of these studies to develop models that provide detailed emissions estimates for several of the special gasoline blends currently in use. These models have been used by states in their SIPs to estimate the expected emissions from requiring the use of special gasoline blends instead of conventional gasoline. As shown in table 1, the models estimate that special gasoline blends reduce emissions by varying degrees. California’s gasoline—the blend formulated to reduce emissions the most—is estimated to provide the greatest level of emissions reductions, about 25-29 percent for VOCs and about 5.7 percent for NOx. RFG is estimated to provide about the same level of VOC reduction, a lower NOx reduction of about 0.7 percent, but also a 10-20 percent reduction in CO. The special gasoline blend most commonly used in areas not using conventional gasoline—gasoline with an RVP of 7.8—is estimated to reduce VOC emissions by 12-16 percent and NOx by about 0.7 percent. In addition to the pollutants listed in table 1, RFG and California’s cleaner burning gasoline also reduce emissions of some toxics such as benzene. However, the extent of emissions reductions associated with various gasoline blends remains somewhat uncertain. GAO, the National Research Council, and others have identified concerns about the overall accuracy of emissions estimates. EPA has addressed some of the concerns about emissions estimates. 
In one effort to address concerns about the validity of emissions estimates, EPA sponsored a study that compared emissions estimates to measured emissions data obtained between 1992 and 2001. The study looked at pollutant concentration data from tunnels and vehicle exhaust data collected from vehicles on roadways using special remote sensing devices at a limited number of sites using a limited range of gasoline blends. As a result, EPA found that the observed emissions data conflicted with emissions estimates; in some cases the test data were higher than predicted, while in others they were lower. Despite this effort, EPA has not comprehensively studied how various gasoline blends affect vehicle emissions since the early 1990s—when the AQIRP comprehensively tested a variety of gasoline blends in a range of vehicles. Since then, there have been advances in emissions control technology. Consequently, to the extent that emissions from vehicles with newer emissions control technology differ from those of older vehicles, emissions estimates may become less certain, especially as vehicles with the newer technology compose a growing portion of the U.S. fleet. EPA officials acknowledge that their efforts since the early 1990s to validate emissions estimates have not allowed them to fully validate how special fuel blends operate in a full range of vehicles of varying vintages and designs over their operating lifetimes. EPA officials told us that they believe such a detailed analysis would improve their understanding of how special gasoline blends affect emissions, but said that they have not had sufficient budgetary resources to collect the needed data to support such an analysis. In addition to these broad concerns, there is also controversy over the emissions benefits associated with special blends containing oxygenates, which were initially added to gasoline to reduce the emissions of carbon monoxide and other pollutants. 
Although there appears to be agreement that oxygenated fuels help reduce emissions of CO from older vehicles, recent studies indicate that the emissions benefits for newer vehicles are questionable. For example, AQIRP, the National Science and Technology Council, and others have reported that improvements in emissions controls on newer vehicles, such as oxygen sensors and computer-controlled emissions systems, may now automatically reduce emissions of CO and other pollutants and may negate many of the benefits of adding oxygenates. Further, some experts have concluded that adding oxygenates to gasoline may increase emissions of NOx and VOCs and may contribute to increased levels of ozone. As a result, some states, including California, New York, and Georgia, have requested waivers from EPA to allow them to use fuel that does not contain an oxygenate. The state of California stipulated in its waiver application that its fuel reduces emissions to a greater extent than federal RFG and that the oxygenate requirement has impeded its efforts to reduce ozone. To date, EPA has not granted any of these waivers. Recently, Congress and others have considered expanding the use of ethanol in gasoline for other reasons, including to benefit U.S. farmers and to reduce the country’s reliance on foreign oil. EPA and other experts have concluded that improvements in air quality in some parts of the country are at least partly attributable to the use of special gasoline blends. In 2004, EPA reported that ground-level ozone has decreased over the past 10 to 25 years and that these reductions resulted, at least in part, from emissions control programs that include requirements to use special gasoline blends. Further, EPA and other experts concluded that special gasoline blends, such as RFG and low-RVP blends, are effective strategies for states to reduce ozone pollution. 
In addition, a research effort funded by AQIRP found that reducing RVP decreased peak ozone in several cities and would continue to provide benefits for years to come. In addition, the National Research Council reviewed EPA data and found that average ozone levels dropped by about 1 percent coincident with reduced emissions of VOCs, NOx, and CO from on-road vehicles, which fell by 31 percent, 2 percent, and 20 percent, respectively. Based on these and other data, the National Research Council concluded that improvement in air quality is likely attributable, at least in part, to recent improvements in gasoline properties. Despite the conclusions that special gasoline blends have contributed to improved air quality, findings specifically linking air quality improvement to the use of special gasoline blends are limited and incomplete because of the inherent difficulties in isolating the effects of special gasoline blends from other efforts to improve air quality. Studies examining the effect of special gasoline blends on air quality noted that attributing a change in ozone levels to the use of a special gasoline blend would be difficult. In particular, experts from EPA, the National Science and Technology Council, and the National Research Council have determined that relating trends in the levels of ground-level ozone to trends in emissions and to emissions-control policies can be challenging because of the confounding effects of other variables, including the effects of other control efforts and meteorological fluctuations. For example, the National Research Council noted that since the 1990s—when special gasoline blends became widely used—several other efforts to reduce emissions from vehicles have been made that could also explain changes in air quality, such as the addition of enhanced emissions-control systems and improvements in inspection and maintenance programs in some areas. 
During this time, EPA and the states have also undertaken efforts to reduce emissions from electric utilities, chemical manufacturing, and other stationary sources that could have contributed to the improvements. Further, because ozone is more readily created when VOCs, NOx, and CO react in sunny and hot weather, meteorological fluctuations affect the relationship between emissions and ozone levels. For example, EPA has identified cases where air quality improved, but the improvement was largely due to better weather (more air circulation, lower amounts of heat and sunlight, and other factors). According to the National Research Council and others, determining how much air quality improvement is attributable to any particular emissions control program, including special gasoline blends, would require the collection of high-quality, long-term data on air pollution, on other control measures, and on weather. The increasing number of special gasoline blends has made it more complicated and costly to supply gasoline, elevating the risk of localized supply disruptions. Producing special gasoline blends can require changes at refineries, making it more complicated and costly to produce gasoline. Special blends also add to the number of fuels shipped through pipelines, reducing the efficiency of the pipelines and raising costs. In addition, because most fuel terminal tanks were built before the proliferation of blends, they are often too large and too few to efficiently handle the larger number of smaller batches of special gasoline blends and, as a result, effective storage capacity has fallen. Further, in some cases, the proliferation of blends has reduced the supply options available to some retailers, making them more susceptible to supply disruptions. 
Producing some special gasoline blends sometimes requires refineries to invest in additional refinery units, making the refineries more complex, or reduces their capacity to make gasoline. For example, producing cleaner-burning fuel with lower levels of toxic and other emissions, such as RFG or CBG, has required some refiners to install specialized units that remove sulfur and benzene during the refining process. Similarly, production of low-RVP gasoline requires that refiners leave out the lightest components typically included in conventional gasoline. Separating these components, or converting them to ones that can be used in these blends, may require additional refinery units. If the components are not immediately used in gasoline at that refinery, they may be stored, used in less valuable fuels such as diesel or jet fuel, or shipped to other refineries that can use them. The removal and additional processing of these components can decrease the amount of gasoline a refinery can produce. For example, officials from one California refinery told us that their refinery could produce 12 percent more volume if it produced conventional gasoline rather than California gasoline because conventional gasoline uses more of the components that are typically generated in the refining process. Adding refinery units and losing refinery capacity can increase the overall costs of refining gasoline. Manufacturing low-RVP fuel generally involved reducing the use of some components and, as a result, was less costly than making the cleanest-burning blends, which required more significant changes. Specifically, in 1996, EPA estimated that low-RVP blends cost 1-2 cents per gallon more to make than the conventional gasoline at the time. 
In contrast, in 2003, the Energy Information Administration (EIA), within the Department of Energy, estimated that blends formulated to meet the most stringent standards, such as oxygenated California gasoline, cost 5-15 cents more per gallon to make than the conventional gasoline required at the time and that RFG generally cost 2.5-4 cents more per gallon to make. In addition, the use of oxygenates in blends such as RFG further increases the complexity and cost of the refining process because refiners must either invest in equipment to produce oxygenates from crude oil (in the case of MTBE) or purchase these components from other sources. MTBE is generally less expensive than ethanol as an oxygenate but has raised water quality concerns. As described earlier, ethanol is generally shipped by truck or rail, stored separately from other gasoline components, and blended just before gasoline is sent to retail stations. The higher cost of purchasing ethanol during the period of our analysis, together with these separate handling procedures, adds to the total cost of making ethanol-blended gasoline. Additionally, because ethanol has a high RVP, more components must be removed from ethanol-blended gasoline than from MTBE-blended gasoline to meet specifications for RVP. Removing these components and reprocessing them, or diverting them to other products, increases the cost of making ethanol-blended gasoline. Shipping gasoline on a pipeline requires a great deal of coordination among refineries, pipelines, and terminals to maintain pipeline flows while fuels are being added and withdrawn. Pipeline operators told us that they develop schedules of when individual shipments (called batches) will occur at least 1 month in advance; however, some changes to this schedule may occur up to the date when a product is placed on the pipeline to adjust for, among other things, the need for more of a specific gasoline blend in some locations. 
On the day of shipment, pipeline operators precisely coordinate when refineries or other shippers add or “inject” fuel into the pipeline and when fuel is taken off of the pipeline, along with other aspects of operating the system. Companies shipping fuel on the pipeline may request to keep their products isolated from others (a segregated batch) or may choose to combine their product on the pipeline with other blends meeting similar or identical product specifications (a fungible batch). Because of the large number of gasoline blends, and because some shippers require segregated batches, the number of fuels shipped in pipelines has increased dramatically in recent years. For example, one pipeline company noted that in 1970 it shipped 10 different products on its system over the entire year, whereas in 2004 it shipped 128 (including distinct blends and segregated products). The increased number of special gasoline blends has reduced the effective capacity of the nation’s petroleum products pipeline infrastructure because the pipelines are generally operated at slower speeds to accommodate more, and smaller, batches of gasoline while keeping the different blends separate. The speed at which centrally controlled pumps move product along pipelines—typically between 3 and 8 miles per hour—can be affected by a number of factors, including the volume of product being shipped relative to the pipeline’s capacity, the size of batches, and the availability of terminal storage along the pipeline route. Several pipeline operators told us that, prior to the introduction of special gasoline blends, they shipped many fewer products and much larger batches than they do now. Further, they said that shipping smaller volumes can require them to slow or stop the pipeline to allow shippers to inject or withdraw individual fuels at fuel terminals or other locations. 
Reductions in the amount of fuel that a pipeline can transport raise the average cost of moving gasoline. The increased number of fuels and fuel types shipped on pipelines has also increased losses and costs associated with the mixing of fuels. Two types of fuel mixtures occur at the interface between batches on pipelines: downgrading and transmix. Downgrading occurs when two similar fuels mix but the resulting mixture no longer meets the more valuable product’s specification. For example, if high-octane and regular-octane gasoline mix, the downgraded gasoline may be sold only as lower-priced regular gasoline. Transmix results when two dissimilar fuels mix and the fuel cannot be used without reprocessing. For example, if diesel fuel and gasoline mix, the transmix must be processed to separate the fuels into usable products. Similarly, because MTBE is banned in some areas, if gasoline blends containing MTBE come in contact with other fuels, the mixed fuel is considered transmix and must be reprocessed to remove the MTBE before it can be used. To minimize losses associated with downgrades and transmix and still maintain efficiency, pipelines generally set a minimum batch size. Several pipeline operators reported that they have witnessed increased losses and costs due to downgrades, and because more fuel requires reprocessing, as the number of special gasoline blends has increased. In addition, according to some pipeline company officials, because some gasoline blends are used in only one city or only in some areas served by a pipeline, shippers incur additional costs if these gasoline blends are not taken off the pipeline at the right location. For example, one pipeline operator told us that RFG with MTBE shipped in Midwest pipelines cannot be used without costly reprocessing if it is shipped past certain points on these pipelines because no regions beyond these points allow the use of RFG with MTBE. 
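The logic behind minimum batch sizes can be sketched numerically. Assuming, purely for illustration, a fixed volume of mixed fuel at each batch boundary (the interface volume below is a hypothetical figure, not an actual pipeline statistic), the fraction of a batch lost to downgrading or transmix shrinks as batches grow:

```python
# Illustrative sketch of interface (downgrade/transmix) losses as a share
# of batch volume. INTERFACE_BBL is a hypothetical figure chosen for
# illustration, not an actual pipeline statistic.

INTERFACE_BBL = 200.0  # hypothetical barrels of mixed fuel per batch boundary

def interface_share(batch_bbl: float) -> float:
    """Fraction of a batch lost to mixing at its boundary."""
    return INTERFACE_BBL / batch_bbl

def min_batch_for_loss(max_share: float) -> float:
    """Smallest batch that keeps interface losses at or below max_share."""
    return INTERFACE_BBL / max_share

print(interface_share(25_000))   # large batch: losses are a small fraction
print(interface_share(5_000))    # small batch: losses are a larger fraction
print(min_batch_for_loss(0.01))  # batch size needed to cap losses at 1 percent
```

This is why more numerous, smaller batches raise per-gallon costs: the mixed volume at each boundary is roughly fixed, so it consumes a larger share of each smaller batch.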
In some instances, the pipeline may need to be slowed, or even stopped, to allow a special gasoline blend to be taken out of the pipeline. The increased number of petroleum products generally, including special gasoline blends, and the need to keep them separated have reduced the storage capacity of some gasoline terminals, which can create difficulties during periods when gasoline supplies are disrupted. To ensure product quality, special gasoline blends must be stored in separate tanks. Several terminal operators told us that their terminals were built before the proliferation of special gasoline blends and were designed to handle fewer, but larger, batches of gasoline. Terminal operators told us that, because many of the special gasoline blends are shipped in smaller batches, the tanks used for these blends are often not filled to capacity. One terminal operator told us that some new storage tanks had been built in recent years. This operator went on to say that adding new storage capacity at existing terminals is often either prohibitively expensive or extremely difficult because of space limitations and the need to obtain federal, state, and local regulatory approvals. One terminal operator told us that the company has chosen not to carry one or more gasoline blends used in its area because the company’s existing tanks were insufficient and building additional tank capacity was too costly. For these same reasons, it is often difficult to build new terminals. Beyond these complications, terminal operators told us that the proliferation of special gasoline blends has also raised their costs by reducing their ability to fully utilize their existing tanks, which costs them the opportunity to store additional fuels, by forcing them to make additional investments to build more tanks, or both. 
In addition, terminal operators told us that reduced storage capacity at their facilities, combined with the increased number of fuels in the pipeline system, has made it more difficult to maintain adequate stockpiles of some gasoline blends. Several pipeline operators said that the interval between pipeline deliveries of a given fuel may be 10 days or longer if capacity is not available on the pipeline—requiring that many days’ worth of fuel be stored at the terminal. Increasing demand for gasoline, combined with this longer period between shipments and limited terminal storage, increases the likelihood that some areas will run out of gasoline while waiting for a shipment. One pipeline operator said that the terminals it served did not run out of gasoline during 1995-1996, but that now one terminal per month runs out of fuel. One terminal operator explained that running out of gasoline can be very harmful to their business because terminal operators rely on retailers and independent gasoline tanker trucks to regularly visit their terminals—visits that may not occur if supplies are inconsistent. In addition, the operator told us that, when tanks are pumped dry and later refilled, they can release up to 1 ton of VOCs, which contributes to pollution. While the terminal operators we spoke with said they are generally able to maintain sufficient gasoline storage, they can run short of some fuels when demand is high or pipeline deliveries are delayed or interrupted. One operator noted that the company raises its wholesale gasoline prices as its available supplies fall, in an effort to slow sales and avoid running out. The terminal operators we interviewed did not provide us data on the number of instances when they ran out of gasoline, but they said that the number has significantly increased in recent years. 
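The storage squeeze the operators describe follows from simple arithmetic: if deliveries of a blend arrive only every 10 days, a terminal must hold roughly 10 days of demand plus a safety margin or risk running dry. The figures below are hypothetical, chosen only to illustrate the calculation:

```python
# Illustrative days-of-supply arithmetic. All figures are hypothetical.

daily_demand_bbl = 3_000      # hypothetical terminal throughput, barrels/day
delivery_interval_days = 10   # operators cited intervals of 10 days or longer
safety_margin_days = 2        # buffer for delayed or short deliveries

# Tank capacity needed to ride out the interval between deliveries
required_capacity = daily_demand_bbl * (delivery_interval_days + safety_margin_days)
print(required_capacity)  # → 36000 barrels

# With a fixed tank, days of cover shrink as demand grows
tank_bbl = 30_000
print(tank_bbl / daily_demand_bbl)  # → 10.0 days of cover: no buffer left
```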
Operators of independent retail gasoline stations that buy from the wholesale markets told us that they have more limited supply options as a result of the presence of special gasoline blends. According to an industry representative, some gasoline retailers affiliated with, or owned by, large oil companies (so-called “integrated” oil companies, such as ExxonMobil and ChevronTexaco) receive their gasoline—referred to as branded gasoline—only from these companies, generally paying slightly more for it. In contrast, companies that are not affiliated with these integrated oil companies, referred to as independent retailers, typically purchase gasoline from a variety of suppliers including, but not limited to, integrated oil companies, and typically buy at the lowest price available from nearby fuel terminals. As a result of this and other factors, independent retailers said that they generally sell gasoline at a lower price than branded gasoline stations. According to some experts, the introduction of special gasoline blends may increase the market power of some refiners. In its 2001 white paper, EPA noted that the development of special blends limits competition in the refining sector because the markets for some blends are small and only a few refiners may choose to make them. Consistent with this view, independent retailers told us that they have had fewer choices in some markets near where special gasoline blends are required because some refineries and fuel terminals no longer sell gasoline for those markets, and that they have tended to pay higher prices in those areas. For example, one large independent retailer operating retail gas stations on the East Coast told us that the number of refineries producing gasoline for the market it serves fell from 12 to 3 after the introduction of special gasoline blends—leaving the retailer with fewer options to identify the lowest cost supplies. 
Special gasoline blends have also complicated the ability of some large entities to enter local gasoline markets. Officials with a large company that has entered several local gasoline markets across the country as an independent retailer told us that obtaining sufficient supplies at reasonable prices is more difficult in markets where special gasoline blends are used and that limited supply options have reduced the company’s ability to enter and compete in some of these markets. The plight of independent retailers is particularly pressing when traditional supplies are disrupted. The independent retailers that we spoke with said that their prices generally increase first and that they may not have access to the fuel supplies provided to branded retailers if supplies are disrupted. Before special gasoline blends, these independent retailers were able to truck fuel in from nearby cities or neighboring states; however, because some gasoline blends may not be used anywhere else, or may be used only hundreds of miles away, trucking in fuel is a more difficult and costly option today. For example, several industry officials noted that, if supplies of California gasoline are disrupted, they would expect prices to rise and that it could take weeks for additional supplies to arrive. They said that nearby suppliers capable of blending California’s gasoline blend are generally operating close to their full capacity. In the event that these supplies are disrupted, additional supplies generally come from Western Canada, the Gulf Coast, the Caribbean, or farther away, because there are only a few refineries capable of making this special gasoline blend and, as a result, supplies could take 3 weeks or more to arrive. Among the 100 cities we examined, the highest wholesale gasoline prices tended to be found in cities that used a special gasoline blend not widely available in the region or that is more costly to make than other blends. 
Cities that are far away from major refining centers or other sources of gasoline also tended to have high prices. Prices also tended to be more volatile in cities having one or more of these characteristics. Other studies have also found higher and/or more volatile prices in some cities that use special gasoline blends. Greater complexity and higher refining, transportation, and storage costs associated with supplying special gasoline blends have likely contributed to increased gasoline prices overall, and for specific special blends, but it is not possible to conclusively determine the extent to which special gasoline blends have caused the higher prices and greater volatility found in specific cities. We examined data from 100 selected cities to determine how prices varied across areas that use special gasoline blends versus conventional gasoline and found that, with some exceptions, the highest and most volatile gasoline prices tended to be found in cities that used special gasoline blends that are uncommon or particularly expensive to make, or in cities that are long distances from major refining areas. Each of these factors tends to isolate a city from the overall gasoline market by limiting the available supplies of gasoline from other areas in the event there is a supply shortfall in that city. With regard to special gasoline blends, the data show that most of the 20 cities with the highest average prices over about the past 4 years (December 2000 through October 2004) used special gasoline blends, most of them formulated to meet stringent emissions standards. In many cases, these cities used a fuel that is not widely used outside their area, or in some cases is unique to that city or state. For example, the five California cities in the data set are all in the top 20 cities with respect to gasoline prices. 
California’s gasoline is the cleanest-burning gasoline and, in order to make it, California’s refineries have invested substantial capital in new refining processes. Further, only a few refineries outside of California routinely make California gasoline, the closest of which is in Northern Washington. The uniqueness of California’s gasoline has been noted by many sources as likely contributing to California’s high gasoline prices relative to the rest of the country. For the period we examined, the five cities we looked at in California had average prices ranging from about 24 to 26 cents per gallon more than the city with the lowest price (Meridian, Mississippi), which uses conventional gasoline and is located near the large refining center in the Gulf Coast. The table in appendix II shows the price data and gasoline blend types for each of the 100 cities we evaluated. Some of the cities with the highest prices used conventional gasoline year-round, but most of these are far from major refining areas or are located on or near a single smaller pipeline. Average prices in these top 20 cities were between 14 and 41 cents per gallon more than in the city with the lowest price. Using ethanol as an additive to gasoline is associated with higher wholesale gasoline prices. To evaluate this, we examined national average prices for gasoline blends containing ethanol. For example, for the nation as a whole, average prices for conventional gasoline with ethanol were about 4 cents per gallon higher than conventional without ethanol over the time period we analyzed. The switch to using ethanol, as opposed to MTBE, was also associated with higher gasoline prices. For example, in the years 2001-2003, during which California phased out MTBE and phased in ethanol, the average summer price of gasoline with ethanol was between about 4 and 8 cents per gallon more than the price of gasoline with MTBE. 
Similarly, over the period 2001-2004, the average summer price for federal reformulated gasoline with ethanol was between about 6 and 13 cents per gallon more than for federal reformulated gasoline with MTBE. In contrast to the highest-priced cities, the 20 cities with the lowest average wholesale gasoline prices over the period typically used common gasoline blends and/or were located near a major refining center—most often near the Gulf Coast, the largest refining center in the country in terms of both numbers of refineries and total refining capacity. For example, among the 20 cities with the lowest prices, 8 used conventional gasoline—the most widely available gasoline blend. Conventional gasoline is used extensively across the United States, and most cities that use it are surrounded by areas using the same gasoline. Another 9 cities with the lowest prices used 7.8 RVP gasoline—the most widely used of the special blends and the one formulated according to the least stringent emissions standards. Most of the 7.8 RVP gasoline is used in areas close to the Gulf Coast refining center. In addition, refiners told us that making 7.8 RVP gasoline is simpler and less costly than some of the other blends, so it may be more available from refineries in the event of a local supply shortfall. The other 3 cities with the lowest prices used less common special blends but are all close to the largest refining center, the Gulf Coast and, therefore, have many more potential supply options than more isolated cities do. We found similar results with regard to the volatility of gasoline prices. For example, 18 of the 20 cities with the most volatile prices used special blends of gasoline, and many of these cities were also among the highest-price cities. In contrast to the cities with relatively high price volatility, 17 of 20 cities with the lowest volatility use either conventional or 7.8 RVP gasoline. 
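A comparison along these lines can be reproduced with ordinary summary statistics. The sketch below uses invented price series, not our actual city data; it computes each city’s mean wholesale price and its volatility, measured here as the standard deviation, and ranks cities on both, mirroring the approach described above:

```python
# Illustrative ranking of cities by average price and by volatility
# (standard deviation). The price series are invented; the actual analysis
# used wholesale prices for 100 cities, Dec. 2000 through Oct. 2004.
from statistics import mean, stdev

prices = {  # hypothetical cents-per-gallon observations
    "City A (CA blend)":     [165, 180, 172, 195, 160],
    "City B (conventional)": [140, 142, 141, 143, 139],
    "City C (7.8 RVP)":      [146, 150, 148, 151, 147],
}

by_avg = sorted(prices, key=lambda c: mean(prices[c]), reverse=True)
by_vol = sorted(prices, key=lambda c: stdev(prices[c]), reverse=True)

print("highest average price:", by_avg[0])
print("most volatile price:  ", by_vol[0])
```

In this toy data, the city using the unique, costly-to-make blend tops both rankings, which is the pattern the report describes, though the real analysis also had to contend with confounding factors such as distance from refining centers.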
However, while prices for special blends tend to be higher and more volatile than prices for conventional gasoline, the available data did not allow us to isolate the effects of specific special gasoline blends on gasoline prices or to definitively establish a causal link between specific special blends and price volatility. Specifically, we did not have sufficient data to control for all other potential contributing factors—such as the distance from cities to the sources of gasoline supply, or other specific features of these cities that might influence prices regardless of the blend of gasoline used. We reviewed the literature associated with special gasoline blends and gasoline prices and found a number of studies done by government, academic, and private entities. The results and conclusions of these studies were largely consistent with our findings. For example, a recent EPA study found that high prices and price volatility are most acute in isolated markets, particularly those using special gasoline blends. The study also pointed out that some states had adopted specific gasoline blends in an attempt to use a blend that had a lower refining cost than federal reformulated gasoline. EIA also studied these blends and concluded, among other things, that the increasing number of distinct gasoline blends has reduced the flexibility of the supply and distribution system to respond to unexpected changes in supply and demand for gasoline. EIA further pointed out that, in some cases, states have chosen low-RVP gasoline blends in an attempt to achieve lower gasoline prices than if they had used federal reformulated gasoline, and may inadvertently have added strain to the distribution system, leading to greater potential for price volatility. A number of other academic and private studies found similar results. 
There is a broad consensus among the experts and others we spoke with that the proliferation of special gasoline blends has contributed to increased and more volatile gasoline prices. The studies we reviewed also came to similar conclusions. Further, the greater complexity and higher refining, transportation, and storage costs associated with supplying special gasoline blends have almost certainly resulted in increased prices or volatility, either because of more frequent or severe supply disruptions, or because higher costs are likely passed on, at least in part, to consumers. For example, depending on the pipeline company, costs associated with downgrades or transmix are recovered from customers. At least some of these costs are, in turn, likely to be passed down the supply chain and eventually on to consumers of gasoline. Similarly, the costs incurred to install new processes to make special gasoline blends are likely passed on, at least in part, to consumers because refining companies would not make these investments without a reasonable expectation of a return on their money. While it is, therefore, almost certain that special gasoline blends have been a contributing factor to higher gasoline prices, it is not possible with the data available to us to conclusively determine the extent to which these blends have caused the higher prices and greater volatility found in specific cities or to rule out other potentially contributing factors. Such other factors may include specific supply infrastructure problems in or around these cities that would affect gasoline prices regardless of the blend. For example, state and industry officials in California told us that marine terminals for off-loading gasoline and other petroleum products are in short supply in California, which constrains the ability of suppliers in the state to receive these products from outside the state in the event of a local supply shortfall. 
These constraints would potentially contribute to higher gasoline prices regardless of which blend is used. Another potential factor that might influence gasoline prices independently of gasoline blends is the level of competition in the petroleum products industry. For example, in a recent GAO report, we found that oil company mergers had contributed to a 1- to 2-cent-per-gallon increase in conventional gasoline prices in the 1990s and an increase of as much as 7 cents per gallon for California’s special gasoline blend. In addition, there may be other such factors at play that we do not observe, so we cannot definitively determine the precise extent to which observed prices are the result of the proliferation of special gasoline blends. Special gasoline blends have reduced emissions and helped contribute to improved air quality in some parts of the country. Using special gasoline blends to achieve air quality standards is attractive to states because the blends offer immediate, though varying, emissions reductions from the vehicles already on the road. Unfortunately, EPA’s knowledge about the emissions generated when special gasoline blends are burned is outdated. Much has changed regarding vehicle and emissions control technologies since special gasoline blends, including those with ethanol, were last comprehensively tested in automobile engines. However, EPA and the states continue to rely on models built largely around these dated findings when evaluating whether to allow states to use special blends as a component in their efforts to improve air quality. Given the significant changes in vehicles and fuels, EPA should have better information about how current fuels affect the vehicles currently on the road. In addition, Congress should have better information regarding the effectiveness of these blends, particularly those containing oxygenates such as ethanol, to aid in setting policy on fuel blends and the use of oxygenates. 
Although special blends have helped reduce emissions and improve air quality, their introduction appears also to have divided the gasoline market, converting what had been closer to a single national commodity market into islands of smaller, more local markets for blends of gasoline that are typically not interchangeable. Because of octane, seasonal, and other differences, each additional special blend can require pipelines and fuel terminals to handle several additional blends. Overall, this transformation of the gasoline market has complicated the supply infrastructure, increased production and delivery costs, and, in some cases, reduced the availability of gasoline. The impacts of the proliferation of special gasoline blends are most evident when there is a disruption in the supply chain, such as when a refinery or pipeline is shut down. In these instances, localities using a blend different from the gasoline used in nearby areas must seek replacement supplies from farther away, leading to delays that likely cause higher and longer price spikes until these supplies arrive. In general, it is likely that gasoline prices are higher now than they would be if gasoline were closer to a single commodity. In light of the opposing effects of environmental benefits and negative market implications, an ideal policy for approving the use of special gasoline blends would balance these effects. However, each decision involves trade-offs that stakeholders may not value equally. Specifically, different stakeholders may attach varying degrees of importance to the environmental benefits or to the impacts on gasoline supply infrastructure. Further, individual states whose actions affect the entire regional supply infrastructure may not fully take those impacts into account or, in some cases, even accurately predict the impact on their own gasoline supply.
With the 8-hour ozone rule and other regulatory changes likely to lead to more applications to use special gasoline blends, balancing the emissions effects of specific gasoline blends against the implications for supply and price will be even more important in the coming years. While EPA is currently authorized to approve state applications to use special gasoline blends, the agency cannot effectively weigh environmental and supply considerations because it does not have authority to deny state requests to use these blends on the basis of regional supply or price considerations and because its information on the environmental benefits is dated. To provide a better understanding of the emissions impacts of using special gasoline blends and these blends’ impacts on the gasoline supply infrastructure, we recommend that the EPA Administrator direct the agency to take the following four actions: (1) work with states and other stakeholders to comprehensively analyze how various gasoline blends affect the emissions of vehicles that comprise today's fleet, including how overall emissions are affected by the use of ethanol and other oxygenates; (2) use this updated information to revise the emissions models that states use to estimate the emissions and air quality benefits of these fuels and provide this information to Congress; (3) work with states, the Department of Energy, and other stakeholders to develop a plan to balance the environmental benefits of using special gasoline blends with the impacts on gasoline supply infrastructure and prices, and report the results of this effort to Congress; and (4) work with the states, the Department of Energy, and any other appropriate federal agencies to identify what statutory or other changes are needed to achieve this balance and report these findings to Congress and request that Congress provide these authorities to the appropriate federal agency or agencies. We provided a copy of our draft report to EPA for comment. 
The agency did not comment on our findings or recommendations but did provide technical comments that we have adopted, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other appropriate congressional committees and the Administrator of EPA. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in appendix II. To determine the extent to which special gasoline blends are used in the United States and how, if at all, this use is expected to change in the future, we reviewed related literature, reviewed data on the use of these fuels, and interviewed government and other officials. Specifically, we reviewed reports on the presence and use of special gasoline blends by the Environmental Protection Agency (EPA), the Energy Information Administration (EIA), and others. We also examined data on the use of special gasoline blends provided by EPA, ExxonMobil (a commonly mentioned source of information on the use of special gasoline blends), the Oil Price Information Service, state environmental agencies, and others. In addition, we interviewed federal and state government officials, academic and industry experts, and industry officials. Specifically, we interviewed officials with the EIA and EPA in Washington, D.C., as well as officials with EPA’s Office of Transportation and Air Quality in Ann Arbor, Michigan, and officials in each of the 10 EPA regional offices.
We also interviewed representatives from industry trade associations, including the American Petroleum Institute, the Renewable Fuels Association, the National Petrochemical Refiners Association, the Association of Oil Pipelines, the National Association of Convenience Stores, the Alliance of Automobile Manufacturers, and the Society of Independent Gasoline Marketers, as well as representatives from the National Governors Association. In addition, we interviewed academic and industry experts and industry officials from companies involved in refining, terminal operations, and pipeline operations, as well as from large oil companies. We also conducted site visits in California, Louisiana, New Jersey, Pennsylvania, and Texas—states with large refining sectors and/or organizations with experience producing and using special gasoline blends. To document what EPA and others have determined regarding the role of special gasoline blends in reducing vehicle emissions and improving overall air quality, we reviewed related literature; interviewed federal, state, and other officials; and examined emissions estimates provided by EPA. Specifically, we examined reports on the emissions impacts of special gasoline blends prepared by EPA, the Auto/Oil Air Quality Improvement Research Program (AQIRP), the National Research Council, state environmental agencies, and others. In addition, we interviewed federal and state government officials, academic and industry experts, and industry officials.
Specifically, we interviewed federal officials at EPA and EIA; staff at state environmental offices; researchers associated with the National Academy of Sciences and the National Research Council; and representatives from industry trade and health advocacy associations, including the American Petroleum Institute, the Renewable Fuels Association, the National Petrochemical Refiners Association, the Association of Oil Pipelines, the National Association of Convenience Stores, the Alliance of Automobile Manufacturers, the Society of Independent Gasoline Marketers, and the American Lung Association. In addition, we interviewed academic and industry experts and industry officials from companies involved in refining, terminal operations, and pipeline operations, as well as from large oil companies. To assess the reliability of emissions analyses, we reviewed the analyses’ overall design and methodologies, including assumptions and inputs to modeling. Automobiles emit a number of harmful pollutants; however, some have been identified as potentially more significant than others. The Clean Air Act authorizes EPA to mitigate potentially harmful concentrations of major criteria pollutants, including carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), ozone (O3), particulate matter (PM), and lead (Pb). GAO focused its analysis on VOC, NOx—important precursors to ozone—and CO emissions because the transportation sector is responsible for a large fraction of VOC, NOx, and CO emissions in the United States and, as a result, the Clean Air Act and EPA have specified the reduction of these pollutants through fuel control programs. To identify what effects, if any, special gasoline blends have on gasoline supply in the United States, we examined literature reporting on the effects of special gasoline blends on gasoline supply and interviewed government officials and a wide cross section of industry participants.
Specifically, we interviewed agency officials with EPA, EIA, the Federal Trade Commission, and state regulatory agencies. In addition, we interviewed representatives from industry trade associations, including the American Petroleum Institute, the Renewable Fuels Association, the National Petrochemical Refiners Association, the Association of Oil Pipelines, the National Association of Convenience Stores, the Alliance of Automobile Manufacturers, and the Society of Independent Gasoline Marketers. We also interviewed petroleum industry officials from companies involved in refining, terminal and pipeline operations, and marketing, including senior industry officials from several integrated oil companies, such as ExxonMobil and ChevronTexaco; five operators of large pipeline systems that carry multiple gasoline blends; several operators of terminals; and three large independent marketers of gasoline that buy wholesale gasoline and sell it to retail customers. We also conducted site visits in California, Louisiana, New Jersey, Pennsylvania, and Texas—states with large refining sectors and/or organizations with experience producing and using special gasoline blends. To determine how these blends affect gasoline prices, we examined the literature on gasoline prices, interviewed industry officials and experts, and analyzed wholesale gasoline price data. We reviewed reports on the use of special gasoline blends and gasoline prices prepared by EPA, EIA, and others. We also interviewed government officials and industry experts, including federal officials at EPA and EIA; staff at state environmental offices; academic and industry experts; petroleum industry officials from companies involved in refining, terminal operations, and pipeline operations, as well as from large oil companies; and representatives of trade associations.
In addition, we evaluated data on wholesale gasoline prices in 100 cities provided by the Oil Price Information Service (OPIS), as well as data on national average prices from the same source—these national data covered all the terminals in the country for which OPIS collects data. The data were weekly average prices from terminals selling gasoline at wholesale and covered the period from December 2000 through October 2004. In choosing which cities to evaluate, we first selected all cities on major pipelines. Then we selected the largest cities in each state and in each contiguous area that used a special gasoline blend. In so doing, we chose at least one such city from each contiguous area in the United States that we determined used a special blend of gasoline. Then, we chose cities in areas that use conventional gasoline, using similar criteria—every conventional-gasoline city chosen was the largest city in its respective state that was on a major pipeline. We did not estimate an econometric model to try to isolate the effects of specific special blends because we felt we lacked sufficient data to control for all other potential contributing factors—such as specific features of these cities that might influence prices regardless of the blend of gasoline used or the degree of competitiveness in the gasoline supply industry. Instead, we ranked the 100 cities according to the mean of their gasoline prices to determine if there were consistent patterns with respect to areas that use special gasoline blends versus areas that use conventional gasoline. To calculate the mean, we first created price differentials between each week’s price in each city and the price per gallon of West Texas Intermediate crude oil—a commonly used benchmark for world crude oil prices. These crude oil prices came from Platts, a common source for crude oil and petroleum product prices. 
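The price-differential and ranking calculation described above can be sketched as follows. This is a minimal illustration, not the analysis actually performed: the city names, price levels, and noise magnitudes are invented placeholders standing in for the OPIS and Platts data, and the report's 100-city sample is reduced to three hypothetical cities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weekly wholesale gasoline prices ($/gal) and WTI crude prices.
# All magnitudes below are illustrative assumptions, not the report's data.
weeks = 200
wti = 0.70 + 0.15 * np.sin(np.linspace(0, 8, weeks)) + rng.normal(0, 0.02, weeks)
cities = {
    "special_blend_city": wti + 0.30 + rng.normal(0, 0.05, weeks),
    "conventional_city_a": wti + 0.18 + rng.normal(0, 0.02, weeks),
    "conventional_city_b": wti + 0.17 + rng.normal(0, 0.02, weeks),
}

# Step 1: differential between each city's weekly price and the crude
# benchmark, which controls for price movement driven by the raw material.
diffs = {city: prices - wti for city, prices in cities.items()}

# Step 2: rank cities by mean differential (price level) and by the standard
# deviation of the differential over time (volatility).
by_mean = sorted(diffs, key=lambda c: diffs[c].mean(), reverse=True)
by_sd = sorted(diffs, key=lambda c: diffs[c].std(ddof=1), reverse=True)

# Step 3: a Welch-style t statistic comparing a city's mean differential with
# a benchmark city's (the report used two Texas cities as benchmarks).
def welch_t(x, y):
    return (x.mean() - y.mean()) / np.sqrt(
        x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y)
    )

t = welch_t(diffs["special_blend_city"], diffs["conventional_city_a"])
```

In this synthetic setup, the special-blend city tops both rankings and the t statistic is large, mirroring the kind of pattern the analysis was designed to detect; with real data, the rankings and test outcomes are of course an empirical question.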
For each city, we performed a statistical test comparing its average price with the average prices of two comparison cities in Texas. We also ranked the cities according to the standard deviations of their prices over time and looked for similar patterns. To calculate the standard deviations, we again created price differentials between each week’s price in each city and the price per gallon of West Texas Intermediate crude oil. Creating a differential between gasoline and crude oil prices controls for some volatility in gasoline prices that is caused by changes in the price of crude oil, the fundamental raw material input to gasoline. Then, we calculated the standard deviation over time of these price differentials for each city. The standard deviation is a common measure of the variability of data and, in this case, measures how much the prices in each city varied over time, controlling for crude oil prices. For each city, we performed a standard test for statistical significance of the difference in variability between that city and the city with the lowest standard deviation. In addition to the individual named above, Mark Bondo, Jon Ludwigson, Kristen Massey, John Mingus, Cynthia Norris, Frank Rusco, Barbara Timmerman, and Kim Wheeler-Raheb made key contributions to this report. In addition, important contributions were made by Diane Lund, Dawn Shorey, and Mary Welch.
The Clean Air Act, as amended, requires some areas with especially poor air quality to use a "special gasoline blend" that is designed to reduce emissions of volatile organic compounds (VOC) and nitrogen oxides (NOx) and that must contain an oxygenate such as ethanol. In less severely polluted areas, the Act allows states, with EPA approval, to require the use of other special blends as part of their efforts to meet air quality standards. GAO agreed to answer the following questions: (1) To what extent are special gasoline blends used in the United States, and how, if at all, is this use expected to change in the future? (2) What effect has the use of these blends had on reducing vehicle emissions and improving overall air quality? (3) What is the effect of these blends on the gasoline supply? (4) How do these blends affect gasoline prices? Although there is no consensus on the total number of gasoline blends used in the United States, GAO found 11 distinct special blends in use during the summer of 2004. Further, when different octane grades and other factors are considered, there were at least 45 different kinds of gasoline produced in the United States during all of 2004. The 11 special blends GAO found are often used in isolated pockets in metropolitan areas, while surrounding areas use conventional gasoline. The use of special blends may expand because a new federal standard for ozone may induce more states to apply to use them. To date, the Environmental Protection Agency (EPA) has generally approved such applications and does not have authority to deny an application to use a specific special blend as long as that blend meets criteria established in the Clean Air Act. EPA staff told us that there had been recent congressional debate regarding EPA's authority to approve special gasoline blends but that the bills had not passed. EPA models show that use of special gasoline blends reduces vehicle emissions by varying degrees.
California's special blend reduces emissions the most—VOCs by 25-29 percent and NOx by 6 percent compared with conventional gasoline—while also reducing emissions of toxic chemicals. In contrast, the most common special gasoline blend (used largely in the Gulf Coast region) reduces VOCs by 12-16 percent and NOx by less than 1 percent compared with conventional gasoline. The extent of reductions remains uncertain because the estimates rely, at least in part, on data regarding how special blends affect emissions from older vehicles, and these estimates have not been comprehensively validated for newer vehicles and emissions controls. Regarding air quality, EPA and others have concluded that improvements are, in part, attributable to the use of special blends. The proliferation of special gasoline blends has put stress on the gasoline supply system and raised costs, affecting operations at refineries, pipelines, and storage terminals. Once produced, different blends must be kept separate throughout shipping and delivery, reducing the capacity of pipelines and storage terminal facilities, which were originally designed to handle fewer products. This reduces efficiency and raises costs. In the past, local supply disruptions could be addressed quickly by bringing fuel from nearby locations; now, however, because the use of these fuels is isolated, additional supplies of special blends may be hundreds of miles away. GAO evaluated pretax wholesale gasoline price data for 100 cities and generally observed that the highest prices tended to be found in cities that use a special gasoline blend that is not widely available in the region or that is significantly more costly to make than other blends. There is general consensus that the increased complexity and higher costs associated with supplying special blends contribute to higher gasoline prices, either because of more frequent or severe supply disruptions or because higher costs are likely passed on, at least in part, to consumers.
The August 2000 DOT&E report summarized the progress, up to that date, of the National Missile Defense program and the adequacy of testing in the context of a deployment decision. At the time, the development program revolved around a series of ground and flight tests and was to have culminated in an initial operational capability by the end of fiscal year 2005. Formal test documentation called for a total of 16 integrated flight tests (system-level intercept attempts) through 2004, with three additional flight tests during Initial Operational Test and Evaluation in fiscal year 2005. DOT&E’s principal finding was that ground and flight tests completed up to that time did not provide results of sufficient fidelity to support a deployment decision. Indeed, when the deployment readiness review was held, there had been two failed intercepts out of three attempts. Furthermore, as stated in the DOT&E report, ground testing was not adequate to yield credible estimates of GMD system performance. DOT&E indicated that the current test program required augmentation, and probably significant funding increases, to demonstrate an operationally effective system for deployment. Accordingly, the report included a list of detailed recommendations for enhancing the test program. DOT&E made 50 specific, interrelated recommendations, which we organized into the following four overarching categories: Flight Testing, Ground Testing, Target Discrimination, and Programmatics. Although DOT&E categorized discrimination-related recommendations under the flight-testing and ground-testing categories, we created a separate category because discrimination was of principal concern to DOT&E at the time. DOD classified the full text of the recommendations. A detailed assessment indicating whether actions have been initiated by MDA and what their timing is relative to the September 2004 initial defensive capability date can be found in our June 2003 classified report on this subject.
A summary of MDA actions to address the DOT&E recommendations is provided below. Integrated flight tests of the GMD element are demonstrations of system performance during which an interceptor is launched to engage and intercept a target reentry vehicle (mock warhead) above the atmosphere. Many recommendations (20 of 50) in the DOT&E report pertain to aspects of integrated flight testing, such as deficiencies in flight-test complexity, operational realism, and artificialities. DOT&E’s concerns with the composition of target suites in flight tests for testing discrimination are discussed separately in the discrimination section of this report. DOT&E reported that increasing the scope of flight testing was essential to stress the limits of system design and to keep pace with system development. MDA is taking actions that address many of the shortcomings in flight testing DOT&E identified in its August 2000 report. Indeed, the development of the BMDS Test Bed—the agency’s key instrument for enhancing the existing test infrastructure to provide more realistic testing—should go far in addressing these DOT&E recommendations over the long term. Currently, flight tests are limited to target launches out of Vandenberg Air Force Base, California, and interceptor launches out of Kwajalein Missile Range in the western Pacific. For enhancing the capabilities of integrated flight testing, the test bed adds an interceptor launch site at Vandenberg Air Force Base; target launch facilities at Kodiak Launch Complex, Alaska; a GMD fire control node at Fort Greely, Alaska; an upgraded early warning radar at Beale Air Force Base, California; upgraded communication links among test bed components; and test infrastructure to support five additional intercept regions. The ship-based Aegis AN/SPY-1 radar is also available as a forward-deployed asset for early target tracking. 
In addition, the design and construction of a sea-based X-band radar, which would be positioned on a mobile platform in the Pacific, has been funded by MDA and is scheduled to be available for test bed utilization in late 2005. Other components of the BMDS Test Bed such as the Cobra Dane radar in Shemya, Alaska, and interceptors at Fort Greely will not actively participate in integrated flight tests at least through September 2007. Several August 2000 DOT&E recommendations call for integrated flight testing with Category B engagements and scenarios with multiple threatening reentry vehicles, both of which are expected to be common during operational missions. In addition, the recommendations call for integrated flight testing to be performed under increasingly difficult conditions and to be made more challenging through, for example, testing under various solar and weather conditions. Our analysis of the GMD test program as it pertains to flight test complexity, based on the March 2003 Developmental Master Test Plan for the GMD element and related program documentation, is summarized below. Flight Test Complexity—Actions Taken or Planned. The GMD test plan calls for Category B engagements beginning with Integrated Flight Test 15 (IFT-15), scheduled for the fourth quarter of fiscal year 2004. Furthermore, it indicates that Category B engagements would be a common occurrence of flight testing, because the weapon task plan would be generated from Beale or Aegis radar data. According to MDA officials, however, the decision to conduct future flight tests under Category B engagements is currently under review; the resolution will depend on the individual flight test scenario and the maturity of battle management assets. 
The GMD Developmental Master Test Plan also shows that an integrated flight test (designated IFT-22/23) in which two interceptors are launched against two attacking reentry vehicles (multiple simultaneous engagements) will be carried out in fiscal year 2007. Flight Test Complexity—Actions Not Taken or Planned. Although previous flight tests have been conducted under limited adverse conditions (light rain), flight tests to assess the actual effects of severe weather on system performance are not currently planned. According to the program office, the verification of system performance in adverse weather will be achieved through modeling and simulation grounded in technical measurements and flight test data. Furthermore, a nighttime engagement was attempted during IFT-10 (December 2002), but the failure of the kill vehicle to separate from the surrogate booster precluded collection of any applicable data. The recommendations on operational realism reflect limitations of the current test range. Currently, intercept tests are constrained to a single corridor and intercept region—target launches out of Vandenberg Air Force Base and interceptor launches out of the Reagan Test Site. As a result, flight-test engagement conditions are limited to those with low closing velocities and short interceptor fly-out ranges. DOT&E called for an expansion of engagement conditions and suggested adding more intercept regions and launch locations to achieve new intercept geometries, higher closing velocities, and longer ranges flown by the interceptor during flight testing. Operational Realism—Actions Taken or Planned. The expansion of the test range in the Pacific with the development of the BMDS Test Bed will have a significant impact on achieving operational realism in integrated flight tests. The Block 2004 Test Bed adds five intercept regions, target launches out of Kodiak Launch Complex, and interceptor launches out of Vandenberg Air Force Base. 
The combination allows for flight tests with new intercept geometries, additional crossing angles, higher closing velocities, and longer ranges flown by the interceptor. For example, IFT-15 (fourth quarter of fiscal year 2004) will be conducted with a target launch out of Kodiak, and IFT-17 (fourth quarter of fiscal year 2005) will be the first test with an interceptor launched from Vandenberg. Operational Realism—Caveats. The principal caveat to the associated MDA actions addressing operational realism is timing. By September 2004, one of the five new intercept regions, north of Reagan Test Site, will have been exercised. The remaining new intercept regions will not be exercised until after September 2004. For example, the two intercept regions off the west coast of the United States will be used in IFT-17 (fourth quarter of fiscal year 2005) and IFT-18 (fourth quarter of fiscal year 2005), respectively. A fourth intercept point will be exercised in IFT-21 (third quarter of fiscal year 2006). Finally, the fifth intercept point will be exercised as part of the multiple simultaneous engagement to be conducted in fiscal year 2007. The DOT&E recommendations on flight test artificialities—such as the removal of surrogates (test range assets emulating operational assets)—also reflect limitations of the current test range. The most artificial surrogate noted in the August 2000 DOT&E report was the placement of a C-band transponder on the target reentry vehicle. The transponder was essential for the execution of flight tests because, in conjunction with the test range radar (designated FPQ-14), it provided the only means of tracking the reentry vehicle with sufficient accuracy for executing the mission. DOT&E recommended that this artificiality be phased out and, more generally, that the system utilized in integrated flight tests be as functional and representative as possible. Artificialities—Actions Taken or Planned.
Use of the transponder/FPQ-14 radar combination as a surrogate radar for midcourse tracking is planned to be phased out. Indeed, IFT-15 (fourth quarter of fiscal year 2004) would be the first test that does not use this surrogate for mission execution. Rather, in integrated flight tests IFT-15 and beyond, midcourse tracking of the target suite would be achieved through the use of the Beale upgraded early warning radar or, pending ongoing analysis by GMD, the Aegis SPY-1 radar. The sea-based X-band radar, expected to be available in late 2005, could also be used for midcourse tracking. The removal of other surrogates is under way. For example, the short-range surrogate interceptor booster, which has been used in all flight tests to date, is scheduled to be replaced with two more operationally representative boosters beginning with IFT-14 (third quarter of fiscal year 2004). Artificialities—Actions Not Taken or Planned. MDA is not currently considering conducting flight tests under unrehearsed and unscripted conditions. Overall, the current DOT&E has looked favorably on MDA’s actions that address its recommendations, because the GMD test infrastructure is being significantly enhanced to allow for greater flight test complexity, greater operational realism, and fewer artificialities. We noted, however, that since DOT&E’s August 2000 assessment, MDA has reduced the extent of the flight test program, as follows: Integrated Flight Tests—Number of Cancellations. During the initial planning phases of the revised test program, MDA considered conducting four intercept attempts per year. But after considerable planning and contract evaluations, MDA limited the flight test program to no more than three intercept attempts per year because of overlapping test objectives and funding constraints. Significantly, the previous GMD test program at the time of the deployment readiness review called for a total of 19 integrated flight tests to be carried out through fiscal year 2005.
The current test program, however, now includes a total of 12 integrated flight tests through fiscal year 2005—because of the cancellation of IFT-11, 12, and 16, and the conversion of IFT-13 to booster tests (IFT-13A and 13B). To date, 8 of the 12 have been completed under largely the same test conditions that were critically assessed by DOT&E. In short, only two flight tests under improved test conditions with more representative hardware are planned to be conducted before September 2004, the time at which the initial defensive capability is scheduled to become available. Operational Testing—No Longer Required. The previous GMD test program also called for operational testing—Initial Operational Test and Evaluation—by the military services. Operational testing is a statutory requirement for DOT&E to independently determine the operational effectiveness and suitability of a deployed system for use by the warfighter. MDA does not plan to operationally test the Block 2004 GMD element before it is available for initial defensive operations. The September 2004 fielding is not connected with a full-rate production decision that would clearly trigger statutory operational testing requirements. Nonetheless, the Combined Test Force, a group of users and developers, plans tests that incorporate both developmental and operational test requirements in the test program. The 13 ground testing recommendations formulated by DOT&E in its August 2000 report address concerns in four areas: (1) realistic testing of kill vehicle functions in a Hardware-in-the-Loop (HWIL) facility, (2) ground-based lethality testing, (3) development of the system-level simulation known as the Lead System Integrator Integration Distributed Simulation (LIDS), and (4) Operations in a Nuclear Environment (OPINE) testing of kill vehicle components. In general, DOT&E’s recommendations on ground testing are not being addressed.
A number of the August 2000 DOT&E ground testing recommendations pertain to the hardware-in-the-loop testing of the kill vehicle built by Raytheon. In such testing, a test article is placed in an evacuated chamber to simulate an exoatmospheric environment, and infrared radiation of a simulated target scene is projected onto the kill vehicle’s sensors. DOT&E recommended “that an innovative new approach needs to be taken towards hardware-in-the-loop testing of the kill vehicle, so that potential design problems or discrimination challenges can be wrung out on the ground in lieu of expensive flight tests.” DOT&E stated that, in order to verify kill vehicle performance, kill vehicle testing should be executed using actual unit hardware in a hardware-in-the-loop facility capable of providing a realistic space environment and threat scene. MDA had taken steps to proceed with the design and construction of a hardware-in-the-loop laboratory at the Arnold Engineering Development Center, Tullahoma, Tennessee. Although an initial test capability had been planned for the 2004 time frame, testing at the Arnold Engineering facility has been deferred beyond Block 2004 because of Test Bed funding constraints. In response to a draft of this report, MDA stated that future investments and test events at this facility are subject to MDA internal management trade-offs among the numerous priorities associated with the whole missile defense program portfolio. DOT&E made recommendations in its August 2000 report for improving GMD lethality testing—testing aimed at assessing a kill vehicle’s effectiveness in destroying a reentry vehicle. Current test plans call for an approach whereby ground-based experiments are conducted to collect data to anchor simulations, which in turn are used to assess lethality performance.
Indeed, GMD expects to anchor such simulations with data derived from improved "sled testing," which uses full-scale targets in the newly developed Holloman Air Force Base Hypersonic Upgrade Program facility. However, there are no plans to conduct intercept flight tests of the interceptor's ability to destroy threat-representative targets that would fulfill Live Fire Test and Evaluation requirements. Rather, hit point information is collected from actual intercept tests and, in turn, used as input to simulations to determine whether the impact was lethal.

Another area of ground testing recommendations identified in the August 2000 DOT&E report concerned the development and use of system-level digital simulations. At the time of the deployment readiness review, the prime contractor's principal tool for assessing system performance over a broad range of scenarios was the end-to-end digital simulation known as LIDS. Because the development of the simulation was behind schedule and unavailable to support analyses of overall system performance, DOT&E reported that results obtained from it should not be used in making a deployment decision. DOT&E recommended that LIDS capability be "evolved to a fully validated, high-fidelity simulation." In addition, DOT&E recommended that LIDS be made flexible enough to permit independent use by test agencies.

MDA disagrees with the recommendations pertaining to LIDS. MDA views LIDS as one of many tools to analyze performance aspects of the GMD element and does not believe that LIDS needs to be developed to the level expected by DOT&E. According to the agency, a baseline of models and simulations is available that is intended to collectively support the entire range of analysis required to verify the capabilities of the GMD elements.
Furthermore, MDA asserts that the evolution of LIDS from Software Build 4 to its current Software Build 6.1.0 has improved the flexibility of the system to allow for sensitivity analyses by government users. According to MDA, extensive analysis using LIDS has been conducted at the Joint National Integration Center at Schriever Air Force Base, Colorado.

Finally, the remaining ground testing recommendations identified in the August 2000 DOT&E report focus on OPINE testing, which refers to the operation of individual GMD components in environments induced by nuclear explosions. Details can be found in the classified version of this report.

Target discrimination is a critical function of a missile defense engagement that requires the successful execution of a sequence of functions, including target detection, target tracking, estimation of the physical characteristics of tracked objects, and data fusion. DOT&E had two overarching concerns with the operational testing of the discrimination function:

Capability against diverse threats. Fundamentally, successful target discrimination requires that the defense be able to anticipate many characteristics of the threat. DOT&E, therefore, was concerned that discrimination algorithms may not be sufficiently robust to handle unanticipated threat scenes.

The quality and quantity of information known prior to testing. DOT&E was concerned that every physical property of target objects is known with unrealistic accuracy in advance of flight tests.

Twelve of the 50 recommendations in the August 2000 DOT&E report pertain to the testing of the discrimination function. Specifically, DOT&E recommended adding challenging yet unsophisticated countermeasures to the target suites of integrated flight tests. DOT&E also recommended integrating countermeasures developed by the Countermeasures Hands-On Program (CHOP) into the target suites of integrated flight tests.
Finally, DOT&E recommended executing flight test events—either intercept attempts or risk reduction flights—that have a "pop quiz" component with respect to radar discrimination. Operationally, this type of flight test is more representative of a true tactical engagement, because the exact composition and type of countermeasures flown in an actual engagement are generally unknown. Details can be found in the classified version of this report.

Relative to the previous test program, MDA has substantially increased the scope of its work on discrimination. MDA is pursuing a block approach that incrementally builds to a system-level discrimination architecture incorporating a network of sensors. The idea is to observe the target suite throughout its trajectory using an array of ground- and space-based sensors and to combine the individual observations to formulate a "discrimination solution." MDA is also investing resources to study the discrimination problem and, for example, is moving forward with flight test events focused on radar discrimination and with large analysis programs.

MDA plans to conduct four Radar Certification Flights through fiscal year 2006. These are non-intercept flight tests intended to comprehensively characterize the discrimination capability of the X-band radar and to support the development of upgraded early warning radars. Furthermore, these tests are expected to have a "pop quiz" component to examine radar discrimination. MDA has not yet scheduled "pop quiz" testing of the kill vehicle's capability to perform target discrimination.

MDA initiated and continues to fund analysis programs for investigating promising technical concepts to improve its capabilities against enemy countermeasures. For example, one such program, Project Hercules, is focused on the development and testing of discrimination algorithms and draws on academic, government, and industry expertise. Details can be found in the classified version of this report.
Despite MDA's increased scope of work in the discrimination area, as described above, the agency's specific actions pertaining to integrated flight testing only partially address the August 2000 DOT&E recommendations. No intercept flight tests in the current test plan, which runs through IFT-26 (fiscal year 2007), are planned to address the challenge posed by an enemy's use of unsophisticated but more challenging countermeasures. Rather, agency officials told us that the technical challenges posed by such countermeasures are being analyzed and may be inserted into the flight test program at a later time.

The remaining five recommendations from the August 2000 DOT&E report pertain to programmatic issues, namely the adequacy of spares in flight testing and performance requirements. MDA has not provided for adequate target or interceptor backups (hot spares) during flight tests. MDA officials stated that additional target and interceptor spares can be costly, but they are considering the issue. Even if implemented, MDA's actions addressing the recommendations on spares would not have a significant impact on the actual conduct of flight tests but would reduce schedule risk.

When DOT&E made its recommendations in August 2000, the GMD element was being developed according to operational requirements. However, MDA is now following a fundamentally new acquisition strategy—one that is capability-based, with no formal operational requirements developed by the services. Hence, MDA has no plans to reexamine the reliability requirements. Nonetheless, the current test program is addressing certain performance issues raised by DOT&E. For example, the GMD program office is tracking the prime contractor's progress in meeting target discrimination goals.
Under the new acquisition strategy outlined by the Secretary of Defense in his January 2002 memorandum, the ballistic missile defense program has been refocused into a broad-based research and development effort managed by MDA. The new program aims at developing layered defenses to intercept missiles in all phases of flight and, if directed, at using developmental prototypes and test assets to provide an early operational capability. And, as stated above, system development is not subject to formal operational requirements developed by the services.

On December 16, 2002, the President directed DOD to begin fielding the first increment of the multi-element ballistic missile defense system in 2004. The Secretary of Defense stated the next day that "…it would be a very preliminary, modest capability." The initial defensive capability for defending the United States against long-range missiles would be based on the GMD element of the Test Bed and augmented with more interceptors and external sensors, as follows:

GMD Element as part of the BMDS. The principal components of the GMD element for defensive operations include a total of up to 10 interceptors sited at Fort Greely (6) and Vandenberg Air Force Base (4); GMD fire control nodes at Fort Greely and Schriever Air Force Base for battle management and execution; an upgraded Cobra Dane radar at Eareckson Air Station; and an upgraded early warning radar at Beale Air Force Base.

External Sensors. Existing sensors external to the GMD element would also be available for defensive operations, including Defense Support Program satellites for early warning of missile launches and three forward-deployed Aegis AN/SPY-1 radars on existing Navy destroyers for early midcourse tracking.

The above assets comprise the initial configuration, which is scheduled for fielding by the end of September 2004.
The agency's near-term intention is to grow this capability by adding 10 interceptors at Fort Greely, a sea-based X-band radar, and an upgraded early warning radar at Fylingdales, England, by the end of 2005.

MDA is moving forward, as directed by the President, with the fielding of an initial defensive capability by the end of fiscal year 2004 to protect the United States from long-range missiles. MDA cannot at this time formulate a credible assessment of system-level effectiveness because critical components like the Cobra Dane radar and the interceptor boosters have yet to be developed and tested in a flight test environment, and no initial defensive capability is available for a system-level demonstration and evaluation.

Cobra Dane Radar. The capabilities of the Cobra Dane radar will not be demonstrated in flight testing before September 2004. Cobra Dane is an L-band phased-array radar located at Eareckson Air Station in Shemya, Alaska, at the western end of the Aleutian chain. Its close proximity to Russia allows it to perform its primary mission of collecting data on intercontinental ballistic missile and submarine-launched ballistic missile test launches to the Kamchatka impact area. Because the Cobra Dane radar is currently used in a surveillance mode, it does not require real-time communications and data processing capabilities. After planned software and hardware upgrades, to be completed in fiscal year 2004, it will have the additional mission of performing real-time acquisition and tracking, functions critical for ballistic missile defense.

Interceptor Boosters. In July 1998, the GMD prime contractor (Boeing) began developing a new three-stage booster for its ground-based interceptor from commercial off-the-shelf components. The contractor encountered difficulty, and by the time the booster was flight tested in August 2001, it was already about 18 months behind schedule.
Subsequently, to reduce risk, MDA altered its strategy for acquiring a new booster for the GMD interceptor. Development of the original booster was transferred to Lockheed Martin, and MDA authorized the GMD prime contractor to develop a second source for the booster by awarding a subcontract to Orbital Sciences Corporation. Both contractors are developing boosters for use in the September 2004 initial defensive capability. The first demonstration of an operational booster in an attempted intercept is scheduled for the third quarter of fiscal year 2004.

System-Level Testing. A system-level demonstration of the initial defensive capability will not be conducted prior to September 2004. To date, integrated flight tests have demonstrated basic functionality of a representative ballistic missile defense system using surrogate and prototype components and have shown success in intercepting a mock reentry vehicle in a developmental test environment. The first flight test consisting of components closest to the configuration of the September 2004 initial defensive capability is IFT-14, which is currently scheduled for the third quarter of fiscal year 2004. The test will incorporate Block 2004 prototypes of the interceptor booster and kill vehicle in the configuration intended for operational use beginning in September 2004. In addition, the first tactical build of the battle management software will be utilized in IFT-14. However, interceptors will not be launched out of Fort Greely in IFT-14 and IFT-15 (the remaining integrated flight tests to be conducted before September 2004).

In commenting on a draft of this report, MDA stated that while it cannot address all technical concerns for the initial fielding, it has added the following activities:

Enhanced producibility, quality, and reliability efforts.

Increased operational focus in the developmental program, e.g., military utility and effectiveness assessments.
Expanded command and control, battle management, and operator integration in BMDS testing to support fielding of initial defensive capabilities in 2004.

MDA also stated that the results of these program decisions are intended to provide a comprehensive program that demonstrates operational effectiveness and military utility against credible threats in an operational environment.

System effectiveness is characterized in terms of the following four performance metrics: (1) defended area, (2) launch area denied, (3) probability of engagement success, and (4) raid size breakpoint. Defended area is the portion of the United States protected against long-range missile attacks and, as a metric, is usually reported relative to a single threat country or region; launch area denied refers to the collection of threat countries from which the United States is protected. The probability of engagement success is the probability that all attacking warheads are destroyed; it is derived from the probabilities associated with missile defense functions like detection, discrimination, and hit-to-kill. Finally, raid size breakpoint is the maximum number of warheads the system can realistically defeat in a single engagement; this metric is highly dependent on interceptor inventory. A detailed discussion of GMD's expected effectiveness is presented in the classified June 2003 version of this report.

A notable limitation of the effectiveness of the September 2004 initial defensive capability—and possibly the December 2005 capability—pertains to the inability of system radars to perform target discrimination. Neither the Cobra Dane radar nor the upgraded early warning radar at Beale is capable of performing rigorous discrimination, a function achievable only by the X-band radar. Rather, both radars will utilize common "target classification" software that enables them to classify objects as threatening or non-threatening.
For example, debris would be classified as non-threatening, but objects like deployment buses and decoy replicas would be classified as threatening. Accordingly, the system would have to rely solely on the kill vehicle for final target selection. The assessment of kill vehicle discrimination is, therefore, critical for understanding the capability of the deployed system, a point made in the DOT&E report. Appropriately, the GMD prime contractor tracks the discrimination capability of the kill vehicle as a technical performance measure. The prime contractor's December 2002 assessment rated the kill vehicle discrimination performance as meeting expectations based on analysis and simulation.

Lastly, measures of system suitability like availability and vulnerability—which complement system effectiveness—are important for characterizing the initial defensive capability as a whole. MDA is aiming for full-time operations but faces risks in achieving this goal. Details on system availability and vulnerability are provided in our June 2003 classified report.

Since DOT&E issued its August 2000 report, DOD has altered its approach to the acquisition of missile defense systems to one that follows a "capability-based" strategy. The new approach allows MDA to evolve and demonstrate additional improvements in missile defense systems before committing to procurement and operations. MDA's test program for all missile defense elements, such as GMD, was also reoriented to focus on the development and use of the BMDS Test Bed. Over time, the Test Bed should facilitate testing that addresses many of DOT&E's recommendations, especially those pertaining to flight test realism, complexity, and artificialities. However, most of the agency's actions with respect to DOT&E's ground testing recommendations, namely those pertaining to comprehensive hardware-in-the-loop testing of the kill vehicle, have been deferred.
In addition, MDA is proceeding slowly with flight testing against certain countermeasures, which DOT&E noted are simple for an enemy to implement. These unresolved concerns in the test program warrant attention by DOT&E and the test community in general. Given the importance of ground testing and discrimination testing for understanding system effectiveness, decision makers in the Congress and the Office of the Secretary of Defense would benefit from having information on the agency's progress in these matters as they consider investments in developing the ballistic missile defense system. As an independent office that reviews DOD's weapon system testing and the office that made the recommendations discussed in this report, DOT&E would be in a good position to provide such information to decision makers.

As a means of providing decision makers with critical information when investments in missile defense are considered, we recommend that DOT&E report periodically, as it deems appropriate, on the status of MDA's actions taken or planned in response to the August 2000 recommendations. In its review, DOT&E should include information and recommendations, as warranted, on MDA's progress and planning (1) to improve hardware-in-the-loop testing of the kill vehicle, (2) to test kill vehicle components in nuclear environments, and (3) to test the GMD element's capability to defeat likely and simple near-term countermeasures during integrated flight tests. In the report, DOT&E can advise the Director, MDA, on how the test program could be modified to accommodate DOT&E's long-standing concerns.

In commenting on a classified draft version of this report, DOD agreed with our recommendations. (See app. II for a reprinted version of DOD's comments.) However, DOD conveyed the following concerns:

The GMD test program as described in this report is no longer current.
It is difficult to reconcile the dated terms of reference of the original DOT&E recommendations with the current program strategy and structure.

The inherent robustness of the envisioned layered BMD System relative to midcourse countermeasures is overlooked.

While the GMD test program has, indeed, been in a constant state of flux, thus complicating our analysis, our report presents the latest approved test program information provided to us by MDA. Despite alterations to the acquisition strategy and structure of the ballistic missile defense system and its constituent elements, like GMD, we believe most of the DOT&E recommendations are still relevant because the technical challenges and uncertainty involved in developing, testing, and fielding effective defensive capabilities, as identified in the August 2000 DOT&E report, remain significant. For example, the DOT&E report issued in February 2003, FY02 Assessment of the Missile Defense Agency Ballistic Missile Defense System, continued to highlight the need for a comprehensive hardware-in-the-loop capability to test the kill vehicle under the stress of real physical phenomena and to test the kill vehicle's discrimination capability. We do recognize that a number of recommendations for which no actions are currently planned, such as those dealing with flight testing during Initial Operational Test and Evaluation, are a direct result of MDA's new acquisition approach.

The department is correct in stating that we did not address the capability of the envisioned ballistic missile defense system as a whole to defeat midcourse countermeasures. However, we do note that a system-level discrimination architecture would use a network of ground- and space-based sensors to formulate a "discrimination solution." Also, given the early stages of development of the envisioned layered system, including boost-phase intercept, the value of this strategy has not been demonstrated.
Although the department agreed that DOT&E should report periodically on the status of MDA's actions to address the August 2000 DOT&E recommendations, it did not believe additional reporting is required to track their resolution. The department pointed out that our recommendation grants DOT&E discretionary reporting authority where mandatory reporting already exists. We believe, however, the recommendation is worded appropriately. Existing statutory reporting requirements for DOT&E on the adequacy and sufficiency of the missile defense test program do not require that the August 2000 DOT&E recommendations be specifically addressed. We worded the recommendation to highlight the areas we believe DOT&E should address—hardware-in-the-loop testing of the kill vehicle, testing of kill vehicle components in nuclear environments, and testing the GMD element's capability to defeat likely and simple near-term countermeasures—and to give DOT&E the discretion to address our recommendation in the manner it deems appropriate. To present its assessment, DOT&E could use existing or new reporting vehicles. Finally, department comments pertaining to MDA actions on ground testing are addressed in the body of this report.

As arranged with your staff, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we plan to provide copies of this report to interested congressional committees; the Secretary of Defense; and the Director, Missile Defense Agency. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staffs have any questions concerning this report, please contact me at (202) 512-4841. The major contributors to this report were Randy Zounes, Stan Lipscomb, Tana Davis, and Bill Graveline.
In examining the actions taken or planned by MDA in response to the DOT&E recommendations, we analyzed pertinent test documents, studies, and reports. These include the (1) GMD Element Developmental Master Test Plan (March 2003); (2) GMD System Element Reviews; (3) MDA "immersion day" briefing; (4) MDA written responses to our questions about MDA actions in response to the DOT&E recommendations; (5) Secretary of Defense January 2002 Memorandum on Missile Defense Program Direction; and (6) Independent Review Team (Welch panel) reports. In addition, MDA officials briefed us on GMD's program status and efforts to defeat enemy countermeasures.

We also reviewed available documentation on the schedule and purpose of the Test Bed. These documents included studies on the enhanced test program restructure, fiscal year 2003 budget justifications, and the request for the contract proposal for the Block 2004 Test Bed. To assess the effectiveness and limitations of the initial defensive capability, we relied on the following MDA documentation: (1) GMD System Element Review (January 2003); (2) BMDS Block 2004 Statement of Goals; and (3) National Security Presidential Directive (NSPD-23), the President's directive to begin fielding an initial capability. We also identified uncertainties—based on the level of testing achieved to date—in the potential capabilities of individual elements of the initial defensive capability, such as the radars and interceptor boosters, as well as in radar capabilities to perform the discrimination function.

We conducted our work primarily at MDA, located in Arlington, Virginia, and at the GMD Joint Program Office, located in Arlington, Virginia, and Huntsville, Alabama. We conducted our audit work for the June 2003 classified report, upon which this unclassified version is based, from October 2001 to March 2003 in accordance with generally accepted government auditing standards.
However, reported dates of GMD flight test events given in this unclassified version have been updated with the latest (December 2003) GMD test schedules.
In August 2000, the Defense Department's (DOD) Director, Operational Test and Evaluation (DOT&E), made 50 recommendations on a test program for a system to defeat long-range ballistic missile threats against the United States. DOD's Missile Defense Agency (MDA) plans to begin fielding the system by September 2004. GAO examined (1) how MDA addressed DOT&E's recommendations and (2) what is known about the effectiveness of the system to be fielded by September 2004. GAO issued a classified report on this subject in June 2003. This unclassified, updated version reflects changes in MDA's test schedule.

MDA is addressing most of DOT&E's recommendations on flight testing but will not complete many actions before September 2004. For example, DOT&E recommended removing flight test range limitations by adding more intercept regions and launch locations to add greater realism to its tests. MDA is expanding the test range infrastructure to add five intercept regions and target and interceptor launches from new locations. By September 2004, one of the regions will be tested. MDA is generally not addressing DOT&E's proposals on ground testing. For example, although MDA had begun upgrading a ground facility to provide a realistic testing environment for the interceptor, MDA deferred testing at the facility to fund other priorities. Finally, MDA is addressing DOT&E's recommendations on discrimination--the system's ability to find an enemy warhead among decoys--by funding analysis programs.

Predictions of how well the system will defeat long-range ballistic missiles are based on limited data. No component of the system to be fielded by September 2004 has been flight-tested in its deployed configuration.
Significant uncertainties surround the capability to be fielded by September: MDA will not demonstrate in flight tests a critical radar called Cobra Dane before that date or conduct a system-level demonstration, and has yet to test its three-stage boosters as part of a planned intercept.
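The probability-of-engagement-success metric discussed in this report (the probability that all attacking warheads are destroyed, built up from the probabilities of individual defense functions such as detection, discrimination, and hit-to-kill) can be illustrated with a small calculation. The sketch below is purely hypothetical: the function names, probability values, shot doctrine, and the assumption that functions and shots succeed independently are ours for illustration, not MDA's or GAO's figures.

```python
# Illustrative sketch of the probability-of-engagement-success metric.
# All probability values and the independence assumptions are
# hypothetical; they are not drawn from the report or from MDA data.

def single_shot_kill_probability(function_probs):
    """Probability that one interceptor destroys one warhead, assuming
    each engagement function must succeed and functions are independent."""
    p = 1.0
    for prob in function_probs.values():
        p *= prob
    return p

def engagement_success_probability(p_kill, shots_per_warhead, n_warheads):
    """Probability that ALL attacking warheads are destroyed when several
    interceptors are fired at each warhead (shots treated as independent;
    shoot-look-shoot tactics are ignored for simplicity)."""
    p_warhead_killed = 1.0 - (1.0 - p_kill) ** shots_per_warhead
    return p_warhead_killed ** n_warheads

# Hypothetical per-function success probabilities.
functions = {
    "detection": 0.95,
    "tracking": 0.95,
    "discrimination": 0.85,
    "hit_to_kill": 0.90,
}

p_kill = single_shot_kill_probability(functions)
print(f"single-shot kill probability: {p_kill:.3f}")
print(f"engagement success (2 shots each, 3 warheads): "
      f"{engagement_success_probability(p_kill, 2, 3):.3f}")
```

The multiplication of function probabilities is why even modest shortfalls in discrimination depress the overall metric, and the exponent on the warhead count is one way to see why the raid size breakpoint depends so strongly on interceptor inventory.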
VA's disability compensation claims process starts when a veteran submits a claim to VA (see fig. 1). A claim folder is created at 1 of VA's 57 regional offices, and a Veterans Service Representative (VSR) then reviews the claim and helps the veteran gather the relevant evidence needed to evaluate it. Such evidence includes the veteran's military service records, medical examinations, and treatment records from Veterans Health Administration (VHA) medical facilities and private medical service providers. Also, if necessary to substantiate the claim, VA will provide a medical examination for the veteran. Once VBA has gathered the supporting evidence, a Rating Veterans Service Representative (RVSR)—who typically has more experience at VBA than a VSR—evaluates the claim and determines whether the veteran is eligible for benefits. If so, the RVSR assigns a percentage rating. A veteran may subsequently reopen a claim to request an increase in disability compensation from VA if, for example, a service-connected disability worsens or a new disability arises.

If the veteran disagrees with VA's decision regarding a claim, he or she can submit a written Notice of Disagreement to the regional office handling the claim. In response to such a notice, VBA reviews the case and, if it does not grant all appealed issues, provides the veteran with a written explanation of the decision known as a Statement of the Case. If the veteran further disagrees with the decision, he or she may appeal to the Board of Veterans' Appeals (the Board), which conducts a hearing at the veteran's request and then grants benefits, denies the appeal, or returns the case to VBA to obtain additional evidence necessary to decide the claim. If the veteran is dissatisfied with the Board's decision, he or she may appeal, in succession, to the U.S. Court of Appeals for Veterans Claims, to the Court of Appeals for the Federal Circuit, and finally to the Supreme Court of the United States.
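The escalating sequence of review bodies described above can be summarized as a simple ordered list. The sketch below only paraphrases the order of review laid out in this section; the stage labels and helper function are ours for illustration, not a VA data model.

```python
# Illustrative sketch of the appeal escalation path described above.
# Stage names paraphrase the report; this is not an official VA model.

APPEAL_PATH = [
    "Regional office decision (VSR develops, RVSR rates)",
    "Notice of Disagreement -> VBA review and Statement of the Case",
    "Board of Veterans' Appeals",
    "U.S. Court of Appeals for Veterans Claims",
    "Court of Appeals for the Federal Circuit",
    "Supreme Court of the United States",
]

def next_review_stage(current_index):
    """Return the next body that may review the claim, or None once the
    Supreme Court, the final level of review, has been reached."""
    if current_index + 1 < len(APPEAL_PATH):
        return APPEAL_PATH[current_index + 1]
    return None

for i, stage in enumerate(APPEAL_PATH):
    print(f"{i}. {stage}")
```

A strictly linear list is a simplification: as the section notes, the Board can also remand a case back to VBA for additional evidence rather than deciding it.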
In recent years, VA compensation claims processing timeframes have increased. Specifically, the average days pending increased from 116 days in fiscal year 2009 to 254 days in fiscal year 2012. During the same period, the average days to complete increased from 161 to 260 days. VBA also collects data on the timeliness of the different phases of the claims process, which it uses to identify trends and bottlenecks throughout the process. In fiscal year 2011, each phase took longer on average than its stated agency timeliness target (see fig. 2). That year, the national averages for the initiating development, gathering evidence, and rating decision phases were 44, 72, and 57 days, respectively, over their timeliness targets.

In recent years, VA's claims processing production has not kept pace with the substantial increase in incoming claims. In fiscal year 2011, VA completed over 1 million compensation rating claims, a 6 percent increase from fiscal year 2009. However, the number of VA compensation rating claims received had grown 29 percent—from 1,013,712 in fiscal year 2009 to 1,311,091 in fiscal year 2011 (see fig. 3). As a result, the number of backlogged claims—defined as those claims awaiting a decision for more than 125 days—has increased substantially since 2009. As of August 2012, VA had 856,092 pending compensation rating claims, of which 568,043 (66 percent) were considered backlogged.

One factor that contributed to the substantial increase in claims received was the commencement in October 2010 of VBA's adjudication of 260,000 previously denied and new claims when a presumptive service connection was established for three additional Agent Orange diseases. VBA gave these claims a high priority and assigned experienced claims staff to process and track them. VBA officials said that 37 percent of its claims processing resources nationally were devoted to adjudicating Agent Orange claims from October 2010 to March 2012.
VBA officials in one regional office we spoke to said that all claims processing staff were assigned solely to developing and rating Agent Orange claims for 4 months in 2011 and that no other new or pending claims in the regional office's inventory were processed during that time. Also during this period, special VBA teams—known as brokering centers—which previously accepted claims and appeals from regional offices experiencing processing delays, were devoted exclusively to processing Agent Orange claims. According to VBA, other factors that contributed to the growing number of claims include an increase in the number of veterans from the military downsizing after 10 years of conflict in Iraq and Afghanistan, improved outreach activities and transition services to servicemembers and veterans, and difficult financial conditions for veterans during the economic downturn.

Similar to claims processing, VA regional office appeals processing has not kept pace with incoming appeals. For example, in fiscal year 2012, VA received 121,786 Notices of Disagreement, but VBA processed only 76,685 Statements of the Case. As a result, the number of Notices of Disagreement awaiting a decision grew 76 percent from fiscal year 2009 to fiscal year 2012, and, during that period, the time it took VA to process a Statement of the Case increased 57 percent—from 293 days to 460 days on average.

According to VBA officials, staff shortages are a primary reason that appeals timeliness at VA regional offices has worsened. For example, VBA officials at each of the five regional offices we met with stated that over the last several years appeals staff have also had to train and mentor new staff, conduct quality reviews, and develop and rate disability claims to varying degrees.
A 2012 VA OIG report noted that VA regional office managers did not assign enough staff to process appeals, diverted staff from processing appeals, and did not ensure that appeals staff acted on appeals promptly because, in part, they were assigned responsibilities to process initial claims, which were given higher priority. According to VA officials, federal laws and court decisions over the past decade have expanded veterans’ entitlement to benefits but have also added requirements that can negatively affect claims processing times. For example, the Veterans Claims Assistance Act of 2000 (VCAA) added a requirement that VA assist a veteran who files a claim in obtaining evidence to substantiate the claim before making a decision. This requirement includes helping veterans obtain all relevant federal and non-federal records. VA is required to continue trying to obtain federal records, such as VA medical records, military service records, and Social Security records, until they are either obtained or the associated federal entity indicates the records do not exist. VA may continue to process the claim and provide partial benefits to the veteran, but the claim cannot be completed until all relevant federal evidence is obtained. Because VA must consider all evidence submitted throughout the claims and appeals process, if a veteran submits additional evidence or adds a condition to a claim late in the process it can require rework and may subsequently delay a decision, according to VBA central office officials. VBA officials at regional offices we spoke to said that submitting additional evidence may add months to the claims process. New evidence must first be reviewed to determine what additional action, if any, is required. Next, another notification letter must be sent to the veteran detailing the new evidence necessary to redevelop the claim. 
VA may also have to obtain additional records or order another medical examination before the claim can be rated and a decision made. Furthermore, while VA may continue to process the claim and provide partial benefits to the veteran, a claim is not considered “complete” until a decision is made on all submitted conditions. Moreover, a veteran has up to 1 year, from the notification of VA’s decision, to submit additional evidence in support of the claim before the decision is considered final. Similarly, for an appeal, veterans may submit additional evidence at any time during the process. If the veteran submits additional evidence late in the process after VA completes a Statement of the Case, VA must review the new evidence, reconsider the appeal, and provide another written explanation of its decision—known as a Supplemental Statement of the Case. Congress recently passed a law allowing VA to waive review of additional evidence submitted after the veteran has filed a substantive appeal and instead have the new evidence reviewed by the Board to expedite VA’s process of certifying appeals to the Board. According to VBA officials, delays in obtaining military service and medical treatment records, particularly for National Guard and Reserve members, have significantly lengthened the evidence gathering phase. According to VBA officials, 43 percent of Global War on Terror veterans are National Guard and Reserve members. Department of Defense (DOD) guidance requires military staff to respond to VA requests for National Guard and Reserve records in support of VA disability compensation claims. However, VBA area directors and officials at all five regional offices we met with acknowledged that delays in obtaining these records are system-wide. 
Military records of National Guard or Reserve members can be particularly difficult to obtain because these servicemembers typically have multiple, non-consecutive deployments with different units, and their records may not always be held with their reserve units and may exist in multiple places. Moreover, according to VBA officials, National Guard and Reserve members may be treated by private providers between tours of active duty and VA may have to contact multiple military personnel and private medical providers to obtain all relevant records, potentially causing delays in the evidence gathering process. Difficulties obtaining SSA medical records can also lengthen the evidence gathering phase. Although VBA regional office staff have direct access to SSA benefits payment histories, they do not have similar access to medical records held by SSA. If a veteran submits a disability claim and reports receiving SSA disability benefits, VA is required to help the veteran obtain relevant federal records, including certain SSA medical records, to process the claim. VBA's policy manual instructs claims staff to fax a request for medical information to SSA and, if no reply is received, to wait 60 working days before sending a follow-up request. If a response to the follow-up is not received after 30 days, claims staff are instructed to send an email request to an SSA liaison. VBA officials at four of the five regional offices we reviewed told us that when following this protocol, they have had difficulty obtaining SSA medical records in a timely fashion. Moreover, they reported having no contact information for SSA, beyond the fax number, to help process their requests. In complying with VA's duty to assist requirement, VBA staff told us they continue trying to retrieve SSA records by sending follow-up fax requests until they receive the records or receive a response that the records do not exist. 
VBA area directors said some regional offices have established relationships with local SSA offices and have better results, but obtaining necessary SSA information has been an ongoing issue nationally. For example, officials at one regional office said a response from SSA regarding a medical records request can sometimes take more than a year to receive. VBA's work processes, stemming mainly from its reliance on a paper-based claims system, can lead to misplaced or lost documents, and contribute to lengthy processing times. VBA officials at three of the five regional offices we met with noted that errors and delays in handling, reviewing, and routing incoming mail to the correct claim folder can delay the processing of a claim or cause rework. For example, VBA officials at one regional office said that claims may be stalled in the evidence gathering phase if mail that contains outstanding evidence is misplaced or lost. In addition, claims staff may rate a claim without knowledge of the additional evidence submitted and then, once the mail is routed to the claim folder, have to rerate the claim in light of the new evidence received. Furthermore, VBA officials told us that processing can also be delayed if mail staff are slow to record new claims or appeals into IT systems. As of August 2012, VBA took 43 days on average to record Notices of Disagreement in the appeals system—36 days longer than VBA's national target. VBA area directors said that mail processing timeliness varies by regional office and that the more efficient offices in general do a better job of associating mail with the correct claims folder. VBA officials also said that moving physical claims folders among regional offices and medical providers contributes to lengthy processing times. 
According to a 2011 VA OIG report, processing delays occurred following medical examinations because staff could not match claims-related mail with the appropriate claim folders until the folders were returned from the VA Medical Center. In addition, processing halts while a claim folder is sent to another regional office or brokering center. Based on a review of VA documents and interviews with VBA officials, we identified 15 efforts with a stated goal of improving claims and appeals timeliness. We selected 9 for further review—primarily based on interviews with VBA officials and a review of recent VA testimonies—that have the purpose of reducing disability claims and appeals processing times. VBA has several ongoing efforts to leverage internal and external resources to better manage its workload (see fig. 4). For example, VBA began the Veterans Benefits Management Assistance Program (VBMAP) in late fiscal year 2011 to obtain contractor support for evidence gathering for approximately 279,000 disability claims. Under VBMAP, the contractor gathers evidence in support of a claim and then sends the claim file back to the originating regional office, which reviews the claim for completeness and quality and then assigns a rating. Contractor staff are required to complete their work within 135 days of receiving the file and provide VBA with status reports that include several measures of timeliness, including the time it took to receive medical evidence from providers and to return a claim to VBA for rating. As of June 2012, VBA regional offices we spoke with were awaiting the first batch of claims that were to be sent to the contractors. To help speed up the claims and appeals processes, VBA also has several efforts that modify program requirements or change procedures (see fig. 4). The Fully Developed Claims (FDC) program began as a pilot in December 2008 and was implemented nationwide in June 2010. 
Normally, once a veteran submits a claim, VBA will review the claim and then send the veteran a letter detailing additional evidence required to support it. The FDC program eliminates this step because the required notification is provided to the veteran directly on the FDC form, thus reducing the time VBA would normally spend gathering evidence for the veteran. In exchange for expedited processing, veterans participating in the FDC program send VBA any relevant private medical evidence with the claim and certify that they have no additional evidence to provide. According to VBA officials, in the first 2 years of the program, VBA processed 33,001 FDC claims, taking an average of about 98 days to complete—8 days longer than the goal of 90 days for these claims. However, as of July 2012, veteran participation in the FDC program had been low—only 4 percent of all compensation rating claims submitted in 2012. The Claims Organizational Model initiative is aimed at streamlining the overall claims process (see fig. 4). For this initiative, VBA created specialized teams that process claims based on their complexity. Specifically, an “express team” processes claims with a limited number of conditions or issues; a “special operations” team processes highly complex claims, such as former prisoners of war or traumatic brain injury cases; and a core team works all other claims. Each of these teams is staffed with both development and ratings staff, which VBA believes will lead to better coordination and knowledge-sharing. Under this model, VBA also redesigned the procedures that mailrooms use to sort and process incoming claims. As of December 2012, VBA had implemented the initiative at 51 regional offices. According to VA, the remaining regional offices will be transitioned to the Claims Organizational Model by the second quarter of fiscal year 2013. 
In 2010, VBA began to develop the Veterans Benefits Management System (VBMS), a paperless claims processing system that is intended to help streamline the claims process and reduce processing times (see fig. 4). According to VBA officials, VBMS is intended to convert existing paper-based claims folders into electronic claims folders and allow VBA employees electronic access to claims and evidence. Once completed, VBMS is also expected to allow veterans, physicians, and other external parties to submit claims and supporting evidence electronically. In August 2012, VA officials told us that VBMS was still not ready for national deployment, citing delays in scanning claims folders into VBMS as well as other software performance issues. A recent VA OIG report also concluded that VBMS has experienced some performance issues and the scanning and digitization of claims lacked a detailed plan. However, according to VA, as of December 2012, 18 regional offices were piloting VBMS and all regional offices are expected to implement VBMS by the end of calendar year 2013. We have noted that VA's ongoing efforts should be driven by a robust, comprehensive plan; however, when we reviewed VBA's plan documents, we found that they fell short of established criteria for sound planning. Specifically, VBA provided us with several documents, including a PowerPoint presentation and a matrix that provided a high-level overview of over 40 initiatives, but, at the time of our review, could not provide us with a robust plan that tied together the group of initiatives, their interrelationships, and subsequent impact on claims and appeals processing times. Although there is no established set of requirements for all plans, components of sound planning are important because they define what organizations seek to accomplish, identify specific activities to obtain desired results, and provide tools to help ensure accountability and mitigate risks. 
In our December 2012 report, we recommended that VBA seek improvements for partnering with relevant federal and state military officials to reduce the time it takes to gather military service records from National Guard and Reserve sources. We also recommended that VBA develop improvements for partnering with Social Security Administration officials to reduce the time it takes to gather medical records. Lastly, we recommended that VBA develop a robust backlog reduction plan for its initiatives that, among other best practice elements, identifies implementation risks and strategies to address them and performance goals that incorporate the impact of individual initiatives on processing timeliness. VA generally agreed with our conclusions and concurred with our recommendations, and summarized efforts that are planned or underway to address them. For example, VA stated it has recently initiated several interagency efforts to improve the timeliness of record exchanges between VBA and DOD. In addition, VA stated that it is working with SSA to pilot a web-based tool to provide VA staff a secure, direct communication with SSA and to automate VA's requests for SSA medical records. VA also agreed with our recommendation to develop a robust backlog plan for VBA's initiatives and, subsequent to our report, published the Department of Veterans Affairs (VA) Strategic Plan to Eliminate the Compensation Claims Backlog. This plan includes implementation risks and performance metrics used to track the cumulative effect of its initiatives on processing times but still lacks individual performance goals and metrics for all initiatives. In conclusion, for years, VA's disability claims and appeals processes have received considerable attention as VA has struggled to process disability compensation claims in a timely fashion. 
Despite this attention, VA continues to wrestle with several ongoing challenges—some of which VA has little or no control over—that contribute to lengthy processing timeframes. For instance, the number and complexity of VA claims received has increased. VBA is currently taking steps to improve the timeliness of claims and appeals processing; however, prospects for improvement remain uncertain because timely processing remains a daunting challenge. Chairman Sanders, Ranking Member Burr, and Members of the Committee, this concludes my prepared statement. I am pleased to answer any questions you may have. For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Lucas Alvarez, James Bennett, Michelle Bracy, Brett Fallavollita, Dan Meyer, James Rebbe, Ryan Siegel, Walter Vance, and Greg Whitney. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the Department of Veterans Affairs' (VA) disability benefits program, which provides monetary support to veterans with disabling conditions that were incurred or aggravated during military service. In fiscal year 2013, VA estimates it will provide $59.6 billion in compensation benefits to 3.98 million veterans and their families. For years, the disability claims process has been the subject of concern and attention by VA, Congress, and Veterans Service Organizations (VSO), due in part to long waits for decisions and the large number of pending claims. For example, the average length of time to complete a claim increased from 161 days in fiscal year 2009 to 260 days in fiscal year 2012. Moreover, VA's backlog of claims--defined as claims awaiting a decision for over 125 days--has more than tripled since September 2009. In August 2012, approximately two-thirds (568,043) of the 856,092 pending compensation rating claims--which include pension and disability rating claims--were backlogged. In addition, timeliness of appeals processing at VA regional offices has also slowed by 57 percent over the last several years. This testimony is based on a GAO report released on December 21, 2012, titled Veterans' Disability Benefits: Timely Processing Remains a Daunting Challenge, and also includes information updated to reflect the status of improvement efforts. This testimony focuses on (1) factors that contribute to lengthy disability claims and appeals processing times at VA regional offices and (2) the status of the Veterans Benefits Administration's (VBA) recent improvement efforts. GAO found a number of factors--both external and internal to VBA--have contributed to the increase in processing times and subsequent growth in the backlog of veterans' disability compensation claims. For example, the number of claims received by VBA has increased as the population of new veterans has swelled in recent years. 
Moreover, due to new regulations that established eligibility for benefits for new diseases associated with Agent Orange exposure, VBA adjudicated 260,000 previously denied and new claims for related impairments. Beyond these external factors, issues with the design and implementation of the program have also contributed to timeliness challenges. For example, the law requires VA to assist veterans in obtaining records that support their claim. However, VBA officials said that delays in obtaining military records--particularly for members of the National Guard and Reserve--and Social Security Administration (SSA) medical records impact VA's duty to assist, possibly delaying a decision on a veteran's disability claim. Further, VBA's paper-based claims processing system involves multiple hand-offs, which can lead to misplaced and lost documents and cause unnecessary delays. Concerning timeliness of appeals, VBA regional offices have in recent years shifted resources away from appeals and towards claims, which has led to lengthy appeals timeframes. VBA has a number of initiatives underway to improve the timeliness of claims and appeals processing. Such efforts include leveraging VBA staff and contractors to manage workload, modifying and streamlining procedures, improving records acquisition, and redesigning the claims and appeals processes. According to VBA officials, these efforts will help VA process all veterans' claims within VA's stated target goal of 125 days by 2015. However, the extent to which VA is positioned to meet its ambitious processing timeliness goal remains uncertain. VBA provided us with several planning documents, but, at the time of our review, could not provide us with a plan that met established criteria for sound planning, such as articulating performance measures for each initiative, including their intended impact on the claims backlog. 
GAO has recommended that VBA (1) partner with military officials to reduce timeframes to gather records from National Guard and Reserve sources, (2) work with SSA to reduce timeframes to gather SSA medical records, and (3) develop a robust plan for its improvement initiatives that identifies performance goals that include the impact of individual initiatives on processing timeliness. VA generally agreed with our conclusions and concurred with our recommendations, and identified efforts that it has planned or underway to address them.
FDA is responsible for ensuring that medical products—including medical devices—sold in the United States provide reasonable assurance of safety and effectiveness and do not pose a threat to public health. FDA's oversight responsibilities for medical devices begin before a product is brought to market and continue after a product is available for sale. Its premarket responsibilities include reviewing thousands of submissions for new devices filed each year to decide whether they should be allowed to be marketed in the United States. Its postmarket responsibilities include monitoring the safety of thousands of medical devices already on the market and identifying, analyzing, and acting on potential risks the devices may pose to the public. This monitoring includes overseeing recalls of medical devices. FDA classifies each device type into one of three classes—class I, II, or III—based on the level of risk it poses and the controls necessary to provide reasonable assurance of its safety and effectiveness. According to FDA, the risk the type of device poses to the user is a major factor in the class it is assigned: class I includes devices with the lowest risk, and class III includes devices with the highest risk. Examples of types of devices in each class include the following:

class I: tongue depressors, elastic bandages, reading glasses, and forceps;
class II: electrocardiographs, powered bone drills, and mercury thermometers; and
class III: pacemakers and replacement heart valves.

In general, unless exempt under FDA regulations, medical devices are subject to one of two types of FDA premarket review before they may be legally marketed in the United States. These reviews are as follows. Premarket approval (PMA): The manufacturer must submit evidence, typically including clinical data, providing reasonable assurance that the new device is safe and effective. The PMA process is the most stringent type of premarket review. 
A successful submission results in FDA's approval to market the device. Premarket notification (510(k)): Premarket notification is commonly called "510(k)" in reference to section 510(k) of the Federal Food, Drug, and Cosmetic Act where the notification requirement is listed. Under this review, the manufacturer must demonstrate to FDA that the new device is substantially equivalent to a device already legally on the market. For most 510(k) submissions, clinical data are not required and substantial equivalence will normally be determined based on comparative descriptions of a device's intended use and technological characteristics, and may include performance data. A successful submission results in FDA's clearance to market the device. Most class I device types and some class II devices are exempt from FDA's premarket review. In general, those that are not exempt, but which are substantially equivalent to a legally marketed class I or class II device, are subject to premarket review through the 510(k) process. Class III device types are generally required to obtain FDA approval through the more stringent PMA process. FDA defines a recall as a firm's removal or correction of a marketed product that FDA (1) considers to be in violation of the laws it administers, and (2) against which the agency would initiate legal action. Nearly all medical device recalls are voluntarily initiated by a firm, usually the manufacturer of the device. The recall process generally consists of a series of steps that we have categorized into broad phases—initiating and classifying the recall, conducting and overseeing the recall, and completing and terminating the recall. While the recalling firm has primary responsibility for ensuring that the recalled devices are corrected or removed, FDA and other stakeholders each have responsibilities which they are supposed to undertake in order to effectively implement the various phases of a recall. 
FDA's role is generally to oversee a firm's management of recalls. It conducts its responsibilities as part of its postmarket surveillance. FDA staff from ORA—which is the lead office for all FDA field activities, including the agency's district offices—and CDRH are involved in overseeing recalls. Other stakeholders, including the firm's customers—such as distributors—and device users—such as hospitals or patients—are expected to correct or remove the recalled device according to the recalling firm's instructions. A given recall may require the cooperation of thousands of different stakeholders depending on how many entities received, purchased, or used the device. The following sections generally describe the voluntary recall process that FDA, as well as recalling firms, their customers, and device users, are expected to follow according to FDA's regulations, procedures, and guidance. During this phase of a device recall, a firm initiates a recall, while FDA classifies the recall based on health risks presented by use of the device. As part of this phase, a firm develops a strategy for implementing the recall, and FDA reviews and suggests changes to the strategy. In most cases, a firm arrives at the decision to initiate a recall after discovering a problem with a device, or a series of similar devices. The firm may then contact an FDA district office or immediately begin implementing a recall. A firm may initiate a recall—that is, notify stakeholders such as distributors and device users about the recall—prior to contacting the FDA district office. However, according to federal regulations, a firm must provide FDA with a report of correction or removal within 10 working days of initiating a recall of a product that involves or may involve a risk to health. 
As part of its report, the firm is to provide FDA with key information such as the reason the device is being recalled, the brand name and model of the device, the lot or serial numbers of the device, the number of devices subject to correction or removal, and contact information for its customers and device users who received, used, or purchased the device. According to FDA’s guidance, the recalling firm is also asked to develop a recall strategy that takes into account its assessment of the health hazard associated with the device. The strategy should contain details on the firm’s plan for ensuring that its customers and device users correct or remove the device according to the firm’s instructions, and the need for public warnings about the device. As part of its oversight, FDA will review the strategy, and may suggest that the firm make changes to its approach for conducting the recall. Once the district office is notified about the recall, it should create a record in RES, notify CDRH, and obtain and evaluate information CDRH needs to make its classification decision. The district office monitoring the recall will provide any information it receives from the firm, including the correction and removal report, to CDRH so it can begin the process of classifying the recall. For some recalls, the district office may need to conduct a recall inspection at the establishment where the device is manufactured in order to obtain additional information needed to classify the recall. According to FDA’s procedures, when a recall appears to involve significant health risks, an inspection should be conducted to determine, among other things, the root causes of the problem and if the firm is implementing appropriate corrective action. The inspection may be performed by the FDA district office monitoring the recall or other district offices, such as those located near the firm’s manufacturing establishment. 
To classify the recall, CDRH is to conduct its own health risk assessment of the device being recalled. Based on this assessment, CDRH classifies the recall to indicate the relative degree of health hazard presented by use of the device. According to CDRH's procedures, the classification decision should be completed within 31 calendar days from the time it received the information from the district office. Recalls are classified into one of three categories:

class I—reasonable probability that the use of, or exposure to, a device will cause serious adverse health consequences or death. These are the most serious recalls.
class II—use of, or exposure to, a device may cause temporary or medically reversible adverse health consequences, or the probability of serious adverse health consequences is remote.
class III—use of, or exposure to, a device is not likely to cause adverse health consequences.

Table 1 compares FDA's classification of medical devices and recalls according to risk. It is important to note that FDA's device and recall classification schemes carry opposite designations. The potential degree of health risk associated with device classes is designated from class III (high) to class I (low), while the potential risk associated with recall classes is designated from class I (high) to class III (low). Once the recall is classified, FDA is to notify the firm, in writing, of the assigned recall classification. This classification letter should also include instructions about the extent to which the firm should conduct effectiveness checks—that is, contacting customers and device users to determine whether the recall notification was received and acted upon appropriately. In general, for class I recalls, FDA recommends that firms conduct effectiveness checks with 100 percent of customers and device users affected by the recall. 
For class II recalls, FDA recommends effectiveness checks with 10 percent of such customers and device users, and 2 percent for class III recalls. During this phase, the firm and recall stakeholders are supposed to implement the recall as outlined in the approved recall strategy, and FDA is responsible for monitoring the progress made. Once a recall is under way, the firm is to conduct effectiveness checks to ensure that those stakeholders affected by the recall have received notification about the recall and have taken appropriate action, such as returning defective devices, or taking actions to correct the known defects. (See app. I for information on tools to help customers and medical device users identify recalled devices and an FDA initiative intended to better track devices through the use of unique identifiers.) Additionally, at the request of the FDA district office responsible for monitoring the recall, the firm is expected to provide status reports on the progress of the recall. These reports should include information on how many customers and device users have received the recall notification and followed the firm’s instructions, and how many still need to respond to the recall notice. The FDA district office reviews the reports, and, using RES, assigns the recall a status of ongoing if the reports indicate the recall is still under way. During the recall, FDA district offices independently assess the effectiveness of the recall by conducting audit checks. According to the agency’s procedures, for each check, investigative staff from one or more of FDA’s district offices will contact individual distributors or device users. These audit checks are generally conducted in person or by telephone, to confirm that the distributor or device user (1) received notification from the firm about the recall and (2) properly corrected or removed the recalled devices in accordance with the firm’s recall strategy. 
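The recommended effectiveness-check coverage described above scales with the recall class. A minimal sketch of that relationship (the dictionary and function names are illustrative only, not part of any FDA system):

```python
# FDA-recommended effectiveness-check coverage by recall class, as
# described in this report: 100 percent for class I, 10 percent for
# class II, and 2 percent for class III.
EFFECTIVENESS_CHECK_PERCENT = {"I": 100, "II": 10, "III": 2}

def customers_to_contact(recall_class: str, affected: int) -> int:
    """Customers/device users a firm would contact for effectiveness
    checks, rounded up so at least the recommended share is reached."""
    pct = EFFECTIVENESS_CHECK_PERCENT[recall_class]
    # integer ceiling division avoids floating-point rounding surprises
    return -(-affected * pct // 100)

print(customers_to_contact("II", 5000))  # 500 of 5,000 affected customers
```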
The FDA district office responsible for monitoring the recall assigns the audit checks to one or more of the district offices, depending upon the location of the firm's customers and the device users. According to FDA procedures, the district office monitoring the recall should assign audit checks within 10 days of the recalling firm's initiation of the recall. The audit checks should be completed by FDA investigators, if possible, within 10 days of assignment. If an investigator determines that the firm and the distributor or device user followed the recall strategy, the investigator's audit check should conclude that the recall was effective. If not, the investigator's audit check should conclude that the recall was ineffective. The result of the audit check is documented on a standardized FDA form, and each form is provided to the district office that made the audit check assignment. Once the firm believes it has completed the recall—i.e., done everything as outlined in the recall strategy—it needs to submit a final recall status report/recall termination request to the FDA district office monitoring the recall. Regardless of the class of the recall, if the district office agrees that the firm has completed the recall, it is to change the status of the recall in RES to completed. If it disagrees, it generally requests that the firm take additional actions, such as re-contacting customers and device users. The FDA district office bases its assessment of whether the recall has been effectively completed on its review of the firm's status reports and the results of the audit checks. In addition, according to FDA procedures, the final monitoring step the district office may take is to conduct a limited postrecall inspection to verify that the recall has been completed. During this inspection, investigators should witness destruction or reconditioning of the recalled product, if applicable.
Once the district office considers a recall completed, FDA assesses whether it can terminate a recall. As part of its assessment, FDA may review a corrective and preventive action plan submitted by the recalling firm that describes the firm’s actions to prevent a recurrence of the problem that led to the recall. Thus, this phase of the recall process is important because it provides FDA with the opportunity to determine whether the firm has taken sufficient corrective and preventive actions. The agency’s procedures state that if a firm’s corrective and preventive actions are adequate, FDA staff should terminate a recall within 3 months of completion. When terminating a class I recall, the district office sends a recall termination recommendation to CDRH. CDRH reviews the recalling firm’s corrective and preventive action plan, and effectiveness and audit check results, and makes the decision on whether to terminate the recall. The district office does not need CDRH approval to terminate class II and III recalls. If corrective actions are determined sufficient, the recall status in RES is changed from completed to terminated. When FDA terminates a recall, the district office will close the recall file and notify the firm, in writing, that it can cease recall activity. Figure 1 displays the general process from initiating to terminating a recall. From 2005 through 2009, firms initiated 3,510 medical device recalls. Most of these were for medical devices in five areas of use or medical specialty areas. On average, the recall process took just over 420 days from initiation to termination, with class I recalls (the highest-risk recalls) averaging nearly 520 days. FDA has not routinely analyzed information about recalls to aid its oversight of the recall process, and thus could not explain trends in recalls over this time period. Between January 1, 2005, and December 31, 2009, firms initiated 3,510 device recalls, an average of just over 700 per year. 
The annual volume fluctuated over this period, ranging from a low of 658 in 2006 to a high of 796 in 2008. FDA classified the vast majority of all recalls—nearly 83 percent—as class II, meaning use of these devices may cause temporary adverse health consequences (moderate risk). FDA classified 14 percent as class III, meaning use of the device is not likely to cause any adverse consequences (lowest risk), and classified 4 percent as class I (highest risk), because FDA determined that there was a reasonable probability that the use of or exposure to a violative product would cause serious adverse health consequences or death (see fig. 2). The number of class I recalls initiated between 2005 and 2009 ranged from 17 to 41. For example, in 2007, 25 class I recalls were initiated; in 2008, 17 were initiated; and in 2009, 41 were initiated. In comparison, the number of class II recalls generally increased each year and consistently exceeded 500 annually. Our analysis found that approximately 60 percent of recalls during this period were for devices from five areas of use or medical specialty areas—cardiovascular, radiological, orthopedic, general hospital and personal use, and diagnostic chemistry. According to FDA, these medical specialties are among those with the greatest number of devices on the market, and four of the five specialties—cardiovascular, radiological, orthopedic, and general hospital—account for the greatest number of devices cleared or approved for marketing each year. The remaining recalls were for devices in 19 other areas (such as general and plastic surgery and neurological devices); no other specialty accounted for more than 8 percent of recalls (see table 2). As table 2 shows, for class I recalls, the greatest numbers were for devices from the cardiovascular medical specialty.
In addition, the table shows that devices from the general hospital and personal use and diagnostic chemistry medical specialties accounted for a substantial number of class I recalls. Among class I recalls, we found that the largest number for cardiovascular devices involved automatic external defibrillators. The largest number of recalls for general hospital and personal use devices involved infusion pumps, including implantable programmable pumps. RES also contains information on the root cause of recalls, that is, the problem creating a need for the recall. In 2008 and 2009 (the only years for which FDA tracked these data in RES), the greatest number of recalls was caused by problems with manufacturing processes. FDA refers to this root cause as process control—developing, conducting, controlling, and monitoring production processes to ensure that a device conforms to its specifications. Other leading causes were device design and software design. The two most common causes of class I recalls were the same as for all classes—process control and device design—but the third cause was component design or selection (see table 3). In general, FDA officials indicated they do not believe that there is a relationship between root cause and recall class. However, they indicated that some root causes of recalls are more likely to affect certain types of devices. For example, they stated that the root cause "incorrect or missing expiration date" is typically related to devices that involve sterilization. Among all classes of recalls, we found that a higher proportion of recalls were for devices that were cleared for market through the 510(k) process compared with other FDA review processes. This reflects the fact that the overwhelming majority of devices—99 percent, according to FDA—enter the market through this review process.
Our analysis of RES data for 2,773 recalls found that 87 percent of recalls involved a device cleared through the 510(k) process, nearly 8 percent involved a device approved through the more stringent PMA or PMA supplement process, and nearly 6 percent involved devices that were cleared through the 510(k) process and approved through the PMA process, or that were exempt from FDA review. We found similar trends for 101 class I recalls. We found that 74 of the recalls (73 percent) were for devices cleared through the 510(k) process, 22 percent were for PMA-approved devices, and the remaining 5 percent involved devices that were cleared through the 510(k) process and approved through the PMA process, or that were exempt from FDA review. Compared to all recall classes, a higher percentage of class I recalls involved devices approved through the PMA process (22 percent compared with 8 percent for all classes of recalls combined), which likely reflects the high risk of these devices. Additionally, we found that 14 of those 74 class I recalls involving devices that were cleared through the 510(k) process were for devices that FDA designated as high-risk devices—class III devices. We further found that all 14 recalls involved cardiovascular devices, including 12 for automatic external defibrillators. At the time of our review, the 3,510 medical device recalls initiated from 2005 through 2009 were in various stages of the recall process. Approximately 60 percent—2,050—of all recalls initiated in this period had been terminated by FDA as of April 16, 2010, the date we received data from FDA. Firms had completed another 5 percent and were awaiting FDA's review and decision on termination. The remaining 36 percent were ongoing (see fig. 3). We found that for recalls that had been terminated, the time between the firm's initiation and FDA's termination of a recall varied by class. On average, over 420 days passed between initiation of a recall and FDA's termination.
Among all classes of recalls, class I recalls took the longest—on average 516 days—while class II recalls averaged 436 days and class III recalls averaged 352 days. The amount of time needed to conduct and terminate recalls was split roughly evenly between the portions of the process that are primarily the recalling firms' responsibility—conducting the recall itself—and the portions that are primarily FDA's responsibility—oversight of the recall (see fig. 4). FDA frequently did not meet its 3-month time frame for terminating completed recalls. It did not meet this time frame for more than half of all recalls and over 70 percent of class I recalls (see fig. 5). On average, FDA took 192 days to terminate a recall after it determined a recall was completed, more than twice the time specified in its procedures. For class I recalls, the average was 250 days. FDA could not specifically identify reasons why it took this amount of time to make termination decisions. The agency did indicate that termination time frames are affected both by FDA's ability to address recalling firms' termination requests and by firms' ability to provide adequate information in support of the termination decision. This information may include a sufficient corrective and preventive action plan to prevent a recurrence of the problem that led to the recall. These data indicate that the timeliness of recall termination decisions appears to have deteriorated since our 1998 report. At the time of our review, 36 percent—1,268 recalls—were ongoing. Most of these recalls were initiated in 2008 and 2009; however, 456 (36 percent) had been ongoing for at least 2 years, including 86 that had been ongoing for nearly 5 years—since 2005, the beginning of our review period.
Although RES contains numerous data elements that would allow for analyses of recall data, FDA is not effectively using these data to identify whether there are systemic problems underlying recalls. Instead of using RES to conduct systemic analyses of recalls, which would be consistent with one of the agency’s strategic goals—improving the quality and safety of manufactured products in the supply chain—FDA has used RES primarily for processing and tracking the progress of individual recalls. Agency officials have not been using RES as a management tool to conduct broad surveillance of recalls and related issues. Neither the district offices we contacted nor CDRH officials prepared routine reports that would enable officials to identify areas of potential concern in the recall process, such as recalls that have been ongoing for an extended period, or whether specific manufacturing or design problems are causing increases in recalls or the types of devices being recalled. In fact, FDA officials appeared to be unaware of RES’s capability to generate summary data. When we requested data from RES, FDA staff were unable to extract these data themselves, and initially indicated that it would be impossible to obtain data from RES. After 2 months, FDA officials concluded that through a special arrangement with a contractor they could obtain the RES data and meet our request. After we completed our analysis of the RES data, we provided key summaries to FDA officials, and asked them to comment on trends that we observed. Officials indicated that they have not fully analyzed these data and could not explain trends without extensive research of individual case files. They indicated that at most, they could offer speculation about some of the trends we observed. For example, they could not explain why the majority of recalls are class II, why class I recalls more than doubled between 2008 and 2009, or why many recalls had been ongoing for 5 years. 
Officials also could not provide definitive answers when we asked them to comment on other related topics, such as:

common causes of recalls;

trends in the number of recalls over time;

variation in the numbers of recalls by recall classification levels;

types of devices and medical specialties of devices accounting for most recalls;

the length of time needed for firms to complete recalls; and

the length of time needed for FDA to terminate recalls.

Although FDA has not been routinely analyzing recall data to identify whether there are systemic problems affecting recalls, officials indicated they have used these data to help direct their inspection resources and to support compliance and enforcement actions. First, FDA officials indicated they use recall information as one of many elements to assess the relative risks that device manufacturers present, and thus which firms the agency should inspect in a given year. For example, the officials said that recall data are one of several elements that feed into a predictive model that determines the likelihood that firms are out of compliance with applicable laws or regulations, and therefore in need of inspection. Second, they told us they have plans to use recall information as the basis for developing a directed inspection plan. As part of this project, officials would use recall information to identify those firms that generate a large number of recalls and target them for inspection. Officials indicated that these inspections would focus on specific areas—such as a particular manufacturing process. This effort is still in the planning phase, and officials have not yet established criteria, such as what constitutes a large number of recalls, for determining which firms to select. The officials also indicated that progress may be slow because they do not have sufficient resources available to devote to this effort.
Although FDA has not regularly been using data to identify systemic problems, we found one example of FDA using recall data to detect and address safety issues with a particular type of device. In December 2010, FDA held a conference on a variety of issues related to automatic external defibrillators, including the safety of these devices. During this conference, it presented historical recall data to help demonstrate the need for a specific focus on safety improvements for this type of device. Gaps in the medical device recall process limit firms' and FDA's ability to ensure that the highest-risk recalls are implemented effectively and terminated in a timely manner. We found that both FDA and recalling firms generally upheld their respective responsibilities in the course of initiating and classifying recalls. However, FDA did not always follow its own procedures, and some of those procedures are unclear. FDA did not consistently inspect the manufacturing establishments of recalling firms as outlined in the agency's procedures. FDA has also not established criteria, such as thresholds based on the nature of devices, for assessing whether firms effectively completed recalls by correcting or removing a sufficient number of recalled devices. Further, we found that firms face challenges, such as locating specific devices or users of devices, and often could not correct or remove all devices. We also found that audit checks, a key mechanism for FDA's oversight of firms' conduct of recalls, are limited in scope. In addition, because of a lack of clarity in FDA's audit check procedures, they have been implemented inconsistently by FDA's district offices. Finally, FDA frequently failed to make recall termination decisions in a timely manner and kept no documentation justifying those decisions.
In our review of a sample of the highest-risk device recalls initiated from January 1, 2005, through December 31, 2009, we found that once firms initiated recalls, they generally provided FDA with a correction or removal report in a timely manner—within FDA's 10-day time frame. In 51 of the 53 recalls (96 percent), firms submitted a correction or removal report to FDA. For 43 of these 51 recalls, firms submitted the report within 10 working days of initiating the recall. For 6 of the remaining 8 recalls, the correction or removal report was submitted within 21 business days; the reports for the other 2 recalls were submitted 62 business days and 227 business days after the recall was initiated, respectively. Table 4 shows the proportion of recalls, by district, where firms submitted these reports and whether they were submitted within 10 working days. Although our analysis indicates that firms generally provided these reports after initiating the recalls, FDA officials cautioned that this does not mean firms fully complied with the regulatory reporting requirements. They indicated that in some cases, firms' initial correction or removal reports lacked some of the needed information, and extra time was required for firms to provide additional information. To help address this, officials indicated that in November 2010 they began a recall process improvement project. As part of this initiative, CDRH plans to develop Web-based training modules for industry clarifying the information that needs to be provided when reporting corrections and removals to FDA. FDA infrequently—in less than one-half of the recalls—conducted an establishment inspection upon learning of a recall. According to FDA's procedures, upon learning of a potential class I recall, district offices should conduct establishment inspections to obtain further information about the recall. We found that FDA conducted such recall-related inspections for 20 of the 53 class I recalls we reviewed.
The frequency of inspections varied across the four district offices monitoring the recalls. Three of these offices (Detroit, Los Angeles, and New England) conducted recall-related establishment inspections upon the initiation of a recall between 25 percent and 38 percent of the time, while the Minneapolis district office conducted them in 62 percent of recalls (see fig. 6). Based on interviews with FDA officials in four district offices, we found that decisions to conduct such inspections, given their overall inspection workload, are a matter of resources and timing. Some district officials also said that the decision to conduct a recall-related inspection is based on the firm's recall history and indicated that FDA may be less likely to inspect a firm with a history of completing recalls successfully. Finally, some of these FDA officials said that this is because firms that have successfully completed recalls generally provide the necessary information, such as determinations of the root cause of the recall, as part of their correction or removal reports. FDA generally followed its procedures by classifying each of the 53 recalls in our sample and providing written notification to the recalling firms. However, for 28 of the 53 recalls, FDA did not make its classification determination within 31 days as outlined in its procedures. The amount of time from recall initiation to classification varied, ranging from a few days to several months, with an average of 47 days. Representatives from two device manufacturer associations and several device manufacturers expressed concern about the length of time it can take FDA to classify recalls. For a class I recall, firms must make greater efforts to identify and contact customers than for class II recalls. Thus, delays in FDA's classification can affect firms' decisions.
For example, officials indicated that if they send out a recall notice that they believe will be a class II, and after a significant amount of time FDA informs them it is a class I recall, the firm will have to revise the notice to indicate that the risks posed by the recall were more severe than they initially anticipated. The firm will also have to identify additional customers and device users to contact, in order to meet FDA's recommendation that firms conduct 100 percent effectiveness checks for class I recalls. Firm officials said that they will then send out the revised notice, which can create confusion about whether this is a new recall or whether it is an update with new instructions for the already ongoing recall. Our review of firms' actions in conducting recalls found that the status of recalls varied and that firms face challenges in correcting or removing all recalled products. Of the 53 recalls we reviewed, we found 13 were ongoing, 10 were completed—meaning that an FDA district office concluded that the firm had essentially fulfilled its responsibilities for correcting or removing the devices—and 30 were terminated—meaning FDA headquarters determined that the firms' corrective actions were sufficient to prevent a recurrence of the problems that led to the recall (see fig. 7). Of the 40 recalls in our sample that were either completed or terminated—meaning that FDA concluded that the firm had taken sufficient effort to correct or remove recalled devices—we found that for 19 (48 percent) of these recalls, firms were able to correct or remove all products. In the other 21 recalls (53 percent), firms were unable to correct or remove all products. These recalls ranged widely, in both the volume of devices subject to recall and the types of devices being recalled. Some recalls involved hundreds of thousands of disposable products, while others involved a small number of life-sustaining implantable devices.
Although recalling firms took steps to notify customers and device users, they were often unable to correct or remove all devices. This was because firms could not locate some of the customers or device users, or these customers or device users could not locate the device subject to recall. In other cases this was because the devices had been disposed of (such as defective syringes), or were sold at retail outlets (such as glucose test strips) to individuals who may not have known about the recall. For example, in a recall of tracheal tubes included in certain pediatric medical kits, 1,400 tubes had been distributed, but only 200 were returned to the recalling firm. The firm said that the rest had likely been used. Finally, users occasionally were unwilling to return a device. For example, one recall involved a magnetic device designed to treat a variety of medical problems such as lower back pain, fibromyalgia, and arthritis. This device was never cleared or approved by FDA, and despite FDA warnings about the device, users who had purchased units refused to return them. Details concerning the 21 recalls for which firms were not able to correct or remove all devices are presented in appendix II. Our review of FDA's actions for conducting and overseeing recalls revealed that FDA generally conducted audit checks for the class I recalls we reviewed, but we found that unclear procedures led to numerous inconsistencies in how different investigators conducted these checks and made their determinations about the effectiveness of recalls. FDA conducted audit checks for 45 of the 53 recalls (85 percent) we reviewed. Our analysis of 2,196 audit check forms associated with these recalls found that audit checks completed for nearly 90 percent of the recalls contained a variety of inconsistencies in how the audit checks were implemented and documented.
For each of these recalls, we found inconsistencies in how different investigators determined whether a recall was effective or ineffective when conducting their audit checks. We also identified inconsistencies in the level of detail provided in the audit check report and in the level of effort undertaken by different investigators. Specifically, we found the following.

Some investigators' audit checks concluded that recalls were effective, despite noting problems (such as device users not following the firm's instructions), while other investigators concluded that similar instances were ineffective. For example, in 2008 a firm initiated a recall of an implantable pump because of problems in the connection between a catheter and the pump, which could result in improper amounts of medication being delivered to a patient. The firm's recall notification alerted physicians to this problem, and provided instructions for monitoring patients who already had the implanted pump and for revising future implant procedures. As part of the audit check program for this recall, FDA's investigators contacted a sample of physicians to determine whether they received the notification and followed the instructions. Our review found that out of 68 audit checks, there were 14 instances where the investigators noted that physicians either did not receive the recall notification, or did not remember receiving it, and thus could not have followed the recall notice instructions. In 8 of these 14 instances, investigators concluded that the recall was ineffective, noting that the physicians did not implement the recall instructions. In contrast, in the other 6 instances they concluded the recall was effective, even though physicians could not have followed the recall instructions. In some cases this was because the firm provided evidence that it had notified the physician, and in others the investigator noted that the physician did not have any pumps on hand.

Some investigators determined that device users were not notified of the recall by the recalling firm, but instead learned of the recall through other means. In some of these instances, investigators' audit checks concluded that recalls were effective, while in other similar cases investigators concluded the checks were ineffective.

Some investigators wrote detailed comments on the audit check form as to why the investigator determined the recall was effective or ineffective, while others did not. Without comments, it may be difficult for FDA supervisors and district recall coordinators to verify whether an investigator correctly determined whether the recall was effective or ineffective.

Some investigators noted actions they took when they discovered problems with recalls, such as providing the device users with a copy of the recall notice or instructing them on actions to take in order to implement a recall. In contrast, other investigators did not indicate whether they made any attempt to help facilitate the recall. For example, in 2009 a firm initiated a recall of an automated external defibrillator because of reports that some of these devices failed to discharge sufficient energy due to problems with batteries. The firm issued a notice that instructed users to replace batteries and update software for the devices. As part of the audit checks for this recall, FDA investigators contacted a sample of users of the device, to check whether they received the recall notification and followed the firm's recall instructions. Our review found that out of 67 audit checks, there were 35 instances where investigators noted problems with the recall—generally that the user did not receive the notice or failed to follow recall instructions. In 29 of these cases, the FDA investigator noted taking actions, including providing the recall notice or instructing the user to contact the recalling firm to obtain the software needed to perform the required actions.
However, in 6 cases we found no indication that the FDA investigator took actions to ensure the recall was carried out effectively. FDA officials at both ORA and the district offices we contacted acknowledged that there are no detailed instructions or requirements for conducting audit checks, and that there can be inconsistencies in the process. Officials told us that when determining whether or not a check is effective, investigators should be assessing whether the recalling firm provided the notice and instructions to the customers or device users, and whether the customers or users followed instructions. They acknowledged, however, that some investigators may approach these checks differently, and that this may be an area where clarification of the agency’s procedures is needed. During our interviews with officials from the Detroit, Los Angeles, Minneapolis, and New England district offices, some officials said that audit checks are typically conducted by new investigators, and that investigators receive classroom and on-the-job training on how to conduct such checks. Some district officials also noted that audit checks are reviewed by a supervisor as well as the recall coordinator in the district office that is monitoring the recall, and this serves as a quality control function to ensure consistency. Also, officials from FDA headquarters and some district offices stated that they have attempted to institute measures to improve the audit check process. Specifically, they noted that they recently updated the audit check form to more precisely reflect what makes a recall ineffective. Also, ORA officials indicated that they plan to automate the audit check forms, which will make the forms accessible to officials in FDA’s headquarters. FDA officials said that they are considering applications for analyzing the automated data, but have not completed any specific plans. 
In addition to the inconsistencies, we found other gaps in FDA's oversight related to the audit checks. First, FDA's audit checks were often narrow in scope, in that FDA instructs investigators to contact only a small number of customers or device users—between 2 percent and 10 percent of those affected by the recall. Therefore, if there are thousands of customers or device users, the audit checks provide FDA with a means to contact a relatively small number of them. For the 45 recalls for which FDA completed checks, we found the number of audit checks conducted varied widely, from 2 to 271, with an average of 51 audit checks per recall. Second, FDA investigators did not always conduct the assigned number of audit checks. We compared the number of audit checks that should have been conducted based on the audit check assignments to the number of checks actually completed. We found that for 17 of the 45 recalls (38 percent), fewer than the assigned number were conducted. Third, even though most checks were done in person, consistent with FDA's procedures, over 22 percent of the checks were done by telephone. In these cases, the audit check relied extensively on anecdotal information provided by the customer or device user. According to FDA, the number of checks it can perform is limited by available resources. Based on our review of files, we found that if patients or consumers were involved (e.g., if FDA needed to contact someone with an implantable device), these checks were often done by telephone. We also found checks that were done by telephone for other device users, including hospitals, retailers, and doctors' offices. We found FDA lacks specific criteria for making decisions about whether recalling firms have adequately completed their recalls—a key oversight activity of the recall process. FDA officials indicated they consider a recall complete when a firm has completed actions outlined in its recall strategy.
In particular, they evaluate whether firms completed their assigned level of effectiveness checks, and have corrected or removed recalled devices in “an acceptable manner.” However, our review of FDA’s recall procedures found—and FDA officials confirmed—that the procedures do not contain any specific criteria or general guidelines governing the extent to which firms should be correcting or removing various types of devices before a recall should be considered completed. For example, FDA does not have a benchmark recovery rate or threshold to assess whether firms effectively completed recalls, although the recovery rates of devices could be expected to vary, depending on whether a recalled device was a large piece of hospital equipment or a disposable device, such as a syringe. Representatives from medical device firms stated that there are no criteria or guidance from FDA on the percentage of recalled products that must be corrected or removed. Further, these firm representatives said that FDA is generally satisfied with three attempts at communicating with customers and device users affected by the recall. In addition, for a majority of the class I recalls we reviewed, FDA’s actions to ensure that recalls were complete were inconsistent with its procedures for overseeing recalls. According to FDA’s procedures, districts should conduct a limited postrecall inspection to verify that the recall is complete, and to witness destruction of defective products, if applicable. In 21 of the 40 completed and terminated recalls (53 percent) we found no documented evidence that FDA took actions besides audit checks to verify that the recall was complete. In the other 48 percent of recalls, FDA made an assessment via inspection, witnessing destruction of devices, or verifying that software corrections were completed. Another gap we found in the recall process is that FDA does not maintain sufficient documentation to justify its termination decisions. 
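To make the gap concrete, a recovery-rate benchmark of the kind FDA lacks might look like the following sketch. The device categories and thresholds here are invented for illustration only; FDA has established no such criteria, which is the gap this report identifies:

```python
# Hypothetical sketch -- these thresholds are NOT FDA criteria. They
# illustrate how a recovery-rate benchmark for deciding a recall is
# "complete" might vary by device type, as the report suggests.
THRESHOLDS = {
    "capital_equipment": 0.95,  # large hospital equipment is traceable
    "disposable": 0.50,         # syringes etc. may already be used or discarded
}

def recovery_rate(corrected_or_removed: int, distributed: int) -> float:
    """Fraction of distributed devices the firm corrected or removed."""
    return corrected_or_removed / distributed if distributed else 1.0

def meets_benchmark(device_type: str, corrected: int, distributed: int) -> bool:
    """Would the recall clear the (invented) threshold for its type?"""
    return recovery_rate(corrected, distributed) >= THRESHOLDS[device_type]

print(meets_benchmark("capital_equipment", 96, 100))  # -> True
print(meets_benchmark("disposable", 40, 100))         # -> False
```

Even this simple rule would let reviewers in different districts reach the same conclusion from the same facts, which the report notes is not the case today.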
Although FDA may request that firms submit corrective and preventive action plans for review and approval before a recall can be terminated, we found little documentation on how FDA assessed whether such plans were sufficient when it terminated recalls. When we asked to review documentation justifying the decisions for the terminated recalls in our sample, FDA officials indicated that they do not maintain extensive documentation justifying the basis for their termination decisions. They told us that creating documentation to support concurrence with the termination recommendation is not part of past or current termination procedures. This approach is inconsistent with internal control standards for the federal government, which indicate “that all transactions and other significant events need to be clearly documented” and stress the importance of “the creation and maintenance of related records which provide evidence of execution of these activities as well as appropriate documentation.” Without such documentation, we were unable to assess the extent to which FDA’s termination process appropriately evaluated recalling firms’ corrective actions. Also, we found that FDA termination decisions were frequently not made in a timely manner—within 3 months of the completion of the recall—increasing the risk that unsafe or defective devices remained available for use. Of the 53 files in our sample, 30 were terminated—meaning FDA headquarters determined that firms developed sufficient corrective actions to prevent a recurrence of the problems that led to the recalls. For 73 percent of the terminated recalls, FDA did not make its termination decision within 3 months of the recall’s completion, as indicated by FDA procedures. Overall, termination decisions took between 10 and 800 business days from completion to termination, with an average of 187 business days. 
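The timeliness benchmark described above can be sketched as follows. The 63-business-day approximation of FDA's 3-month window (21 business days per month) is our assumption, and the helper functions are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical helper (not an FDA tool): flag termination decisions that
# exceed FDA's 3-month benchmark, approximated here as 63 business days
# (21 business days per month is an assumption for illustration).
BENCHMARK_BUSINESS_DAYS = 63

def business_days_between(start: date, end: date) -> int:
    """Count weekdays from start (exclusive) to end (inclusive)."""
    days = 0
    current = start
    while current < end:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days += 1
    return days

def is_untimely(completed: date, terminated: date) -> bool:
    """True if the decision took longer than the benchmark."""
    return business_days_between(completed, terminated) > BENCHMARK_BUSINESS_DAYS

# GAO found decisions took 10 to 800 business days (average 187), so a
# recall completed in January and terminated in November is flagged.
print(is_untimely(date(2008, 1, 7), date(2008, 11, 3)))  # -> True
print(is_untimely(date(2008, 1, 7), date(2008, 2, 4)))   # -> False
```

Against this yardstick, the 187-business-day average found in our sample is roughly triple the benchmark.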
Failure to make termination decisions in a timely manner increases the risk that patients and healthcare providers may continue to use unsafe or defective devices. For example, one firm requested termination from FDA for its recall of a portable external defibrillator in February 2006. However, FDA did not begin its termination assessment until May 2010. In this case, officials indicated that, due to staff turnover in the district office, they were unaware that this recall was still ongoing until a new recall coordinator searched for ongoing recalls. In 2010, following an FDA inquiry, the firm stated that it had not received confirmation of a required upgrade from 91 end users and an additional 13 devices could not be located. Because FDA did not follow up on this recall until 2010, 4 years had elapsed before the agency became aware that the recalling firm had not corrected or removed a substantial number of devices subject to the recall. According to FDA officials, their ability to terminate recalls in a timely manner is affected by resources, and termination decisions are a lower priority than other issues because the recalling firm has completed its actions. We found at least one instance where FDA’s failure to make a timely termination assessment allowed for a potentially unsafe product to be reintroduced into the market and used for surgical procedures. In this case, based on adverse event reports that screws in its spinal fixation system were becoming loose postoperatively, the firm decided to recall the device in December 2005. The firm implemented its recall and removed all devices. The firm indicated that it developed a corrective action for the screw problem, and relaunched the device in April 2006. It then requested termination from FDA in May 2006. FDA followed up on this request by leaving three voice mail messages with the firm, and received no response. The agency sent out a request for information a year later, in May 2007. 
In June 2007, the company again indicated that the recall was complete, and requested termination. In September 2007, FDA conducted an inspection of the company’s manufacturing facility, and found that while the recall was complete, the corrective action was not adequate. Over the course of the next 2 years, the firm worked with FDA to get revisions to the device approved, but eventually agreed to a second recall for the revised device. This recall was initiated in May 2009. We identified five reports of adverse events related to continuing problems with the implanted device that were filed with FDA subsequent to the firm’s relaunch of the device in April 2006. These reports were filed from December 2006 through March 2007, and revealed that in all cases, patients required surgical intervention to correct or remove the device. The medical device recall process is complex, requiring the coordination and timely action of potentially thousands of parties. It is an important tool used by firms and FDA to protect the public and mitigate health risks from unsafe or ineffective devices. While the recall process may not eliminate 100 percent of health risks associated with recalled devices, careful implementation and evaluation are critical to minimizing health risks. FDA has a key role in identifying and minimizing the public health risks presented by defective or unsafe devices. In this regard, FDA has opportunities to close some of the gaps that currently exist in the medical device recall process, and enhance its oversight of device recalls. As currently structured, FDA’s approach to oversight of medical device recalls is reactive—responding to individual recalls as they occur. Rather than pursuing a strictly case-by-case approach to overseeing recalls, FDA could take a more proactive approach to its oversight. 
The agency has a wealth of data available on thousands of recalls but, at present, is not effectively reviewing and analyzing these data in a systematic manner. More routine analyses of these data could help FDA identify trends in the numbers and types of devices being recalled, as well as the underlying causes of device recalls. Such information would provide FDA with a better understanding of the risks presented by defective or unsafe devices, which could lead the agency to proactively identify strategies and measures needed to address systemic problems with the design or manufacture of individual devices or entire categories of devices. Armed with the results of these types of analyses, FDA could then be in a position to help mitigate safety risks before they occur, and thus minimize the need for recalls. This is particularly important for the devices involved in the highest-risk recalls, which place the public at risk of serious health consequences, including death. Furthermore, while the agency has devoted substantial resources to monitoring individual recalls, opportunities for enhancing its oversight of specific recalls also exist. A key FDA mechanism for overseeing individual recalls—audit checks of a small portion of customers and device users involved in the recall—is often implemented inconsistently. This is due to unclear procedures for implementing and documenting audit checks and making final assessments. As a result, investigators can make inconsistent determinations about whether firms, customers, and device users have effectively conducted a recall. Additionally, FDA lacks clear criteria for determining whether firms have successfully completed recalls, and has failed to maintain important documentation justifying its decisions to terminate the highest-risk recalls. 
This impedes independent assessments of FDA’s decision making and leaves the agency vulnerable to questions about the basis it used to determine that recalling firms fulfilled all their responsibilities when conducting recalls. By addressing these weaknesses, FDA could reduce the risk that defective or unsafe medical devices remain on the market, potentially endangering public health. To enhance FDA’s oversight of medical device recalls, and in particular, those medical device recalls that pose the highest risk, we recommend that the Commissioner of FDA take the following four actions: Create a program to routinely and systematically assess medical device recall information, and use this information to proactively identify strategies for mitigating health risks presented by defective or unsafe devices. This assessment should be designed, at a minimum, to identify trends in the numbers and types of recalls, devices most frequently being recalled, and underlying causes of recalls. Clarify procedures for conducting medical device recall audit checks to improve the ability of investigators to perform these checks in a consistent manner. Develop explicit criteria for assessing whether recalling firms have performed an effective correction or removal action. Document the agency’s basis for terminating individual recalls. We provided a draft of this report to HHS for review. HHS’s written comments are reprinted in appendix III. HHS agreed with our conclusions and recommendations and stated that the agency is committed to exploring each of our recommendations fully. HHS reported that FDA plans to convene a working group to both evaluate improvements to the recall process and to develop strategies to implement our recommendations. According to HHS, FDA recognizes that standardized guidance will strengthen the management of the recall process. 
In addition, HHS elaborated on some of FDA’s efforts to enhance its oversight by, for example, developing more routine analysis and reporting of recall data. HHS also provided technical comments, which we incorporated as appropriate. We are greatly encouraged by the agency’s response, and believe its expeditious implementation of the recommendations will serve to enhance the safety of medical devices used by millions of Americans each day. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of FDA and appropriate congressional committees. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Several key stakeholders involved in the medical device recall process, including recalling firms, device distributors, and device users—such as hospitals—share responsibilities for effectively implementing recalls. To implement an effective recall, stakeholders need mechanisms to ensure timely and open communication about the recalls, and a means of locating devices subject to recall. This appendix describes the recall notification and tracking systems available to help manage device recalls. It also provides information on the status of the Food and Drug Administration’s (FDA) unique device identification (UDI) initiative—which is intended to enable the identification of a device throughout distribution and use. 
To obtain this information, we interviewed officials from FDA and key stakeholders, including representatives of firms providing subscription-based recall alert information, manufacturers, distributors, group purchasing organizations, hospital systems, and patient safety groups. Through these interviews we obtained information on what these stakeholders considered to be the key challenges they face in implementing device recalls. We also obtained information about mechanisms which, in particular, hospital systems use to help identify devices subject to recall. Further, we reviewed FDA’s progress in implementing its UDI initiative. To accomplish this, we examined published studies on this initiative, reviewed stakeholder comments submitted for FDA’s public meetings on the UDI, and interviewed FDA officials responsible for managing the UDI program. We learned through stakeholder interviews that instead of relying solely on notifications from medical device manufacturers, health care providers and other stakeholders have come to rely on other sources of information to stay abreast of potential recalls. Furthermore, these electronic communication technologies are evolving and, over time, stakeholders have begun recognizing that they can play a role in helping to effectively implement recalls. Our interviews with stakeholders revealed that a number of different privately developed, subscription-based electronic notification and tracking systems are available to help identify and process recalls. Stakeholders indicated that these systems are primarily used by hospitals, but that the systems are available to others involved in recalls as well. Available services identify recalls from a number of sources, including device manufacturers and FDA’s Web site. The services compile lists of recalls and send electronic messages about recalls to paid subscribers. These services vary in sophistication and price. 
One service we learned about was limited to periodic electronic notification of all recalls at a cost of about $500 per year. Others include software to help individual hospitals specifically delegate responsibility within their hospital system to specific officials who will manage certain aspects of the recalls—such as removing recalled products from inventory—and for tracking the progress of the recalls. These systems can cost several thousand dollars per year. Owners of these systems that we spoke to indicated that hundreds of hospitals subscribe to their systems (see table 5). Stakeholders we interviewed identified several operational benefits of using recall notification systems. First, they indicated these systems allow for an increased ability to identify the universe of recalls rather than simply relying on receiving notices from recalling firms. Second, stakeholders indicated that quality assurance measures used by the systems help ensure that the recall notifications contain sufficiently detailed and accurate information. They indicated that personnel working for such systems will review the recall notifications they compile prior to sending them to the subscribers. If needed, those providing the recall alert services will contact the recalling firm and update the notice for the subscribers if information is unclear or missing. Third, stakeholders indicated that such systems can help ensure that recall notifications are routed to specific personnel within an institution responsible for managing the recall, reducing the likelihood that implementing the recall is delayed or overlooked. Officials from some hospitals we spoke with indicated that manufacturers will frequently notify the department in a hospital that received the product, which may not be the best point of contact for ensuring recalled devices are corrected or removed. 
However, one hospital system indicated that by using the more sophisticated alert systems, they are able to automatically forward recall alerts to key personnel specifically identified by the hospital. This ensures that only the appropriate departments at the hospital are alerted. Finally, stakeholders stated that these systems allow hospitals to identify and process recalls sooner. The Food and Drug Administration Amendments Act of 2007 (FDAAA) required FDA to develop a UDI system—a major initiative to better track and identify devices. Through the UDI, FDA plans to require that the label of a device bear a unique identifier that is able to identify the device throughout its distribution and use. Figure 8 displays an example of the key attributes that might be included in a UDI. In this example, the label includes key information, such as a device’s lot number and expiration date which could be scanned into databases. FDA has been working on the UDI since 2005, prior to the enactment of FDAAA, and has made some preliminary decisions about the system. According to FDA, it currently has a proposed schedule that calls for implementation of the UDI in phases over several years. Key activities completed for the UDI include the following. April 14-15, 2005: FDA held a workshop to obtain comments from various stakeholders on the UDI. A draft report about the UDI prepared by a contractor for FDA, known as “The White Paper,” was provided to attendees prior to the workshop to use as background for workshop discussions. August 17, 2005: The White Paper was issued and provided information on technologies and standards available for the UDI initiative and the possible benefits of automatic identification of devices. The paper also identified key issues FDA should consider moving forward, including costs of the UDI. Also, the paper incorporated stakeholder comments from the workshop held in April 2005. 
March 22, 2006: Another contractor issued a report outlining the possible benefits of the UDI and decisions FDA must make to implement the system, including the technology needed to use the UDI. August 11, 2006: FDA formally solicited comments in the Federal Register for the UDI initiative. September 27, 2007: FDAAA was enacted, requiring FDA to develop and implement the UDI. February 12, 2009: FDA held a public workshop on the UDI to identify remaining issues related to the establishment of a UDI system and to request comments on this topic. November 20, 2009: FDA published the results of a pilot test of the UDI. The results included several recommendations for the future of the UDI, including specific changes that could enhance the UDI’s functionality. November 30, 2010: Another report on pilot activities was published containing feedback from organizations that will label the devices and internal FDA stakeholders. The report stated that fewer concerns remained as FDA neared release of the UDI regulation. According to FDA, the UDI implementation schedule calls for a phased approach that will take several years to reach full-scale implementation. FDA is currently working on a proposed rule and intends to publish it and seek public comments in spring 2011, and issue a final rule 12 to 18 months later. According to FDA’s senior advisor for the UDI, the proposed rule will include several key decisions that FDA, based on its prior studies, has reached regarding the UDI. These key decisions include the following. Provisions for a UDI database that FDA will maintain. Manufacturers will send key information about their devices to FDA, which will maintain a database containing a device identifier for all devices distributed in the United States. Flexibility to allow manufacturers to decide how to label their devices using automatic identification and data capture. This could mean using a linear or two-dimensional bar code, or radio frequency identification. 
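As an illustration of how label attributes like those in figure 8 might be machine-read, the sketch below parses a GS1-style human-readable UDI string. GS1 application identifiers are one labeling convention; the parser, the attribute names, and the sample label itself are hypothetical:

```python
import re

# Illustrative sketch only: parsing a GS1-style human-readable UDI string
# into named attributes. The application identifiers used here -- (01)
# device identifier, (17) expiration date, (10) lot, (21) serial --
# follow GS1 conventions; the sample label is invented.
AI_NAMES = {
    "01": "device_identifier",
    "17": "expiration_date",  # YYMMDD
    "10": "lot_number",
    "21": "serial_number",
}

def parse_udi(label: str) -> dict:
    """Split '(AI)value(AI)value...' into a dict of named attributes."""
    fields = re.findall(r"\((\d{2})\)([^(]+)", label)
    return {AI_NAMES.get(ai, ai): value for ai, value in fields}

label = "(01)00844588003288(17)141120(10)A213B1(21)1234"
print(parse_udi(label))
# -> {'device_identifier': '00844588003288', 'expiration_date': '141120',
#     'lot_number': 'A213B1', 'serial_number': '1234'}
```

A structure like this is what would let a hospital inventory system match a scanned device against a recall notice by lot or serial number, rather than by manual search.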
In addition, FDA indicated that there are other issues on which it has not yet made final decisions and that it is still assessing before issuing the proposed rule. These include the following. The labeling requirements for different devices, for example, riskier devices may be labeled with a unique identifier individually, while disposable, low-risk devices may be labeled based on how they are packaged (e.g., bandages will have their UDI identifier on their box). Whether the UDI should have a phased implementation schedule for administering the identifiers, for example, class III devices—the riskiest devices, including some that are implantable—may use the UDI within 1 year of publication of the final rule, while class II and class I devices might follow, meeting the UDI requirement within 3 and 5 years, respectively. Figure 9 presents a timeline of key activities since FDA began assessing the UDI and its planned implementation schedule. An FDA official said that the agency expects that the UDI will provide benefits beyond increased precision in identifying recalled devices, and that some benefits of the UDI will be realized immediately after its implementation. According to the UDI senior advisor, these benefits include improved tracking of adverse events associated with medical devices and prevention of device counterfeiting. He also stated that many manufacturers already use identifiers on their devices and should have little problem adapting to the new UDI system. Despite the potential benefits of the UDI, some stakeholders expressed concern that the success of UDIs depends on hospitals’ ability to utilize these identifiers, and that it may be years before the benefits to the recall process are realized. Manufacturers we contacted stated that many hospitals do not use the lot and serial numbers currently provided by manufacturers to track devices, and FDA does not have authority to require providers to use the UDI. 
This concern was also reflected in comments from officials at several hospitals that we contacted. Some indicated that they do not have inventory systems in place that enable them to track devices throughout their hospitals. Therefore, they must manually search their inventory, sometimes at multiple locations. Locating a recalled device can be particularly difficult because a device may contain multiple identification numbers assigned by manufacturers and distributors for their own tracking purposes. Without upgrades to these hospitals’ systems, officials acknowledged that the UDI will be less effective in enhancing patient safety. FDA’s UDI senior advisor stated that larger hospitals might be more eager to adopt the technology necessary to track devices using the UDI once it is implemented, but acknowledged that benefits for the recall process are greatly dependent on hospitals’ implementation of the UDI, which could take up to 10 years for many hospitals, especially smaller ones. In some instances, recalling firms are not able to correct or remove all of the devices subject to a recall. Of the 53 class I recalls in our sample, which were initiated from January 1, 2005, through December 31, 2009, there were 21 in which firms were unable to correct or remove all of the recalled devices. Table 6 includes information on the number of devices subject to these 21 recalls, the number corrected or removed, and if available, reasons firms provided to FDA explaining why they could not correct or remove 100 percent of the devices. In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Kaycee Glavich; Cathleen Hamann; Eagan Kemp; Julian Klazkin; Zachary Levinson; David Lichtenfeld; Daniel Ries; Christina C. Serna; and Katherine Wunderink made key contributions to this report.
Recalls are an important tool to mitigate serious health consequences associated with defective or unsafe medical devices. Typically, a recall is voluntarily initiated by the firm that manufactured the device. The Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), oversees implementation of the recall. FDA classifies recalls based on health risks of using the recalled device--class I recalls present the highest risk (including death), followed by class II and class III. FDA also determines whether a firm has effectively implemented a recall, and when a recall can be terminated. This report identifies (1) the numbers and characteristics of medical device recalls and FDA's use of this information to aid its oversight, and (2) the extent to which the process ensures the effective implementation and termination of the highest-risk recalls. GAO interviewed FDA officials and examined information on medical device recalls initiated and reported from 2005 through 2009, and reviewed FDA's documentation for a sample of 53 (40 percent) of class I recalls initiated during this period. From 2005 through 2009, firms initiated 3,510 medical device recalls, an average of just over 700 per year. FDA classified the vast majority--nearly 83 percent--as class II, meaning use of these recalled devices carried a moderate health risk, or that the probability of serious adverse health consequences was remote. Just over 40 percent of the recalls involved cardiovascular, radiological, or orthopedic devices. FDA has used recall data to monitor individual recalls and target firms for inspections. However, it has not routinely analyzed recall data to determine whether there are systemic problems underlying trends in device recalls. Thus, FDA is missing an opportunity to use recall data to proactively identify and address the risks presented by unsafe devices. 
Several gaps in the medical device recall process limited firms' and FDA's abilities to ensure that the highest-risk recalls were implemented in an effective and timely manner. For many high-risk recalls, firms faced challenges, such as locating specific devices or device users, and thus could not correct or remove all devices. FDA's procedures for overseeing recalls are unclear. As a result, FDA officials examining similar situations sometimes reached opposite conclusions on whether recalls were effective. FDA had also not established criteria, based on the nature or type of devices, for assessing whether firms corrected or removed a sufficient number of recalled devices. Additionally, FDA's decisions to terminate completed recalls--that is, assess whether firms had taken sufficient actions to prevent a recurrence of the problems that led to the recalls--were frequently not made within its prescribed time frames. Finally, FDA did not document its justification for terminating recalls. If unaddressed by FDA, the combined effect of these gaps may increase the risk that unsafe medical devices could remain on the market. To aid its oversight of the medical device recall process, FDA should routinely assess information on device recalls, develop enhanced procedures and criteria for assessing the effectiveness of recalls, and document the agency's basis for terminating individual recalls. HHS agreed with GAO's recommendations.
In October 1998, the EPA Administrator announced plans to create an office with responsibility for information management, policy, and technology. This announcement came after many previous efforts by EPA to improve information management and after a long history of concerns that we, the EPA Inspector General, and others have expressed about the agency’s information management activities. Such concerns involve the accuracy and completeness of EPA’s environmental data, the fragmentation of the data across many incompatible databases, and the need for improved measures of program outcomes and environmental quality. The EPA Administrator described the new office as being responsible for improving the quality of information used within EPA and provided to the public and for developing and implementing the goals, standards, and accountability systems needed to bring about these improvements. To this end, the information office would (1) ensure that the quality of data collected and used by EPA is known and appropriate for its intended uses, (2) reduce the burden of the states and regulated industries to collect and report data, (3) fill significant data gaps, and (4) provide the public with integrated information and statistics on issues related to the environment and public health. The office would also have the authority to implement standards and policies for information resources management and be responsible for purchasing and operating information technology and systems. Under a general framework for the new office that has been approved by the EPA Administrator, EPA officials have been working for the past several months to develop recommendations for organizing existing EPA personnel and resources into the central information office. Nonetheless, EPA has not yet developed an information plan that identifies the office’s goals, objectives, and outcomes. 
Although agency officials acknowledge the importance of developing such a plan, they have not established any milestones for doing so. While EPA has made progress in determining the organizational structure of the office, final decisions have not been made and EPA has not yet identified the employees and the resources that will be needed. Setting up the organizational structure prior to developing an information plan runs the risk that the organization will not contain the resources or structure needed to accomplish its goals. Although EPA has articulated both a vision as well as key goals for its new information office, it has not yet developed an information plan to show how the agency intends to achieve its vision and goals. Given the many important and complex issues on information management, policy, and technology that face the new office, it will be extremely important for EPA to establish a clear set of priorities and resources needed to accomplish them. Such information is also essential for EPA to develop realistic budgetary estimates for the office. EPA has indicated that it intends to develop an information plan for the agency that will provide a better mechanism to effectively and efficiently plan its information and technology investments on a multiyear basis. This plan will be coordinated with EPA’s agencywide strategic plan, prepared under the Government Performance and Results Act. EPA intends for the plan to reflect the results of its initiative to improve coordination among the agency’s major activities relating to information on environment and program outcomes. It has not yet, however, developed any milestones or target dates for initiating or completing either the plan or the coordination initiative. In early December 1998, the EPA Administrator approved a broad framework for the new information office and set a goal of completing the reorganization during the summer of 1999. 
Under the framework approved by the EPA Administrator, the new office will have three organizational units responsible for (1) information policy and collection, (2) information technology and services, and (3) information analysis and access, respectively. In addition, three smaller units will provide support in areas such as data quality and strategic planning. A transition team of EPA staff has been tasked with developing recommendations for the new office’s mission and priorities as well as its detailed organizational and reporting structure. In developing these recommendations, the transition team has consulted with the states, regulated industries, and other stakeholders to exchange views regarding the vision, goals, priorities, and initial projects for the office. One of the transition team’s key responsibilities is to make recommendations concerning which EPA units should move into the information office and in which of the three major organizational units they should go. To date, the transition team has not finalized its recommendations on these issues or on how the new office will operate and the staff it will need. Even though EPA has not yet determined which staff will be moved to the central information office, the transition team’s director told us that it is expected that the office will have about 350 employees. She said that the staffing needs of the office will be met by moving existing employees in EPA units affected by the reorganization. The director said that, once the transition team recommends which EPA units will become part of the central office, the agency will determine which staff will be assigned to the office. She added that staffing decisions will be completed by July 1999 and the office will begin functioning sometime in August 1999. 
The funding needs of the new office were not specified in EPA’s fiscal year 2000 budget request to the Congress because the agency did not have sufficient information on them when the request was submitted in February 1999. The director of the transition team told us that in June 1999 the agency will identify the anticipated resources that will transfer to the new office from various parts of EPA. The agency plans to prepare the fiscal year 2000 operating plan for the office in October 1999, when EPA has a better idea of the resources needed to accomplish the responsibilities that the office will be tasked with during its first year of operation. The transition team’s director told us that decisions on budget allocations are particularly difficult to make at the present time due to the sensitive nature of notifying managers of EPA’s various components that they may lose funds and staff to the new office. Furthermore, EPA will soon need to prepare its budget for fiscal year 2001. According to EPA officials, the Office of the Chief Financial Officer will coordinate a planning strategy this spring that will lead to the fiscal year 2001 annual performance plan and proposed budget, which will be submitted to the Office of Management and Budget by September 1999. The idea of a centralized information office within EPA has been met with enthusiasm in many corners—not only by state regulators, but also by representatives of regulated industries, environmental advocacy groups, and others. Although the establishment of this office is seen as an important step in improving how EPA collects, manages, and disseminates information, the office will face many challenges, some of which have thwarted previous efforts by EPA to improve its information management activities. On the basis of our prior and ongoing work, we believe that the agency must address these challenges for the reorganization to significantly improve EPA’s information management activities. 
Among the most important of these challenges are (1) obtaining sufficient resources and expertise to address the complex information management issues facing the agency; (2) overcoming problems associated with EPA’s decentralized organizational structure, such as the lack of agencywide information dissemination policies; (3) balancing the demand for more data with calls from the states and regulated industries to reduce reporting burdens; and (4) working effectively with EPA’s counterparts in state government. The new organizational structure will offer EPA an opportunity to better coordinate and prioritize its information initiatives. The EPA Administrator and the senior-level officials charged with creating the new office have expressed their intentions to make fundamental improvements in how the agency uses information to carry out its mission to protect human health and the environment. They likewise recognize that the reorganization will raise a variety of complex information policy and technology issues. To address the significant challenges facing EPA, the new office will need significant resources and expertise. EPA anticipates that the new office will substantially improve the agency’s information management activities, rather than merely centralize existing efforts to address information management issues. Senior EPA officials responsible for creating the new office anticipate that the information office will need “purse strings control” over the agency’s resources for information management expenditures in order to implement its policies, data standards, procedures, and other decisions agencywide. For example, one official told us that the new office should be given veto authority over the development or modernization of data systems throughout EPA. 
To date, the focus of efforts to create the office has been on what the agency sees as the more pressing task of determining which organizational components and staff members should be transferred into the new office. While such decisions are clearly important, EPA also needs to determine whether its current information management resources, including staff expertise, are sufficient to enable the new office to achieve its goals. EPA will need to provide the new office with sufficient authority to overcome organizational obstacles to adopt agencywide information policies and procedures. As we reported last September, EPA has not yet developed policies and procedures to govern key aspects of its projects to disseminate information, nor has it developed standards to assess the data's accuracy and mechanisms to determine and correct errors. Because EPA does not have agencywide policies regarding the dissemination of information, program offices have been making their own, sometimes conflicting decisions about the types of information to be released and the extent of explanations needed about how data should be interpreted. Likewise, although the agency has a quality assurance program, there is not yet a common understanding across the agency of what data quality means and how EPA and its state partners can most effectively ensure that the data used for decision-making and/or disseminated to the public is of high quality. To address such issues, EPA plans to create a Quality Board of senior managers within the new office in the summer of 1999. Although EPA acknowledges its need for agencywide policies governing information collection, management, and dissemination, it continues to operate in a decentralized fashion that heightens the difficulty of developing and implementing agencywide procedures. EPA's offices have been given the responsibility and authority to develop and manage their own data systems for the nearly 30 years since the agency's creation. 
Given this history, overcoming the potential resistance to centralized policies may be a serious challenge to the new information office. EPA and its state partners in implementing environmental programs have collected a wealth of environmental data under various statutory and regulatory authorities. However, important gaps in the data exist. For example, EPA has limited data that are based on (1) the monitoring of environmental conditions and (2) the exposures of humans to toxic pollutants. Furthermore, the human health and ecological effects of many pollutants are not well understood. EPA also needs comprehensive information on environmental conditions and their changes over time to identify problem areas that are emerging or that need additional regulatory action or other attention. In contrast to the need for more and better data is a call from states and regulated industries to reduce data management and reporting burdens. EPA has recently initiated some efforts in this regard. For example, an EPA/state information management workgroup looking into this issue has proposed an approach to assess environmental information and data reporting requirements based on the value of the information compared to the cost of collecting, managing, and reporting it. EPA has announced that in the coming months, its regional offices and the states will be exploring possibilities for reducing paperwork requirements for EPA’s programs, testing specific initiatives in consultation with EPA’s program offices, and establishing a clearinghouse of successful initiatives and pilot projects. However, overall reductions in reporting burdens have proved difficult to achieve. For example, in March 1996, we reported that while EPA was pursuing a paperwork reduction of 20 million hours, its overall paperwork burden was actually increasing because of changes in programs and other factors. 
The states and regulated industries have indicated that they will look to EPA’s new office to reduce the burden of reporting requirements. Although both EPA and the states have recognized the value in fostering a strong partnership concerning information management, they also recognize that this will be a challenging task both in terms of policy and technical issues. For example, the states vary significantly in terms of the data they need to manage their environmental programs, and such differences have complicated the efforts of EPA and the states to develop common standards to facilitate data sharing. The task is even more challenging given that EPA’s various information systems do not use common data standards. For example, an individual facility is not identified by the same code in different systems. Given that EPA depends on state regulatory agencies to collect much of the data it needs and to help ensure the quality of that data, EPA recognizes the need to work in a close partnership with the states on a wide variety of information management activities, including the creation of its new information office. Some partnerships have already been created. For example, EPA and the states are reviewing reporting burdens to identify areas in which the burden can be reduced or eliminated. Under another EPA initiative, the agency is working with states to create data standards so that environmental information from various EPA and state databases can be more readily shared. Representatives of state environmental agencies and the Environmental Council of the States have expressed their ideas and concerns about the role of EPA’s new information office and have frequently reminded EPA that they expect to share with EPA the responsibility for setting that office’s goals, priorities, and strategies. 
According to a Council official, the states have had more input to the development of the new EPA office than they typically have had in other major policy issues, and the states view this change as an improvement in their relationship with EPA. Collecting and managing the data that EPA requires to manage its programs have been a major long-term challenge for the agency. The EPA Administrator's recent decision to create a central information office to make fundamental agencywide improvements in data management activities is a step in the right direction. However, creating such an organization from disparate parts of the agency is a complex process, and substantially improving and integrating EPA's information systems will be difficult and will likely require several years. Fully achieving EPA's goals will require high priority within the agency, including appropriate resources and the long-term commitment of senior management.
Pursuant to a congressional request, GAO discussed the Environmental Protection Agency's (EPA) information management initiatives, focusing on the: (1) status of EPA's efforts to create a central office responsible for information management, policy, and technology issues; and (2) major challenges that the new office needs to address to achieve success in collecting, using, and disseminating environmental information. GAO noted that: (1) EPA estimates that its central information office will be operational by the end of August 1999 and will have a staff of about 350 employees; (2) the office will address a broad range of information policy and technology issues, such as improving the accuracy of EPA's data, protecting the security of information that EPA disseminates over the Internet, developing better measures to assess environmental conditions, and reducing information collection and reporting burdens; (3) EPA recognizes the importance of developing an information plan showing the goals of the new office and the means by which they will be achieved but has not yet established milestones or target dates for completing such a plan; (4) although EPA has made progress in determining the organizational structure for the new office, it has not yet finalized decisions on the office's authorities, responsibilities, and budgetary needs; (5) EPA has not performed an analysis to determine the types and the skills of employees that will be needed to carry out the office's functions; (6) EPA officials told GAO that decisions on the office's authorities, responsibilities, budget, and staff will be made before the office is established in August 1999; (7) on the basis of GAO's prior and ongoing reviews of EPA's information management problems, GAO believes that the success of the new office depends on EPA addressing several key challenges as it develops an information plan, budget, and organizational structure for that office; and (8) most importantly, EPA needs to: (a) 
provide the office with the resources and the expertise necessary to solve the complex information management, policy, and technology problems facing EPA; (b) empower the office to overcome organizational challenges to adopting agencywide information policies and procedures; (c) balance EPA's need for data on health, the environment, and program outcomes with the call from the states and regulated industries to reduce their reporting burdens; and (d) work closely with its state partners to design and implement improved information management systems.
While Native American veterans are geographically dispersed throughout the United States, the West and South regions contain the majority of the Native American veteran population, according to Census data. Some Native American veterans are members of the 566 federally recognized tribes that are distinct, independent political communities that possess certain powers of self-government, which we refer to as tribal sovereignty. Specifically, federally recognized tribes have government-to-government relationships with the United States, and are eligible for certain funding and services provided by the United States. In addition, some Native American veterans are members of the more than 400 Indian groups that are not recognized by the federal government (which we refer to in this report as non–federally recognized tribes). Many—but not all—Native American veterans are dually eligible for health care services in VA and IHS. For example, a veteran who is a member of a non–federally recognized tribe may be eligible for VA health care services, but would not be eligible for IHS health care services. VA is charged with providing health care services to the nation's veterans, and estimates that it will serve 6.3 million patients in fiscal year 2013. VA's fiscal year 2012 budget for medical care was approximately $54 billion. The department provides health care services at VA-operated facilities and through agreements with non-VA providers. Veterans who served in the active military, naval or air service and who were discharged or released under conditions other than dishonorable are generally eligible for VA health care. IHS is charged with providing health care to the approximately 2.1 million eligible Native Americans. IHS's fiscal year 2012 budget for medical care was approximately $3.9 billion. Similarly to VA, IHS provides health care services at IHS-operated facilities through direct care and pays for services from external providers through contract health services. 
In addition to IHS-operated facilities, some federally recognized tribes choose to operate their own health care facilities, which receive funding from IHS. Like their IHS-operated counterparts, tribally operated facilities provide direct care services and pay for contract health services. IHS also provides funding through grants and contracts to nonprofit urban Native American organizations through the Urban Indian Health program in order to provide health care services to Native Americans living in urban areas. In 2003, VA and IHS signed an MOU to facilitate collaborative efforts in serving Native American veterans eligible for health care in both systems. In 2010, the agencies developed a more detailed MOU to further these efforts. The 2010 MOU contains provisions related to several areas of collaboration, including actions related to the following: Joint contracts and purchasing agreements: Development of standard, preapproved language for inclusion of one agency into contracts and purchasing agreements developed by the other agency; and processes to share information about sharing opportunities in early planning stages. Sharing staff: Establishment of joint credentialing and privileging, sharing specialty services, and arranging for temporary assignment of IHS Public Health Service commissioned officers to VA. Electronic Health Record (EHR) access: Establishment of standard mechanisms for VA providers to access records in IHS and tribally operated facilities, and vice versa, for patients receiving care in both systems. Reimbursement: Development of payment and reimbursement policies and mechanisms to support care delivered to dually eligible Native American veterans. Executive Order 13175, issued on November 6, 2000, required federal agencies to establish regular and meaningful consultation and collaboration with Indian tribe officials in the development of federal policies that have tribal implications. 
IHS issued a tribal consultation policy in 2006 to formalize the requirement to seek consultation and participation by Indian tribes in policy development and program activities. According to the policy, IHS will consult with Indian tribes to the extent practicable and permitted by law before any action is taken that will significantly affect Indian tribes. In November 2009, a Presidential Memorandum directed federal agencies to develop plans, after consultation with Indian tribes and tribal officials, for implementing the policies and directives of Executive Order 13175. VA’s plan included development of a tribal consultation policy, which the agency released in February 2011. VA’s tribal consultation policy asserts that VA will establish meaningful consultation to develop, improve, or maintain partnerships with tribal communities. The policy states that consultation should be conducted before actions are taken but acknowledges there may not always be “sufficient time or resources to fully consult” on an issue. In past work we have reported on key practices to enhance and sustain interagency collaboration including agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; and developing mechanisms to monitor, evaluate, and report on results. Additionally, our past work has identified a range of mechanisms that the federal government uses to lead and implement interagency collaboration. We found that regardless of the mechanisms used, there are key actions the government can take, including (1) having clear goals; (2) ensuring relevant participants are included in collaboration; and (3) specifying the resources—human, information, technology, physical, and financial—needed to initiate or sustain the collaboration. 
We have also found in past work on leading public-sector organizations and agency strategic planning that it is important to (1) define clear missions and desired outcomes; (2) use performance measures that are tangible, measurable, and clearly related to goals to gauge progress; and (3) use performance information as a basis for decision making. Finally, internal control standards emphasize the importance of effective external communications that occur with groups that can have a serious effect on programs, projects, operations, and other activities, including budgeting and financing. VA and IHS have documented common goals in their MOU, created 12 workgroups that are tasked with developing strategies to address the goals of the MOU, and created a Joint Implementation Task Force to coordinate tasks, develop implementation policy, and develop performance metrics and timelines—actions that are consistent with those we have found enhance and sustain agency collaboration. However, most of the performance metrics developed by VA and IHS to monitor the implementation of the MOU need to be more clearly related to the goals of the MOU in order to allow the agencies to gauge progress toward MOU goals. Consistent with our past work on practices that can enhance and sustain collaboration, VA and IHS have defined common goals for implementing the MOU and developed specific strategies the agencies plan to take to achieve them. Table 1 summarizes the five goals in the 2010 MOU and selected strategies for implementing them. VA and IHS have created two mechanisms to implement the MOU— workgroups and a Joint Implementation Task Force. We have reported that MOUs are most effective when they are regularly updated and monitored, actions that can be achieved by workgroups and task forces. 
VA and IHS created 12 workgroups tasked with responsibility for implementing and developing strategies to address the goals of the MOU, such as achieving interoperability of health information technology, developing payment and reimbursement agreements, and sharing care processes, programs, and services. Each workgroup includes members from VA and IHS, a step that can foster mutual trust across diverse agency cultures and facilitate frequent communication across agencies to enhance shared understanding of collaboration goals, according to our previous work on interagency collaboration. According to VA and IHS officials, most of the workgroup members volunteered to serve on the workgroups and were self-selected, and VA officials told us that they have consulted with tribes on how to increase tribal participation in the workgroups. The agencies also told us that some workgroup members were asked to participate because of their subject-matter expertise. Goals established by each workgroup appear to be aligned with MOU goals. Specifically, all eight of the workgroups we interviewed described goals that were consistent with the MOU goals. Table 2 summarizes the goals of each workgroup we interviewed and provides a crosswalk between workgroup goals and the corresponding MOU goal or strategy. We did not interview 4 workgroups because they did not directly relate to our objectives: (1) Services and Benefits; (2) New Technologies; (3) Cultural Competency and Awareness; and (4) Emergency and Disaster Preparedness. VA and IHS created the Joint Implementation Task Force to oversee the overall implementation of the MOU. This task force comprises officials from both agencies, including officials from the Office of the Secretary of Veterans Affairs, the IHS Chief Medical Officer, and the director of VA's Office of Tribal Government Relations, and is scheduled to meet quarterly. 
It develops implementation policy and procedures for policy-related issues identified by the workgroups; creates performance metrics and timelines and evaluates progress; and compiles an annual report on progress in MOU implementation. Creating a mechanism, such as a task force, intended not only to address issues arising from potential incompatibility of standards and policies across agencies but also to monitor, evaluate, and report on MOU results, can help to facilitate collaboration, according to our previous work on interagency collaboration. The process developed by the Joint Implementation Task Force to monitor the implementation of the MOU includes obtaining data on three performance metrics; however, two of the three metrics do not allow the agencies to measure progress toward the MOU's goals. Our previous work has found that successful performance metrics should be tangible and measurable, clearly aligned with specific goals, and demonstrate the degree to which desired results are achieved. Although all three of the performance metrics are tangible and measurable, only one is also clearly aligned with a specific goal and defined in a manner that would allow the agencies to adequately measure the degree to which desired results are achieved. The other two metrics are inadequate because their connection to a specific goal is not clear and they lack qualitative measures that would allow the agencies to measure the degree to which desired results are achieved. For example, one MOU goal is to increase access to and improve quality of health care services, but none of the metrics mention any targets specifically linked to increased access or improved quality of care. Another goal is to establish effective partnerships and sharing agreements among the agencies and the tribes in support of Native American veterans. 
Although one of the metrics appears to be related to this goal, in that it is focused on measuring the number of outreach activities that are a result of partnerships, it lacks measures to determine how well the outreach activities are meeting the goal of establishing effective partnerships or other potential goals to which the outreach may contribute, such as facilitating communication among VA, IHS, veterans, and tribally operated facilities. The metrics would therefore not enable VA and IHS to determine how well these specific goals are being achieved. Table 3 describes the performance metrics and performance measures and our evaluation of them. Using these metrics, the agencies have issued MOU progress reports, but the metrics included in the reports generally are not clearly tied specifically to the goals of the MOU, nor do they allow the agencies to determine how well MOU goals have been achieved. Leading public-sector organizations have found that metrics that are clearly linked to goals and allow determination of how well goals are achieved are key steps to becoming more results-oriented. For example: According to the agencies' fiscal year 2011-2012 metrics report, for Metric 1 (programs increased or enhanced as a result of the MOU), more than 15 programs were enhanced or increased as the result of the MOU, and 440 events and activities occurred that increased or enhanced the programs. The report then provides examples of programs that have been enhanced, such as a care coordination program in which a registered nurse "works with Indian Health, Tribal Programs, and other agencies and hospitals through direct meetings at various facilities to ensure communication and improved care." However, the report does not always describe information that would allow the agencies to determine how well each activity contributes to meeting MOU goals. 
For instance, in the description of an enhanced care coordination program noted above, the report does not indicate how the agencies determined that communication has improved among participants. Absent this information, it is not clear how the agencies could draw conclusions about whether improved communication has actually been facilitated and therefore how well the activity contributed to meeting the MOU goal of promoting patient- centered collaboration and facilitating communication. According to the metrics report, for Metric 2 (outreach activities increased or enhanced as a result of MOU partnerships), eight types of activities were increased or enhanced. However, the report lists only seven types of outreach and does not include enough information to determine how well the outreach contributes to meeting MOU goals. For example, one outreach activity cited in the report, “Outreach to promote implementation of new technologies,” includes the activity “VA Office of Telehealth Services (OTS) Coordinator participated in Web-ex sessions with IHS on use of technology to improve patient care.” Although not stated in the report, this activity appears to help implement the MOU strategy of enhancing access through the development and implementation of new models of care using new technologies, including telehealth, related to the MOU goals of promoting patient-centered care and increasing access to care. However, while outreach activities are measurable and tangible, and might help to achieve goals of the MOU, the report does not state how the agencies will determine whether the sessions actually were effective in improving patient care or increasing access, information that is necessary to allow the agencies to tell how well the activity helps achieve the MOU goals. 
For each metric, the agencies report whether the activities “met the purpose of the MOU,” “met the intent of the MOU,” and whether the “level of VA-IHS-Tribal participation” was poor, fair, good, or excellent. While determining whether the agencies’ activities meet the purpose and intent of the MOU is a critical step, and obtaining tribal participation is consistent with MOU goals, the report does not describe how these determinations were made. Agency officials told us that these determinations were made subjectively by each workgroup while keeping in mind the goals and strategies in the MOU. The weaknesses we found in these performance metrics could limit the ability of VA and IHS managers to gauge progress and make decisions about whether to expand or modify programs or activities, because the agencies will not have information on how well programs are supporting MOU goals. VA and IHS officials told us that they developed these performance metrics because the initial performance metrics, drafted by the workgroups themselves and other VA and IHS staff, varied in quality. The three metrics and measures were intended to provide some simple, measurable ways for workgroups to report on their progress. However, they also acknowledged that there were weaknesses in the measures and told us that refining these performance metrics is a priority. According to the officials, they plan to revise workgroup metrics by April 2013 and on a continuous basis going forward. In doing so, they plan to consult subject-matter experts and existing VA and IHS performance metrics, for example, prevention of hospital admissions in home-based primary care programs. Mainly because of the large number of diverse tribal communities and tribal sovereignty, VA and IHS face unique challenges associated with coordinating and communicating to implement the MOU. 
VA and IHS have processes in place for consulting with tribes, but these measures fall short in several respects and do not ensure such consultation is effective. VA and IHS officials told us the large number (566) of federally recognized tribes and differing customs and policy-making structures present logistical challenges in widespread implementation of the MOU within tribal communities. For instance, according to some VA officials, in some tribes as a matter of protocol, an agency must be invited on tribal lands or be sponsored by a council member in order to address a tribal council. Such a policy could add administrative processes that might delay implementation and require greater sensitivity from agency officials, adding to the challenge of consulting with tribes. As another example, the title or position of the tribal person designated to make decisions regarding health care may differ from tribe to tribe, complicating the decision-making process among VA, IHS, and tribes. VA officials told us in some tribes, for example, a tribal leader may have several roles, only one of which is making decisions on health care, whereas in other tribes there may be a tribal health director whom the tribal leader has designated to manage health care in the tribal community. Potentially, these differences can affect the speed and degree at which collective decisions can be made. In addition, VA and IHS officials noted that tribal sovereignty further adds to the logistical complexity of the efforts of the agencies to implement the MOU. Tribal sovereignty includes the inherent right to govern and protect the health, safety, and welfare of tribal members. Indian tribes have a legal and political government-to-government relationship with the federal government, meaning federal agencies interact with tribes as governments, not as special interest groups or individuals. 
VA and IHS officials told us that because of tribal sovereignty, tribally operated facilities may choose whether or not to participate in a particular opportunity for collaboration related to the MOU, which makes it challenging to achieve some of the goals of the MOU. VA and IHS can inform tribes of an opportunity but cannot require them to participate. For example:
● In order to meet the MOU goal to establish standard mechanisms for access to electronic health record (EHR) information for shared patients, VA and IHS have coordinated to adapt their information technology systems to allow them both to participate in the eHealth Exchange, a national effort led by the Department of Health and Human Services for sharing EHR information. However, EHR workgroup members told us that some tribally operated facilities have opted to use an off-the-shelf product in place of the IHS system, which the workgroup members do not have the resources to support.
● In another instance, as a part of their efforts to meet the MOU goal to establish effective partnerships and sharing agreements, VA and IHS are working to implement VA’s Consolidated Mail Outpatient Pharmacy (CMOP) throughout IHS. Workgroup members assigned to these activities said they plan to implement the program in all IHS-operated facilities by spring 2013 but cannot require tribally operated facilities to participate. Some smaller tribal communities with more limited postal access are not interested in using the CMOP program, according to the workgroup members.
VA and IHS communicate MOU-related information to the tribes through written correspondence, in-person meetings, and other steps, consistent with internal controls calling for effective external communications with groups that can have a serious effect on programs and other activities. However, according to tribal stakeholders we interviewed, these methods of consultation have not always met the needs of the tribal communities, and the agencies have acknowledged that effective consultation has been challenging. VA and IHS send written correspondence (known as “Dear Tribal Leader” letters) regarding the MOU to tribal communities. However, the agencies have acknowledged that because of the large and diverse nature of the tribes, they have struggled to reach the tribal member designated to make health care decisions with information about the MOU. Both VA officials and members of tribal communities told us that, because tribal leaders are not always the tribal person designated to make decisions regarding health care, the “Dear Tribal Leader” letters may not always make their way to tribal members designated to take action on health care matters. VA officials told us that their formal consultation is conducted with tribal leaders. However, these officials also noted that, in addition to the letters sent to tribal leaders, they have a network of contacts within each tribe that includes, among others, tribal health directors, and this network receives concurrent notice of communication with tribal leaders via conference calls, listservs, and newsletters. IHS officials said sometimes, in addition to the tribal leader, they may also send letters to, or otherwise communicate directly with, tribal health program directors if they know of them. However, they also noted they do not maintain a specific record—such as a listserv—of tribal health program directors.
Without reaching the tribal members responsible for decision-making on health care matters, VA and IHS may not always be effectively communicating with tribes about the status of the MOU and its related activities or obtaining tribal feedback that is critical to implementation of the MOU. Likewise, seven tribal stakeholders we spoke with echoed the concerns VA and IHS acknowledged regarding the “Dear Tribal Leader” letters. For example, one tribal stakeholder said letters should go to a specific person, such as a tribal health director, to ensure that the information is seen by the right people in a timely manner. It may take the tribes time to pass along letters sent only to tribal leaders to the tribal health director or other appropriate people, by which point any deadlines included in the correspondence could be missed. Once the information has reached the tribal leader, tribes bear the responsibility to ensure it is passed on to the appropriate audience in a timely manner. Another concern tribal stakeholders we spoke with expressed about the written correspondence was that the agencies sometimes use the letters to simply inform them of steps the agencies have taken without consulting the tribes, as called for by the agencies’ tribal consultation policies. For example, some tribal stakeholders said VA and IHS did not include them in the original development of the 2010 MOU, even though the goals and activities in the MOU could directly affect them. According to 10 of the tribal stakeholders we spoke with, tribes should have been included in developing the MOU, which addresses proposed plans, policies, and programmatic actions that may affect tribes. For example, the MOU seeks to improve delivery of health care by developing and implementing new models of care using new technologies, including telehealth services such as telepsychiatry. Instead, the agencies solicited tribal comments after the agencies had signed the MOU.
According to two tribal stakeholders, the agencies were not responsive to the comments provided on the MOU. One stakeholder said their comments were neither acknowledged upon receipt nor followed up on by IHS. The stakeholder suggested IHS designate a point person to track feedback and ensure follow-up. VA and IHS officials told us that they did not hold tribal consultation meetings before the signing of the MOU because they viewed the MOU as an agency-to-agency agreement rather than as an agreement between the agencies and the various tribes. VA and IHS officials said they hold quarterly meetings with tribal communities and also attend events, such as conferences held by Native American interest organizations. Three tribal stakeholders told us that when the agencies have held consultation meetings, the meetings are not interactive enough—stating that agency officials speak for the majority of the time—and that VA does not provide enough information prior to these meetings. These tribal stakeholders said providing information ahead of time could allow tribes to better prepare for meetings, discuss issues as a tribe beforehand, and determine which tribal members should attend. If tribal officials with the authority and desire to work with VA and IHS do not receive needed information on opportunities because of an ineffective consultation process, local facility leadership may not have readily available access to information necessary to examine which collaborative opportunities are present, and thus VA and IHS may be hindered in their efforts to coordinate health care for Native American veterans.

VA and IHS are undertaking other efforts designed to enhance consultation with tribes. These include the following steps:
● In January 2011, VA established the Office of Tribal Government Relations (OTGR) to serve as the point of contact for tribes. According to VA officials, this office conducted four consultation meetings in 2012 and employed five field staff to help manage communication with tribal communities and to work with IHS on local MOU implementation efforts.
● In February 2011, VA released the agency’s tribal consultation policy. VA officials said they are developing a report that will explain the process for evaluating comments from tribes and making decisions based on them. The officials expect the report to be released to the public in the spring of 2013.

The agencies have also made local efforts to communicate with tribes, which have led to some success. For example, agency officials and tribal stakeholders noted that the workgroup assigned to implement MOU activities in Alaska used successful methods for working with tribes. The Alaska workgroup told us they cultivated a relationship with an Alaskan tribal health organization in order to get advice on the appropriate customs for consulting with individual tribes there. In addition, the workgroup said they scheduled consultation meetings in conjunction with other meetings, which would limit the amount of travel tribal community members would need to undertake. VA employees also took cultural awareness training, and VA officials visited Alaska to demonstrate the agency’s dedication to providing care to Native American veterans, which, according to the workgroup, led to buy-in from tribal communities. VA and Alaskan tribes have signed 26 reimbursement agreements.

Some tribal stakeholders that we spoke with have acknowledged the steps taken by the agencies thus far as positive but in some cases expressed concerns regarding tribal consultation. In the case of the tribes working with the Alaska workgroup, one stakeholder praised VA’s efforts to work with tribal health organizations to communicate with tribes.
In another example, two tribal stakeholders said they approved of OTGR’s establishment as an office dedicated to Native American veterans’ issues. However, four tribal stakeholders expressed concerns that, despite the creation of OTGR, VA still has not always been effective in its efforts to consult with tribes or be responsive to tribal input provided during consultation. For example, one stakeholder questioned whether consultation was done with every tribe and described VA’s consultation process as sporadic. This stakeholder’s concern implies that VA’s outreach efforts may not be systematically reaching all tribal communities. However, VA officials told us that, in addition to issuing notices in the Federal Register and Dear Tribal Leader letters, they have a systematic process of hosting training summits for tribes and scheduling regular conference calls and presentations to tribal leadership. In another instance, one tribal community member said OTGR lacks—and thus cannot disseminate to tribes—the technical knowledge necessary for tribes to partner with VA on activities such as negotiating reimbursement agreements. VA officials noted that OTGR staff may not always be technical experts on a given topic but said they are able to identify those experts and play a key role in linking tribes with them.

Coordination between VA and IHS is essential to ensuring that high-quality health care is provided to dually eligible Native American veterans. While the 2010 MOU includes common goals that should facilitate agency coordination, and the agencies have created workgroups tasked to implement the MOU, we found that a critical mechanism for monitoring the implementation of the MOU, the agreement’s performance metrics, has weaknesses. Specifically, the inadequacies we found in performance metrics could limit the agencies’ ability to measure progress towards MOU goals and ultimately make decisions about programs or activities.
Overcoming the challenges related to working with a large number of diverse, sovereign tribes is also essential to successfully achieving the goals of the MOU. Although steps have been taken to consult with tribes regarding the MOU and related activities, consultation has not always been effective in assuring that the people designated to make health care decisions in each tribe are reached and tribes are included in planning and implementation efforts. Ineffective consultation with tribal communities could delay or limit potential VA, IHS, and tribal community partnerships to achieve the goals of the MOU and could hinder agency efforts to gain support for MOU activities and address the health care needs of Native American veterans.

To ensure the health care needs of Native American veterans are addressed most efficiently and effectively, we recommend that the Secretary of Veterans Affairs and Secretary of Health and Human Services take the following two actions:
● As the agencies move forward with revising the MOU’s performance metrics and measures, ensure that the revised metrics and measures allow decision makers to gauge whether achievement of the metrics and measures supports attainment of MOU goals.
● Develop processes to better ensure that consultation with tribes is effective, including the following:
  ● A process to identify the appropriate tribal members with whom to communicate MOU-related information, which should include methods for keeping such identification up-to-date.
  ● A process to clearly outline and communicate to tribal communities the agencies’ response to tribal input, including any changes in policies and programs or other effects that result from incorporating tribal input.
  ● A process to establish timelines for releasing information to tribal communities to ensure they have enough time to review and provide input or, in the case of meetings, determine the appropriate tribal member to attend the event.
We provided draft copies of this report to VA and the Department of Health and Human Services for review. Both agencies concurred with our recommendations. In addition, VA provided us with comments on the draft report, which we have reprinted in appendix I, as well as general and technical comments, which were incorporated in the draft as appropriate.

We are sending copies of this report to appropriate congressional committees; the Secretary of Veterans Affairs; the Secretary of Health and Human Services; and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

In addition to the contact named above, Gerardine Brennan, Assistant Director; Jennie Apter; Lori Fritz; Hannah Marston Minter; and Lisa Motley made key contributions to this report.
Native Americans who have served in the military may be eligible for health care services from both VA and IHS. To enhance health care access and the quality of care provided to Native American veterans, in 2010, these two agencies renewed and revised an MOU designed to improve their coordination and resource sharing related to serving these veterans. GAO was asked to examine how the agencies have implemented the MOU. This report examines: (1) the extent to which the agencies have established mechanisms through which the MOU can be implemented and monitored; and (2) key challenges the agencies face in implementing the MOU and the progress made in overcoming them. To conduct this work, GAO interviewed VA and IHS officials and reviewed agency documents and reports. GAO also obtained perspectives of tribal communities through attendance at two tribal conferences; interviews with tribal leaders and other tribal members, including veterans; and interviews with other stakeholders, such as health policy experts and consultants.

The Department of Veterans Affairs (VA) and the Indian Health Service (IHS) have developed mechanisms to implement and monitor their memorandum of understanding (MOU); however, the performance metrics developed to assess its implementation do not adequately measure progress made toward its goals. VA and IHS have defined common goals for implementing the MOU and developed strategies to achieve them. They have also created two mechanisms to implement the MOU—12 workgroups with members from both agencies to address the goals of the MOU, and a Joint Implementation Task Force, composed of VA and IHS officials, to oversee the MOU's implementation. These steps are consistent with practices that GAO has found enhance and sustain agency collaboration. The agencies have also developed three metrics aimed at measuring progress toward the MOU's goals.
However, two of the three metrics are inadequate because their connection to any specific MOU goal is not clear and, while they include quantitative measures that tally the number of programs and activities increased or enhanced as a result of the MOU, they lack qualitative measures that would allow the agencies to assess the degree to which the desired results are achieved. The weaknesses in these metrics could limit the ability of VA and IHS managers to gauge progress and make decisions about whether to expand or modify their programs and activities.

VA and IHS face unique challenges associated with consulting with a large number of diverse, sovereign tribes to implement the MOU, and lack fully effective processes to overcome these complexities. VA and IHS officials told us the large number of federally recognized tribes (566) and their differing customs and policy-making structures present logistical challenges in widespread implementation of the MOU within tribal communities. They also told us that tribal sovereignty—tribes' inherent right to govern and protect the health, safety, and welfare of tribal members—adds further complexity because tribes may choose whether or not to participate in MOU-related activities. Consistent with internal controls, VA and IHS have processes in place to consult with tribes on MOU-related activities through written correspondence and in-person meetings. However, according to tribal stakeholders GAO spoke with, these processes are often ineffective and have not always met the needs of the tribes, and the agencies have acknowledged that effective consultation has been challenging. For example, one tribal community expressed concern that agency correspondence is not always timely because it is sent to tribal leaders who are sometimes not the tribal members designated to take action on health care matters.
Similarly, some tribal stakeholders told GAO that the agencies have not been responsive to tribal input and that sometimes they simply inform tribes of steps they have taken without consulting them. VA and IHS have taken steps to improve consultation with tribes. For example, VA has established an Office of Tribal Government Relations, through which it is developing relationships with tribal leaders and other tribal stakeholders. Additionally, in Alaska, VA has been consulting with a tribal health organization for insight on reaching tribes. However, given the concerns raised by the tribal stakeholders GAO spoke with, further efforts may be needed to enhance tribal consultation to implement and achieve the goals of the MOU. GAO recommends that the agencies take steps to improve the performance metrics used to assess MOU implementation and to develop better processes to consult with tribes. VA and the Department of Health and Human Services agreed with these recommendations.
The federal government collects, generates, and uses large amounts of information in electronic form, from enormous geographic databases to individual e-mails. Much of that information can constitute official federal records, and agencies must have ways to manage such records. Under the Federal Records Act, each federal agency is required to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. If these records are not effectively managed, individuals might lose access to benefits to which they are entitled, the government could be exposed to legal liabilities, and historical records of vital interest could be lost forever. In addition, agencies with poorly managed records risk increased costs when attempting to search their records in response to Freedom of Information Act requests or litigation-related discovery actions. Finally, without effective management of the documentation of government actions, the ability of the people to hold the government accountable is jeopardized. Effective records management is also an important tool for efficient government operation. Without adequate and readily accessible documentation, agencies may not have access to important operational information to make decisions and carry out their missions. Accordingly, to ensure that they have appropriate recordkeeping systems with which to manage and preserve their records, agencies are required to develop records management programs. These programs are intended, among other things, to provide for accurate and complete documentation of the policies and transactions of each federal agency, to control the quality and quantity of records they produce, and to provide for judicious preservation and disposal of federal records.
Among the activities of a records management program are identifying records and sources of records and providing records management guidance, including agency-specific recordkeeping practices that establish what records need to be created in order to conduct agency business. Under the Federal Records Act and the regulations issued by NARA, records must be effectively managed throughout their life cycle, which includes records creation or receipt, maintenance and use, and disposition. Agencies create records to meet the business needs and legal responsibilities of federal programs and (to the extent known) the needs of internal and external stakeholders who may make secondary use of the records. To maintain and use the records created, agencies are to establish internal recordkeeping requirements for maintaining records, consistently apply these requirements, and establish systems that allow them to find records that they need. Disposition involves transferring records of permanent, historical value to NARA for archiving and destroying all other records that are no longer needed for agency operations. One key records management process is scheduling, the means by which NARA and agencies identify federal records and determine time frames for disposition. Creating records schedules involves identifying and inventorying records, appraising their value, determining whether they are temporary or permanent, and determining how long records should be kept before they are destroyed or turned over to NARA for archiving. For example, one general records schedule permits civilian agencies to destroy case files for merit promotions (2 years after the personnel action is completed, or after an audit by the Office of Personnel Management, whichever is sooner). No record may be destroyed or permanently transferred to NARA unless it has been scheduled, so the schedule is of critical importance. 
Without schedules, agencies would have no clear criteria for when to dispose of records and, to avoid disposing of them unlawfully, would have to maintain them indefinitely. Scheduling records, electronic or otherwise, requires agencies to invest time and resources to analyze the information that an agency receives, produces, and uses to fulfill its mission. Such an analysis allows an agency to set up processes and structures to associate records with schedules and other information (metadata) to help it find and use records during their useful lives and dispose of those no longer needed. Records schedules are based on content and are media-neutral; that is, electronic records are classified on the same basis—by content—as physical records. In addition, agencies are to compile inventories of their information systems, after which the agency is required to develop a schedule for the electronic records maintained in those systems. NARA also has responsibilities related to scheduling records. NARA works with agencies to help schedule records, and it must approve all agency records schedules. It also develops and maintains general records schedules covering records common to several or all agencies. According to NARA, records covered by general records schedules make up about a third of all federal records. For the other two thirds, NARA and the agencies must agree upon agency-specific records schedules. Under the Federal Records Act, NARA is given general oversight responsibilities for records management as well as general responsibilities for archiving—the preservation in the National Archives of the United States of permanent records documenting the activities of the government. Of the total number of federal records, less than 3 percent are permanent. (However, under the act and other statutes, some of the responsibilities for oversight over federal records management are divided across several agencies.
Under the Federal Records Act, NARA shares a number of records management responsibilities and authorities with the General Services Administration (GSA). The Office of Management and Budget (OMB) also has records management oversight responsibilities under the Paperwork Reduction Act and the E-Government Act.) For records management, NARA is responsible for issuing guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; approving the disposition (destruction or preservation) of records; and providing storage facilities for agency records. The act also gives NARA the responsibility for conducting inspections or surveys of agency records and records management programs. Historically, despite the requirements of the Federal Records Act, records management has received low priority within the federal government. As early as 1981, in a report entitled Federal Records Management: A History of Neglect, we stated that “persistent records management shortcomings” had been attributed to causes that included “lack of commitment by top management, emphasis on agency missions, and the low priority of records management.” Almost 30 years later, the priority problem has remained remarkably persistent. For instance, a 2001 study prepared for NARA by SRA International, Inc., on perceptions in the federal government with respect to records management, concluded that recordkeeping and records management in general receive low priority, as evidenced by lack of staff or budget resources, absence of up-to-date policies and procedures, lack of training, and lack of accountability. This assessment also concluded that although agencies were creating and maintaining records appropriately, most electronic records remained unscheduled, and records of historical value were not being identified and provided to NARA for archiving.
In 2002, drawing on the 2001 study, we reported that the low priority given to records management programs was a factor in program weaknesses. We noted that records management is generally considered a “support” activity. Because support functions are typically the most dispensable in agencies, resources for and focus on these functions are often limited. In 2008, we reported on weaknesses in federal e-mail management at four agencies. The four agencies reviewed generally managed e-mail records through paper-based processes, rather than using electronic recordkeeping. (A transition to electronic recordkeeping was under way at one of the four agencies, and two had long-term plans to use electronic recordkeeping.) We attributed weaknesses in agency e-mail management (such as senior officials not conforming to regulations) to factors including insufficient training and oversight regarding recordkeeping practices (as well as the onerousness of handling large volumes of e-mail)—similar to the effects of low priority described by SRA. Accordingly, we recommended that agencies with weaknesses in oversight, policies, and practices develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that would be adequate to ensure that policies were effective and that staff were adequately trained and were implementing policies appropriately. Further evidence of the persistence of the priority issue was provided in 2008, when NARA surveyed federal senior managers about their perception of records management. According to the survey, only 64 percent of managers saw records management as a useful tool for mitigating risk. In April 2010, NARA released a report on its first annual records management self-assessment, which analyzed responses to a survey sent in September 2009 to 245 federal cabinet-level agencies, agency components, and independent agencies.
According to NARA, the survey results showed that almost 80 percent of agencies were at moderate to high risk of improper disposition of records. For example, the survey found that not all agencies had appropriate policies in place for handling e-mail, and that only a little over half of the responding agencies had training in place for high-level executives and political appointees on how to manage e-mail; this is consistent with the finding in our 2008 report on e-mail practices regarding insufficient training and oversight of recordkeeping practices. NARA rated almost half of the responding agencies (105 of 221) as high risk in the area of e-mail. NARA’s survey also indicated, among other things, that a large proportion of agencies have not scheduled existing systems that contain electronic records. In December 2005, NARA issued a bulletin requiring agencies to have NARA-approved records schedules for all records in existing electronic information systems by September 30, 2009. However, 27 percent of agencies responding to NARA’s September 2009 agency self-assessment survey indicated that fewer than half of their electronic systems were scheduled. Such large numbers of unscheduled systems are a problem for agencies because their records cannot legally be disposed of, with the consequent increases in cost and risk mentioned earlier. NARA concluded that the varying levels of agency compliance with its records management regulations and policies have implications for the government’s effectiveness and efficiency in conducting its business, protecting citizens’ rights, assuring government accountability, and preserving our national history.
The Federal Records Act gave NARA responsibility for oversight of agency records management programs by, among other functions, making it responsible for conducting inspections or surveys of agencies’ records and records management programs and practices; conducting records management studies; and reporting the results of these activities to the Congress and OMB. We have made recommendations to NARA in previous reports that were aimed at improving NARA’s insight into the state of federal records management as a basis for determining where its attention is most needed. In 1999, in reporting on the substantial challenge of managing and preserving electronic records in an era of rapidly changing technology, we noted that NARA did not have governmentwide data on the electronic records management capabilities and programs of all federal agencies. Accordingly, we recommended that NARA conduct a governmentwide survey of these programs and use the information as input to its efforts to reengineer its business processes. However, instead of doing a governmentwide baseline assessment survey as we recommended, NARA planned to obtain information from a limited sample of agencies, stating that it would evaluate the need for such a survey later. In 2002, we reported that because NARA did not perform systematic inspections of agency records management, it did not have comprehensive information on implementation issues and areas where guidance needed strengthening. We noted that in 2000, NARA had suspended agency evaluations (inspections) because it considered that these reached only a few agencies, were often perceived negatively, and resulted in a list of records management problems that agencies then had to resolve on their own. However, we concluded that the new approach that NARA initiated (targeted assistance) did not provide systematic and comprehensive information for assessing progress over time. 
(Only agencies requesting assistance were evaluated, and the scope and focus of the assistance were determined not by NARA but by the requesting agency.) Accordingly, we recommended that it develop a strategy for conducting systematic inspections of agency records management programs to (1) periodically assess agency progress in improving records management programs and (2) evaluate the efficacy of NARA’s governmentwide guidance. In response to our recommendations, NARA devised a strategy for a comprehensive approach to improving agency records management that included inspections and identification of risks and priorities. Subsequently, it also developed an implementation plan that included undertaking agency inspections based on a risk-based model, government studies, or media reports. In 2008, we reported that under its oversight strategy, NARA had performed or sponsored six records management studies in the previous 5 years, but it had not conducted any inspections since 2000, because it used inspections only to address cases of the highest risk, and no recent cases met its criteria. In addition, NARA’s reporting to the Congress and OMB did not consistently provide evaluations of responses by federal agencies to its recommendations, as required, or details on records management problems or recommended practices that were discovered as a result of inspections, studies, or targeted assistance projects. Accordingly, we recommended that NARA develop and implement an oversight approach that provides adequate assurance that agencies are following NARA guidance, including both regular assessments of agency records and records management programs and reporting on these assessments. NARA agreed with our recommendations and devised a strategy that included annual self-assessment surveys, inspections, and reporting. It has now begun implementing that strategy, having released the results of its first governmentwide self-assessment survey, as mentioned earlier. 
As we have previously reported, electronic records pose major management challenges: their volume, their complexity, and the increasingly decentralized environment in which they are created. E-mail epitomizes the challenge, as it is not only voluminous and complex, but also ubiquitous. ● Huge volumes of electronic information are being created. Electronic information is increasingly being created in volumes that pose a significant technical challenge to our ability to organize it and make it accessible. An example of this growth is provided by the difference between the digital records of the George W. Bush administration and those of the Clinton administration: NARA has reported that the Bush administration transferred 77 terabytes of data to the Archives on leaving office, which was about 35 times the amount of data transferred by the Clinton administration. Another example is the Department of Energy’s National Energy Research Scientific Computing Center, which said that, as of January 2009, it had over 3.9 petabytes of data (that is, about 4,000,000,000,000,000 bytes) in over 66 million files and that the volume of data in storage doubles almost every year. ● Electronic records are complex. Electronic records have evolved from simple text-based files to complex digital objects that may contain embedded images (still and moving), sounds, hyperlinks, or spreadsheets with computational formulas. Some portions of electronic records, such as the content of dynamic Web pages, are created on the fly from databases and exist only during the viewing session. Others, such as e-mail, may contain multiple attachments, and they may be threaded (that is, related e-mail messages are linked into send–reply chains). They may depend heavily on context. For example, to understand the significance of an e-mail, we may need to know not only the identity but the position in the agency of the sender and recipients. (Was it sent by an executive or a low-level employee?) 
In addition, new technologies, such as blogs, wikis, tweets, and social media, continue to emerge, posing new challenges to records managers. ● Identification and classification of electronic records are difficult in a decentralized computing environment. The challenge of managing electronic records significantly increases with the decentralization of the computing environment. In the centralized environment of a mainframe computer, it is comparatively simple to identify, assess, and manage electronic records. However, in the decentralized environment of agencies’ office automation systems, every user can create electronic files of generally unstructured data that may be formal records and thus should be managed. Documents can be created on individuals’ desktop computers and stored on local hard drives. E-mail can come from outside the agency. In cases like these, the agency generally depends on the individual to identify the document or the e-mail as a record, and, through placing it in a recordkeeping system, associate it with its appropriate schedule, make it searchable and retrievable, and preserve it until it is due for disposal. As we reported in 2008, e-mail is especially problematic. E-mail embodies several major challenges to records management: ● It is unstructured data, and it can be about anything, or about several subjects in the same message, making it difficult to classify by content. ● There is a very large volume of it: one study estimates that a typical corporate user sends or receives around 110 messages a day. Further, there may be many copies of the same e-mail, which can increase storage requirements or require a means of determining which copy to keep. Keeping large numbers of messages potentially increases the time, effort, and expense needed to search for information in response to a business need or an outside inquiry, such as a Freedom of Information Act request. 
● It is complex: e-mail records may have multiple attachments in a variety of formats, they may include formatting that is important for meaning, and they include information about senders, recipients, and time of sending. Recordkeeping systems must be able to capture all this information and must maintain the association between the e-mail and its attachment(s). ● Its relevance depends on context. It may be part of a message thread that is necessary to understand its content, or it may discuss other documents or issues that are not well identified. An e-mail that says “I agree. Let’s do it” may be about a major decision or about going to lunch next week. ● It may not be obvious who is responsible for identifying an e-mail as a record and at what point. Under NARA regulations, both senders and recipients may be responsible for identifying records. However, an e-mail may have multiple recipients and be forwarded to still other recipients. As NARA has pointed out, the decision to move to electronic recordkeeping is inevitable, but as we and NARA have previously reported, implementing such systems requires that agencies commit the necessary resources for planning and implementation, including establishing a sound records management program as a basis. Further, automation will not, at least at the current state of the technology, solve the “end user problem”—relying on individual users to make sound record decisions. Nor will automation solve the problem of lack of priority, which, as our previous work has shown, is of long standing. However, several developments could lead to increased senior-level attention to records management: NARA’s use of public ratings as a spur to agency management, growing recognition of risks entailed in poor information and records management, the requirements and emphasis of the recent Open Government Directive, and the influence of congressional oversight. 
Senior management commitment, if followed through with effective implementation, could improve the governmentwide management of electronic and other records. Moving to electronic recordkeeping is not a simple or easy process. Agencies must balance the potential benefits against the costs of redesigning business processes and investing in technology. Our previous work has shown that such investments, like any information technology investment, require careful planning in the context of the specific agency’s circumstances, in addition to well-managed implementation. In 2007, a NARA study team examined the experiences of five federal agencies (including itself) with electronic records management applications, with a particular emphasis on how these organizations used these applications to manage e-mail. Among the major conclusions was that although the functionality of the software product itself is important, other factors are also crucial, such as agency culture and the quality of the records management program in place. With regard to e-mail in particular, the survey concluded that for some agencies, the volume of e-mail messages created and received may be too overwhelming to be managed at the desktop by thousands of employees across many sites using a records management application alone. A follow-up study in 2008 added that although a records management application offers compliant electronic recordkeeping, “it can be expensive to acquire, time consuming to prepare for and implement, requires user intervention to file records, and can be costly over the long haul for data migration and system upgrades.” NARA found that in most instances agencies had to work to overcome user resistance to using the system. This user challenge has led records management experts to believe that end users cannot be relied on to manage e-mail records, or indeed any other types of records. 
A recent Gartner study concluded that user-driven classification of records, especially e-mail, has failed and will continue to fail; a study by the Association for Information and Image Management (AIIM) stated “it is simply not plausible to expect all creators of records to perform accurate, manual declaration and classification.” According to Gartner, “What enterprises really need (and want) is a mechanism that automatically classifies messages by records management type … without user intervention.” At the time of writing (August 2007), Gartner described such technology as “in its infancy,” but expected it to mature rapidly because of high demand. This technology, automated records classification (sometimes called “autocategorization”), might help address the user problem. (The Air Force is currently working with autocategorization projects.) However, like other information technology, it requires resources for setup and maintenance to be effective, and it is not simple to implement. Further, according to AIIM, autocategorization might not work for an agency’s particular documents or file plan, and might not be sufficiently accurate or cost effective. Some proposals have been made to simplify the e-mail problem. Gartner recommends treating e-mail as a separate issue from general records management, perhaps by putting all e-mail in a single category of temporary records with a uniform retention period. Similarly, the Director of Litigation in NARA’s Office of General Counsel has suggested keeping all e-mail created by key senior officials (with some additional designations by agency components) as permanent and treating all the rest as temporary. Both proposals would make managing e-mail simpler, but could increase the risk that significant information will not be preserved. Raising the priority of records management has been and continues to be an uphill battle. 
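As an illustration only of the autocategorization idea discussed above (not any product's actual method, which would typically rely on trained models, metadata, and agency file plans), a minimal keyword-rule sketch with hypothetical categories and terms:

```python
# Hypothetical category -> keyword rules; the categories and keywords here
# are invented for illustration, not drawn from any agency's file plan.
RULES = {
    "policy_record": ["directive", "policy", "regulation"],
    "contract_record": ["contract", "award", "vendor"],
}
DEFAULT = "unscheduled_review"  # route unmatched messages to a human reviewer

def classify_message(text):
    """Return the first rule category whose keywords appear in the message."""
    lowered = text.lower()
    for category, keywords in RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return DEFAULT

print(classify_message("Attached is the revised travel policy directive."))
print(classify_message("Let's do lunch next week."))
```

Even this toy version shows the accuracy problem AIIM raises: the lunch invitation falls through to human review, but so would any substantive message that happens not to contain a listed keyword.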
As we have reported, government needs to prioritize the use of resources, and records management has not been a high priority. Further, records management can also be time- and resource-consuming and technically difficult to implement. NARA can influence this situation by providing effective guidance and assistance to agencies, as well as through its oversight and reporting activities. With its recently initiated annual self-assessment survey, NARA is responding to our earlier recommendations on oversight by beginning an effort to develop a comprehensive view of the state of federal records management as a basis for determining where its attention is most needed. Reporting the results of the survey, with scores for individual agencies and components, to the Congress, OMB, and the public is one way to help bring the records management issue to the attention of senior agency management. Another factor that could help raise awareness of the value of records management is the growing recognition of the risks of weak electronic records and information management, as a result of fear of potentially large costs to organizations that have to produce electronically stored information to respond to litigation, as well as well-publicized incidents of lost records, including e-mail. This recognition of risk is coupled with increased awareness of the value of organizations’ information assets; according to AIIM, the field of enterprise content management (which includes records management) has been accepted, driven by the need to control the content chaos that pervades local drives, file shares, email systems, and legacy document stores. As a result, according to an AIIM survey, the highest current priorities for ECM activity are electronic records management and managing e-mails as records. 
Further, recent Open Government initiatives, which emphasize the importance of making information available to the public for transparency and accountability, could be an additional impetus to addressing electronic records management. OMB’s Open Government Directive makes a direct link between open government and records management by requiring that each agency’s Open Government Plan include a link to a publicly available Web site that shows how the agency is meeting its existing records management requirements. More generally, the directive urges agencies to use modern technology to disseminate useful information. According to an Administration official, records management plays a crucial role in open government by ensuring accountability through proper documentation of government actions. Increased attention to information and records management could provide another spur encouraging agencies to devote resources to managing their electronic records more effectively. Finally, the priority that agencies give to addressing weaknesses may be increased by hearings such as this, which show that the Congress recognizes the importance of good records management for the efficient, effective, and accountable operations of government. In summary, federal records management has been given low priority for many years. However, the explosion of electronic information and records is an increasing risk to agencies, and could even become a drag on agencies’ ability to perform their missions if not brought under control. Raising visibility, as NARA is doing by publishing the results of its self-assessment survey, can raise the perception among senior agency officials of the importance of records management. 
Also significant is the push for Open Government, which, by heightening the importance of agencies’ providing information to the public, makes information a more central part of their missions and could help highlight the actual importance to agencies of actively managing their information. Strong indications from the Congress that records management needs more attention could also raise the priority among agency management. Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. If you should have questions about this testimony, please contact me at (202) 512-6304 or [email protected]. Other major contributors include Barbara Collier, Lee McCracken, J. Michael Resser, and Glenn Spiegel. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal agencies are increasingly using electronic means to create, exchange, and store information, and in doing so, they frequently create federal records: that is, information, in whatever form, that documents government functions, activities, decisions, and other important transactions. As the volume of electronic information grows, so does the challenge of managing electronic records. Both federal agency heads and the National Archives and Records Administration (NARA) have responsibilities for managing federal records. As requested, after providing some context about records management in the federal government and the roles of federal agencies and NARA, this testimony describes the challenges of electronic records management and potential means of addressing these challenges. In preparing this testimony, GAO relied primarily on its previous work, supplemented by analysis of publicly available documents. Under the Federal Records Act, agencies are to manage the creation, maintenance, use, and disposition of records in order to achieve adequate and proper documentation of the policies and transactions of the federal government and effective and economical management of agency operations. If records are poorly managed, individuals might lose access to benefits for which they are entitled, the government could be exposed to legal liabilities, and records of historical interest could be lost forever. NARA is responsible, among other things, for providing records management guidance, assistance, and oversight. However, as GAO has previously reported, records management has received low priority within the federal government. Prior reports have identified persistent weaknesses in federal records management, including a lack of policies and training. GAO's most recent report, in 2008, found weaknesses in e-mail management at the four agencies reviewed due in part to insufficient oversight and training. 
This year, NARA published the results of its first annual agency records management self-assessment survey, indicating that almost 80 percent of agencies were at moderate to high risk of improper disposition of records. Electronic records are challenging to manage, especially as electronic information is being created in volumes that pose a significant technical challenge to the ability to organize and make it accessible. Further, electronic records range in complexity from simple text files to highly complex formats with embedded computational formulas and dynamic content, and new formats continue to be created. Finally, in a decentralized environment, it is difficult to ensure that records are properly identified and managed by end users on individual desktops (the "user challenge"). E-mail is particularly problematic, because it combines all these challenges and is ubiquitous. Technology alone cannot solve the problem without commitment from agencies. Electronic recordkeeping systems can be challenging to implement and can require considerable resources for planning and implementation, including establishing a sound records management program as a basis. In addition, the "user problem" is not yet solved, particularly for e-mail messages. Further, automation will not solve the problem of lack of priority, which is of long standing. However, several developments may lead to increased senior-level attention to records management: NARA's use of public ratings as a spur to agency management, growing recognition of risks entailed in poor information and records management, the requirements and emphasis of the recent Open Government Directive, and the influence of congressional oversight. Senior management commitment, if followed through with effective implementation, could improve the governmentwide management of electronic and other records.
Many firms of varying sizes make up the U.S. petroleum industry. While some firms engage in only limited activities within the industry, such as exploration for and production of crude oil and natural gas or refining crude oil and marketing petroleum products, fully vertically integrated oil companies participate in all aspects of the industry. Before the 1970s, major oil companies that were fully vertically integrated controlled the global network for supplying, pricing, and marketing crude oil. However, the structure of the world crude oil market has dramatically changed as a result of such factors as the nationalization of oil fields by oil-producing countries, the emergence of independent oil companies, and the evolution of futures and spot markets in the 1970s and 1980s. Since U.S. oil prices were deregulated in 1981, the price paid for crude oil in the United States has been largely determined in the world oil market, which is mostly influenced by global factors, especially supply decisions of the Organization of Petroleum Exporting Countries (OPEC) and world economic and political conditions. The United States currently imports over 60 percent of its crude oil supply. In contrast, the bulk of the gasoline used in the United States is produced domestically. In 2001, for example, gasoline refined in the United States accounted for over 90 percent of the total domestic gasoline consumption. Companies that supply gasoline to U.S. markets also post the domestic gasoline prices. Historically, the domestic petroleum market has been divided into five regions: the East Coast region, the Midwest region, the Gulf Coast region, the Rocky Mountain region, and the West Coast region. (See fig. 1.) These regions are known as Petroleum Administration for Defense Districts (PADDs). 
Proposed mergers in all industries, including the petroleum industry, are generally reviewed by federal antitrust authorities—including the Federal Trade Commission (FTC) and the Department of Justice (DOJ)—to assess the potential impact on market competition. According to FTC officials, FTC generally reviews proposed mergers involving the petroleum industry because of the agency’s expertise in that industry. FTC analyzes these mergers to determine if they would likely diminish competition in the relevant markets and result in harm, such as increased prices. To determine the potential effect of a merger on market competition, FTC evaluates how the merger would change the level of market concentration, among other things. Conceptually, the higher the concentration, the less competitive the market is and the more likely that firms can exert control over prices. The ability to maintain prices above competitive levels for a significant period of time is known as market power. According to the merger guidelines jointly issued by DOJ and FTC, market concentration as measured by HHI is ranked into three separate categories: a market with an HHI under 1,000 is considered to be unconcentrated; if HHI is between 1,000 and 1,800 the market is considered moderately concentrated; and if HHI is above 1,800, the market is considered highly concentrated. While concentration is an important aspect of market structure—the underlying economic and technical characteristics of an industry—other aspects of market structure that may be affected by mergers also play an important role in determining the level of competition in a market. These aspects include barriers to entry, which are market conditions that provide established sellers an advantage over potential new entrants in an industry, and vertical integration. Over 2,600 merger transactions occurred from 1991 through 2000 involving all three segments of the U.S. petroleum industry. 
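The HHI described above is computed by summing the squares of each firm's percentage market share. A minimal sketch, using hypothetical market shares, shows how the index responds to consolidation under the DOJ/FTC thresholds cited in the text:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage market shares."""
    return sum(share ** 2 for share in shares)

def classify(index):
    """Concentration categories from the DOJ/FTC merger guidelines."""
    if index < 1000:
        return "unconcentrated"
    if index <= 1800:
        return "moderately concentrated"
    return "highly concentrated"

# Hypothetical market: ten equal firms, 10 percent share each.
equal_market = [10] * 10
print(hhi(equal_market), classify(hhi(equal_market)))

# Hypothetical post-merger market: two 30 percent firms, four 10 percent firms.
merged_market = [30, 30, 10, 10, 10, 10]
print(hhi(merged_market), classify(hhi(merged_market)))
```

The squaring means mergers among large firms move the index far more than mergers among small ones: in the hypothetical example, combining firms into two 30 percent players pushes the index from 1,000 to 2,200, crossing into the highly concentrated range.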
Almost 85 percent of the mergers occurred in the upstream segment (exploration and production), while the downstream segment (refining and marketing of petroleum) accounted for about 13 percent, and the midstream segment (transportation) accounted for over 2 percent. The vast majority of the mergers—about 80 percent—involved one company’s purchase of a segment or asset of another company, while about 20 percent involved the acquisition of a company’s total assets by another so that the two became one company. Most of the mergers occurred in the second half of the decade, including those involving large partially or fully vertically integrated companies. Petroleum industry officials and experts we contacted cited several reasons for the industry’s wave of mergers in the 1990s, including achieving synergies, increasing growth and diversifying assets, and reducing costs. Economic literature indicates that enhancing market power is also sometimes a motive for mergers. Ultimately, these reasons mostly relate to companies’ desire to maximize profit or stock values. Mergers in the 1990s contributed to increases in market concentration in the downstream (refining and marketing) segment of the U.S. petroleum industry, while the upstream segment experienced little change. Overall, the refining market experienced increasing levels of market concentration (based on refinery capacity) in all five PADDs during the 1990s, especially during the latter part of the decade, but the levels as well as the changes of concentration varied geographically. In PADD I—the East Coast—the HHI for the refining market increased from 1136 in 1990 to 1819 in 2000, an increase of 683 (see fig. 2). Consequently, this market went from moderately concentrated to highly concentrated. Compared to other U.S. PADDs, a greater share of the gasoline consumed in PADD I comes from other supply sources—mostly from PADD III and imports—than within the PADD. 
Consequently, some industry officials and experts believe that the competitive impact of increased refiner concentration within the PADD could be mitigated. For PADD II (the Midwest), the refinery market concentration increased from 699 to 980—an increase of 281—between 1990 and 2000. However, as figure 3 shows, this PADD’s refining market remained unconcentrated at the end of the decade. According to EIA’s data, as of 2001, the quantity of gasoline refined in PADD II was slightly less than the quantity consumed within the PADD. The refining market in PADD III (the Gulf Coast), like PADD II, was unconcentrated as of the end of 2000, although its HHI increased by 170—from 534 in 1990 to 704 in 2000 (see fig. 4). According to EIA’s data, much more gasoline is refined in PADD III than is consumed within the PADD, making PADD III the largest net exporter of gasoline to other parts of the United States. The HHI for the refining market in PADD IV—the Rocky Mountain region—where gasoline production and consumption are almost balanced—increased by 95 between 1990 and 2000. This increase raised the PADD’s refining HHI from 1029 in 1990 to 1124 in 2000, keeping it within the moderate level of market concentration (see fig. 5). The refining market’s HHI for PADD V—the West Coast—increased from 937 to 1267, an increase of 330, between 1990 and 2000 and changed the West Coast refining market, which produces most of the gasoline it consumes, from unconcentrated to moderately concentrated by the end of the decade (see fig. 6). We estimated a high and statistically significant degree of correlation between merger activity and the HHIs for refining in PADDs I, II, and V for 1991 through 2000. Specifically, the corresponding correlation numbers are 91 percent for PADD V (West Coast), 93 percent for PADD II (Midwest), and 80 percent for PADD I (East Coast). 
While mergers were positively correlated with refining HHIs in PADDs III and IV—the Gulf Coast and the Rocky Mountains—the estimated correlations were not statistically significant. In wholesale gasoline markets, market concentration increased broadly throughout the United States between 1994 and 2002. Specifically, we found that 46 states and the District of Columbia had moderately or highly concentrated markets by 2002, compared to 27 in 1994. Evidence from various sources indicates that, in addition to increasing market concentration, mergers also contributed to changes in other aspects of market structure in the U.S. petroleum industry that affect competition—specifically, vertical integration and barriers to entry. However, we could not quantify the extent of these changes because of a lack of relevant data. Vertical integration can conceptually have both pro- and anticompetitive effects. Based on anecdotal evidence and economic analyses by some industry experts, we determined that a number of mergers that have occurred since the 1990s have led to greater vertical integration in the U.S. petroleum industry, especially in the refining and marketing segment. For example, we identified eight mergers that occurred between 1995 and 2001 that might have enhanced the degree of vertical integration, particularly in the downstream segment. Concerning barriers to entry, our interviews with petroleum industry officials and experts provide evidence that mergers had some impact on the U.S. petroleum industry. Barriers to entry could have implications for market competition because companies that operate in concentrated industries with high barriers to entry are more likely to possess market power. Industry officials pointed out that large capital requirements and environmental regulations constitute barriers for potential new entrants into the U.S. refining business. 
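The correlation figures cited above are standard Pearson coefficients between annual merger activity and the refining HHI series. A minimal sketch, using entirely hypothetical yearly data rather than the actual PADD series:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical annual merger counts and year-end refining HHI for one region.
mergers = [5, 8, 6, 12, 15, 20, 25, 30, 28, 35]
hhi_series = [700, 720, 730, 780, 850, 900, 950, 980, 970, 1000]

print(f"correlation: {pearson(mergers, hhi_series):.0%}")
```

A high coefficient indicates that the two series moved together over the decade; as the text notes, it does not by itself establish statistical significance, which is why the PADD III and IV correlations were reported separately as not significant.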
For example, the officials indicated that a typical refinery could cost billions of dollars to build and that it may be difficult to obtain the necessary permits from the relevant state or local authorities. According to some petroleum industry officials that we interviewed, gasoline marketing in the United States has changed in two major ways since the 1990s. First, the availability of unbranded gasoline has decreased, partly due to mergers. Officials noted that unbranded gasoline is generally priced lower than branded. They generally attributed the decreased availability of unbranded gasoline to one or more of the following factors: ● There are now fewer independent refiners, who typically supply mostly unbranded gasoline. These refiners have been acquired by branded companies, have grown large enough to be considered a brand, or have simply closed down. ● Partially or fully vertically integrated oil companies have sold or mothballed some refineries. As a result, some of these companies now have only enough refinery capacity to supply their own branded needs, with little or no excess to sell as unbranded. ● Major branded refiners are managing their inventory more efficiently, ensuring that they produce only enough gasoline to meet their current branded needs. We could not quantify the extent of the decrease in the unbranded gasoline supply because the data required for such analyses do not exist. The second change identified by these officials is that refiners now prefer dealing with large distributors and retailers because they present a lower credit risk and because it is more efficient to sell a larger volume through fewer entities. Refiners manifest this preference by setting minimum volume requirements for gasoline purchases. These requirements have motivated further consolidation in the distributor and retail sectors, including the rise of hypermarkets. 
Our econometric modeling shows that the mergers we examined mostly led to higher wholesale gasoline prices in the second half of the 1990s. The majority of the eight specific mergers we examined—Ultramar Diamond Shamrock (UDS)-Total, Tosco-Unocal, Marathon-Ashland, Shell-Texaco I (Equilon), Shell-Texaco II (Motiva), BP-Amoco, Exxon-Mobil, and Marathon Ashland Petroleum (MAP)-UDS—resulted in higher prices of wholesale gasoline in the cities where the merging companies supplied gasoline before they merged. The effects of some of the mergers were inconclusive, especially for boutique fuels sold in the East Coast and Gulf Coast regions and in California. For the seven mergers that we modeled for conventional gasoline, five led to increased prices, especially the MAP-UDS and Exxon-Mobil mergers, where the increases generally exceeded 2 cents per gallon, on average. For the four mergers that we modeled for reformulated gasoline, two— Exxon-Mobil and Marathon-Ashland—led to increased prices of about 1 cent per gallon, on average. In contrast, the Shell-Texaco II (Motiva) merger led to price decreases of less than one-half cent per gallon, on average, for branded gasoline only. For the two mergers—Tosco-Unocal and Shell-Texaco I (Equilon)—that we modeled for gasoline used in California, known as California Air Resources Board (CARB) gasoline, only the Tosco-Unocal merger led to price increases. The increases were for branded gasoline only and exceeded 6 cents per gallon, on average. For market concentration, which captures the cumulative effects of mergers as well as other competitive factors, our econometric analysis shows that increased market concentration resulted in higher wholesale gasoline prices. Prices for conventional (non-boutique) gasoline, the dominant type of gasoline sold nationwide from 1994 through 2000, increased by less than one-half cent per gallon, on average, for branded and unbranded gasoline. 
The increases were larger in the West than in the East—between one-half cent and one cent per gallon in the West, and about one-quarter cent in the East (for branded gasoline only), on average. Price increases for boutique fuels sold in some parts of the East Coast and Gulf Coast regions and in California were larger than those for conventional gasoline. Wholesale prices increased by an average of about 1 cent per gallon for boutique fuel sold in the East Coast and Gulf Coast regions between 1995 and 2000, and by an average of over 7 cents per gallon in California between 1996 and 2000. Our analysis shows that wholesale gasoline prices were also affected by other factors included in the econometric models, including gasoline inventories relative to demand, supply disruptions in some parts of the Midwest and the West Coast, and refinery capacity utilization rates. For refinery capacity utilization rates, we found that prices were higher by an average of about one-tenth to two-tenths of 1 cent per gallon when utilization rates increased by 1 percent, because high utilization rates leave little room for error in predicting short-run demand. During the period of our study, refinery capacity utilization rates at the national level averaged about 94 percent per week. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841. Key contributors to this testimony included Godwin Agbara, John A. Karikari, and Cynthia Norris. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Gasoline is subject to dramatic price swings. A multitude of factors affect U.S. gasoline markets, including world crude oil costs and limited refining capacity. Since the 1990s, another factor affecting U.S. gasoline markets has been a wave of mergers in the petroleum industry, several between large oil companies that had previously competed with each other. For example, in 1999, Exxon, the largest U.S. oil company, merged with Mobil, the second largest. This testimony is based primarily on Energy Markets: Effects of Mergers and Market Concentration in the U.S. Petroleum Industry (GAO-04-96, May 17, 2004). This report examined mergers in the industry from the 1990s through 2000, the changes in market concentration (the distribution of market shares among competing firms) and other factors affecting competition in the industry, how U.S. gasoline marketing has changed since the 1990s, and how mergers and market concentration in the industry have affected U.S. gasoline prices at the wholesale level. To address these issues, GAO purchased and analyzed a large body of data and developed state-of-the-art econometric models for isolating the effects of eight specific mergers and increased market concentration on wholesale gasoline prices. Experts peer-reviewed GAO's analysis. Mergers have altered the structure of the U.S. petroleum industry, including the refining market. Over 2,600 mergers have occurred in the U.S. petroleum industry since the 1990s, mostly later in the period. Industry officials cited various reasons for the mergers, particularly the need for increased efficiency and cost savings. Economic literature also suggests that firms sometimes merge to enhance their ability to control prices. Partly because of the mergers, market concentration has increased in the industry, mostly in the downstream (refining and marketing) segment. 
For example, market concentration in refining increased from moderately to highly concentrated in the East Coast and from unconcentrated to moderately concentrated in the West Coast. Concentration in the wholesale gasoline market increased substantially from the mid-1990s so that by 2002, most states had either moderately or highly concentrated wholesale gasoline markets. Anecdotal evidence suggests that mergers also have changed other factors affecting competition, such as the ability of new firms to enter the market. Two major changes have occurred in U.S. gasoline marketing related to mergers, according to industry officials. First, the availability of generic gasoline, which is generally priced lower than branded gasoline, has decreased substantially. Second, refiners now prefer to deal with large distributors and retailers, which has motivated further consolidation in distributor and retail markets. Based on data from the mid-1990s through 2000, GAO's econometric analyses indicate that mergers and increased market concentration generally led to higher wholesale gasoline prices in the United States. Six of the eight mergers GAO modeled led to price increases, averaging about 1 cent to 2 cents per gallon. Increased market concentration, which reflects the cumulative effects of mergers and other competitive factors, also led to increased prices in most cases. For example, wholesale prices for boutique fuels sold in the East and Gulf Coasts--fuels supplied by fewer refiners than conventional gasoline--increased by about 1 cent per gallon, while prices for boutique fuels sold in California increased by over 7 cents per gallon. GAO also identified price increases of one-tenth of a cent to 7 cents that were caused by other factors included in the models, particularly low gasoline inventories relative to demand, supply disruptions in some regions, and high refinery capacity utilization rates. 
For example, we found that a 1 percent increase in refinery capacity utilization rates resulted in price increases of one-tenth to two-tenths of a cent per gallon. FTC disagreed with GAO's methodology and findings. However, GAO believes its analyses are sound.
Among the pre-Medicare population, the primary source of health insurance is private coverage. In the first half of 2012, nearly 69 percent of individuals in this population were privately insured. An additional 13 percent of individuals obtained coverage through government programs such as Medicaid. However, a significant portion—more than 18 percent—was uninsured. Previous research has demonstrated that individuals with health insurance coverage tend to be in better health than individuals without coverage. However, research regarding the extent to which having prior health insurance coverage affects spending and use of medical services after enrolling in Medicare has produced inconsistent results. For example, one group of researchers found that having prior insurance was linked to lower spending and lower rates of hospitalization after enrolling in Medicare, while another group of researchers found that having prior insurance had no effect on beneficiaries’ spending or rates of hospitalization after Medicare enrollment. This latter group of researchers found, however, that beneficiaries without prior insurance were less likely to visit physician offices and more likely to visit hospital emergency and outpatient departments after enrolling in Medicare, which could indicate that beneficiaries without prior insurance continued to access the health care system differently after Medicare enrollment. Subsequent commentary and analysis by both research groups suggests that the conflicting results may be primarily attributable to different definitions of prior insurance and different analytical approaches to control for differences in beneficiaries with and without prior insurance. The group that found that having prior insurance was linked to lower spending used a more rigorous definition of prior insurance based on a longitudinal assessment of insurance coverage before age 65 rather than a point-in-time assessment. 
This group included beneficiaries who were enrolled in Medicare, Medicaid, and other government health programs before age 65 in its analysis and used a statistical weighting methodology to control for the possibility of reverse causality between health status and insurance coverage. More specifically, some individuals may have experienced declining health before age 65 that led to loss of employment, loss of private insurance coverage, and subsequent enrollment in government health programs. The group that did not find that having prior insurance was linked to lower spending criticized the inclusion of these beneficiaries, noting that many individuals transition to government health programs before age 65 because of poor health, thereby resulting in an overestimate of the effect of having prior insurance on their Medicare spending after age 65. These researchers also criticized the statistical weighting methodology used to control for the possibility that beneficiaries entered these programs because of poor health, contending that the data used in the weighting methodology were not sufficiently detailed to adequately adjust for this possibility. Beneficiaries with prior continuous insurance were more likely than those without prior continuous insurance to report being in good health or better in the 6 years after Medicare enrollment. On average, the predicted probability of reporting being in good health or better in the first 2 years in Medicare was approximately 84 percent for beneficiaries with prior continuous insurance and approximately 79 percent for beneficiaries without prior continuous insurance. Although the predicted probabilities of beneficiaries who reported being in good health or better decreased over time for both those with and without prior continuous insurance, the percentage point difference increased slightly. 
In total, having prior continuous insurance raised the predicted probability that a beneficiary reported being in good health or better by nearly 6 percentage points in the first 6 years after Medicare enrollment. (See table 1.) According to previous research, there are reasons why Medicare beneficiaries with prior continuous insurance may be healthier than those without prior continuous insurance. Because of financial constraints, beneficiaries without prior continuous insurance may have difficulty accessing medical services that could help them improve their health before they enroll in Medicare. In addition, being uninsured before Medicare may have effects on beneficiaries’ health that remain for some time. For example, if a beneficiary without prior continuous insurance is diagnosed with diabetes and has inadequate access to care before Medicare, the beneficiary may develop complications that increase the risk for adverse health events for years to come, even after the diabetes is controlled. There were differences in Medicare spending and use of services between beneficiaries with and without prior continuous insurance. In particular, compared with beneficiaries without prior continuous insurance, beneficiaries with prior continuous insurance had significantly lower total spending during the first year in Medicare. Beneficiaries with prior continuous insurance had lower total program spending during the first year in Medicare compared with those without prior continuous insurance. During the first year in Medicare, average predicted total spending for beneficiaries with and without prior continuous insurance was $4,390 and $6,733, respectively—a difference of $2,343, or 35 percent. Because the difference in total spending was the greatest during the first year in Medicare, it is possible that beneficiaries without prior continuous insurance had a pent-up demand for medical services in anticipation of coverage at age 65. 
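The 35 percent figure cited above appears to express the spending difference relative to the higher spending of beneficiaries without prior continuous insurance, which the predicted totals bear out. A quick check of that arithmetic:

```python
# Predicted first-year Medicare spending from the report:
# $4,390 (with prior continuous insurance) vs. $6,733 (without).
with_prior = 4390
without_prior = 6733

# Dollar difference, and the difference as a share of spending by
# beneficiaries WITHOUT prior continuous insurance (the larger base).
difference = without_prior - with_prior
pct = 100 * difference / without_prior

print(difference, round(pct))  # 2343 35
```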
Table 2 shows predicted spending, as well as the difference in predicted spending, during the first 5 years in Medicare for beneficiaries with and without prior continuous insurance. Beneficiaries with prior continuous insurance had more physician office visits during the first 5 years in Medicare than those without prior continuous insurance. Specifically, during the first 5 years in Medicare, the difference in the average predicted number of physician office visits between beneficiaries with and without prior continuous insurance ranged from 1.3 to 2.5, or 23 to 46 percent (see table 4). This utilization pattern may indicate that, even after Medicare enrollment, beneficiaries with prior continuous insurance continued to access medical services differently compared with those without prior continuous insurance. For example, beneficiaries with prior continuous insurance may have been more likely to have physician office visits before Medicare if their insurance covered these visits. According to our analyses, the number of institutional outpatient visits was similar for beneficiaries with and without prior continuous insurance. However, because we found that beneficiaries without prior continuous insurance had higher institutional outpatient spending, it is possible that they required more costly outpatient care than beneficiaries with prior continuous insurance. Previous research regarding the extent to which health insurance coverage prior to Medicare enrollment affects beneficiaries’ spending and use of services after enrollment has been inconclusive, possibly because of different definitions of prior insurance and different approaches for dealing with the potential for reverse causality between health status and health insurance coverage. 
Like researchers who did not find significant differences in Medicare spending between beneficiaries with and without prior insurance coverage, we excluded individuals who were enrolled in government health programs prior to age 65 from our analysis because of the possibility that they lost insurance coverage because of poor health, which could have resulted in an overestimate of the effect of having prior insurance on Medicare spending after age 65. However, like researchers who did find significant differences in Medicare spending between these groups, we used a more rigorous definition of prior insurance based on a longitudinal assessment of insurance coverage before age 65 rather than a single point in time. Using our methodology, we found significant differences in Medicare spending between beneficiaries with and without prior continuous insurance. This study adds to the body of evidence suggesting that beneficiaries with prior insurance used fewer or less costly medical services in Medicare compared with those without prior insurance, because they either were in better health or were accustomed to accessing medical services differently. In particular, we found that beneficiaries with prior continuous insurance were more likely than those without prior continuous insurance to report being in good health or better in the 6 years after Medicare enrollment. Additionally, we found that beneficiaries without prior continuous insurance had higher total and institutional outpatient spending but did not have higher spending for physician and other noninstitutional services, suggesting that they required more intensive medical services or that they were accustomed to visiting hospitals more than physician offices. This suggests that the extent to which individuals enroll in private insurance before age 65 has implications for beneficiaries’ health status and Medicare spending. We provided a draft of this report to the Department of Health and Human Services for review. 
In written comments, reproduced in appendix II, the department highlighted a key finding in our report that beneficiaries with prior insurance used fewer or less costly medical services in Medicare compared with those without prior insurance. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees and the Administrator of the Centers for Medicare & Medicaid Services (CMS). The report also will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This appendix describes the data and methods we used to address our research objectives. We used data from the Health and Retirement Study (HRS) and Medicare claims. HRS is a longitudinal panel study that surveys a representative sample of more than 26,000 Americans over the age of 50 every 2 years. We used a subset of HRS data from 1996 through 2010 to obtain information on beneficiaries’ health insurance coverage before Medicare, health status in Medicare, demographic characteristics, potential health risk factors, and diagnoses of health conditions. Because HRS data are survey data, these data were self-reported. We also used data from the Medicare Beneficiary Annual Summary Files and the Medicare Denominator Files from 2001 through 2010 to obtain information on Medicare spending and use of services. We worked with Acumen, LLC, to link beneficiaries’ HRS data with their Medicare data and to conduct statistical analyses of their spending and use of services. 
We assessed the reliability of the HRS and Medicare data and determined that the data were adequate for our purposes. We conducted our work from July 2011 to December 2013 in accordance with generally accepted government auditing standards. To determine whether Medicare beneficiaries had continuous health insurance coverage before Medicare, we used HRS data to develop a composite measure. We categorized beneficiaries as having prior continuous insurance if they reported receiving private insurance through their employer or their spouse’s employer in the three consecutive HRS surveys before Medicare enrollment at age 65—a period spanning approximately 6 years. To analyze beneficiaries’ health status in Medicare, we collapsed the HRS self-reported health status measure, which uses a scale from 1 (excellent) to 5 (poor), to two categories. We classified beneficiaries as being in good health or better if they reported being in excellent, very good, or good health. We also used HRS data to develop a set of independent variables for our analyses. Specifically, we used data on demographic characteristics (census division, education level, income, marital status, race, and sex), potential health risk factors (body mass index and smoking status), and ever having had a diagnosis of any of eight health conditions (arthritis, cancer, diabetes, heart problem, high blood pressure, lung problem, psychological problem, and stroke). To analyze beneficiaries’ spending and use of services, we used data from the Medicare Beneficiary Annual Summary Files. In particular, we obtained data on total, institutional outpatient, institutional inpatient, home health, and physician and other noninstitutional spending; institutional outpatient and physician office visits; and hospital stays. We also used enrollment data from the Beneficiary Annual Summary Files and Medicare Denominator Files to determine which beneficiaries to include in our analyses of spending and use of services. 
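The composite measure described above can be sketched as a simple classification rule: a beneficiary counts as having prior continuous insurance only if employer-based private coverage was reported in all three consecutive pre-Medicare HRS surveys. The function and category labels below are illustrative assumptions, not GAO's actual coding of the HRS variables:

```python
def has_prior_continuous_insurance(survey_responses):
    """Illustrative sketch of the composite measure (not GAO's code).

    survey_responses: the coverage type reported in each of the three
    consecutive HRS surveys before Medicare enrollment at age 65.
    The category labels are hypothetical placeholders.
    """
    private = {"employer", "spouse_employer"}
    # All three pre-Medicare surveys must report employer-based
    # private coverage for the beneficiary to qualify.
    return len(survey_responses) == 3 and all(r in private for r in survey_responses)

print(has_prior_continuous_insurance(["employer", "employer", "spouse_employer"]))  # True
print(has_prior_continuous_insurance(["employer", "uninsured", "employer"]))        # False
```

A longitudinal rule like this is stricter than a point-in-time measure: a single gap in coverage across the roughly 6-year window disqualifies the beneficiary.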
Because we used HRS data on beneficiaries’ self-reported health status that were collected about every 2 years, we defined three groups of beneficiaries, drawn from multiple survey years spanning 2001 through 2010, who were in (1) their first and second years of Medicare, (2) their third and fourth years of Medicare, and (3) their fifth and sixth years of Medicare (see fig. 1). This approach allowed us to measure the effect of prior continuous insurance on self-reported health status at three points in time after Medicare enrollment. Because we used Medicare data on beneficiaries’ program spending and use of services that were collected every year, we defined five groups of beneficiaries who were in their first, second, third, fourth, and fifth years of enrollment from 2001 through 2010 (see fig. 2). This approach allowed us to measure the effect of prior continuous insurance on spending and use of services for beneficiaries in each of the first 5 years of Medicare enrollment. For all of our analyses, we excluded beneficiaries from our study populations because of missing data and design and methodological issues. Specifically, we excluded beneficiaries who died before age 65; beneficiaries who were over age 65 as of January 31, 2001; beneficiaries who did not participate in all three HRS surveys in their pre-Medicare period; and beneficiaries who did not respond to relevant HRS questions about insurance during their pre-Medicare period. We excluded beneficiaries who were enrolled in Medicare or Medicaid before age 65 because their enrollment in these programs may have been due, at least in part, to poor health, which would indicate that their health status affected their insurance coverage rather than the other way around. We chose to exclude these beneficiaries to avoid overestimating the effects of having prior continuous insurance on health status, spending, and use of services. 
In addition, we excluded beneficiaries who reported receiving coverage from the Veterans Health Administration before age 65 because their Medicare spending and use of services might not fully represent their overall use of medical services. For our analyses of spending and use of services, we applied additional exclusion criteria to define our study populations. We excluded Medicare Advantage beneficiaries because they did not have fee-for-service data that could be linked to HRS data. In addition, we excluded beneficiaries who were not enrolled in both Medicare Parts A and B for all months they were alive during a given year because we did not have complete information on their spending and use of services. After the exclusions, the number of beneficiaries in our three study populations for our health status analyses ranged from 3,201 for the first group to 2,001 for the third group. The number of beneficiaries in our five study populations for our analyses of spending and use of services ranged from 1,592 for the first group to 1,152 for the fifth group. To examine the relationship between Medicare beneficiaries’ prior continuous insurance and their self-reported health status, we used logistic regression analysis. In particular, we modeled beneficiaries’ self-reported health status during three periods after Medicare enrollment. We also predicted probabilities of their reporting being in good health or better assuming both that they did and that they did not have prior continuous insurance. In all of our analyses, we included the following independent variables: prior continuous insurance, demographic characteristics, potential health risk factors, and ever having had a diagnosis of any of eight health conditions. See table 5 for an example of results from one of the three models that we conducted for our analyses of health status. 
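In a logistic regression of this kind, a predicted probability is obtained by applying the logistic function to the fitted linear predictor. The intercept and coefficient below are hypothetical, chosen only to produce magnitudes similar to those reported (roughly 79 percent without and 84 percent with prior continuous insurance); they are not GAO's estimated parameters, and the sketch omits the other covariates in the actual models:

```python
import math

def predicted_probability(intercept, coef_prior, has_prior_insurance):
    """Logistic-function transform of a fitted linear predictor.

    intercept, coef_prior: hypothetical fitted parameters (log-odds scale).
    """
    logit = intercept + coef_prior * (1 if has_prior_insurance else 0)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical parameters yielding probabilities near the report's figures.
p_without = predicted_probability(1.32, 0.34, False)
p_with = predicted_probability(1.32, 0.34, True)
print(round(p_without, 2), round(p_with, 2))  # 0.79 0.84
```

In the actual analysis, each beneficiary's probability would be predicted twice, once setting the prior-insurance indicator to 1 and once to 0, and the results averaged to obtain the roughly 6-percentage-point difference reported.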
To examine the relationship between Medicare beneficiaries’ prior continuous insurance and their spending and use of services, we used generalized linear models because our spending and service variables had skewed distributions and a high proportion of zero values. For example, among beneficiaries in their first year of Medicare enrollment, 30 percent of our study population had no institutional outpatient visits and therefore no institutional outpatient spending. We modeled total, institutional outpatient, and physician and other noninstitutional spending and institutional outpatient and physician office visits for beneficiaries in each of the first 5 years of Medicare enrollment. We predicted values for these variables assuming both that beneficiaries did and that beneficiaries did not have prior continuous insurance. In all of our analyses, we included the following independent variables: prior continuous insurance, demographic characteristics, potential health risk factors, ever having had a diagnosis of any of eight health conditions, and the number of months a beneficiary was alive during the year. For our spending analyses, we used the price index from the Personal Health Care Expenditure component of the CMS National Health Expenditure Accounts to express all spending in 2011 dollars. This approach adjusted for inflation by removing the effects of health care price-level changes between 2001 and 2010. See table 6 for an example of results from 1 of the 25 models that we ran for our analyses of spending and use of services. Because we used multiple exclusion criteria to define our study populations, our results might not be representative of the entire Medicare population. To compare our study populations with the entire Medicare population, we examined certain characteristics of these populations—gender, race, and census division (see tables 7 and 8). 
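The inflation adjustment described above rescales each year's nominal spending to 2011 dollars by the ratio of the base-year index to the spending year's index. The index values below are hypothetical placeholders, not the actual CMS Personal Health Care Expenditure index:

```python
# Hypothetical price-index values with 2011 as the base year (index = 1.00).
# These are illustrative only, not the CMS National Health Expenditure
# Accounts figures.
PRICE_INDEX = {2001: 0.75, 2010: 0.98, 2011: 1.00}

def to_2011_dollars(nominal_spending, year):
    # Deflate or inflate by the ratio of the 2011 index to the spending
    # year's index, removing health care price-level changes.
    return nominal_spending * PRICE_INDEX[2011] / PRICE_INDEX[year]

print(round(to_2011_dollars(3000.0, 2001), 2))  # 4000.0
```

Under these illustrative index values, $3,000 of spending in 2001 corresponds to $4,000 in 2011 dollars, so spending across the 2001-2010 study period can be compared on a constant-dollar basis.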
We selected these characteristics because data on these characteristics were available in each of the data sources that we used. Because we only had access to Medicare Denominator File data for 2003 through 2010, we compared characteristics for beneficiaries in their first or second year of Medicare enrollment from 2003 through 2010. On the basis of this analysis, we determined that our study populations and the entire Medicare population were comparable. However, we noted small differences between the populations. For example, compared with the entire Medicare population, our study populations included slightly higher percentages of females. We excluded Medicare beneficiaries who were enrolled in Medicaid before age 65 from our primary analyses because their enrollment in this program may have been due, at least in part, to poor health. To determine the effect, if any, of removing these beneficiaries from our analyses, we conducted supplementary analyses of Medicare spending and use of services that included these beneficiaries. Results for most of the dependent variables (e.g., total spending, physician and other noninstitutional spending, physician office visits, and institutional outpatient visits) were similar to our original results. However, when we included these beneficiaries, those with prior continuous insurance had lower institutional outpatient spending only during the first year in Medicare, rather than during the first and second years. In addition to the contact listed above, Christine Brudevold, Assistant Director; George Bogart; David Grossman; Elizabeth T. Morrison; Aubrey Naffis; and Eric Wedum made key contributions to this report.
Nearly 7 million individuals aged 55 to 64--more than 18 percent of the pre-Medicare population--lacked health insurance coverage in the first half of 2012. Health insurance protects individuals from the risk of financial hardship when they need medical care, and uninsured individuals may refrain from seeking necessary care because of the cost. If they forgo medical care beforehand, these individuals may be in worse health and need costlier medical services after enrolling in Medicare compared to those with prior insurance. GAO was asked to review the effects of having prior health insurance coverage on Medicare beneficiaries. This report examines the health status, program spending, and use of services of Medicare beneficiaries with and without continuous health insurance coverage before Medicare enrollment. To examine the effects of beneficiaries' prior insurance coverage, GAO used data from the Health and Retirement Study and Medicare claims to conduct two types of multivariate analysis. GAO predicted probabilities of beneficiaries' reporting being in good health or better and values for program spending and beneficiaries' use of services. In comments on a draft of this report, the Department of Health and Human Services highlighted a key finding in GAO's report that beneficiaries with prior insurance used fewer or less costly medical services in Medicare compared with those without prior insurance. Beneficiaries with continuous health insurance coverage for approximately 6 years before enrolling in Medicare were more likely than those without prior continuous insurance to report being in good health or better during the first 6 years in Medicare. In particular, having prior continuous insurance raised the predicted probability that a beneficiary reported being in good health or better by nearly 6 percentage points during the first 6 years in Medicare. 
Beneficiaries with prior continuous insurance had lower total program spending during the first year in Medicare compared with those without prior continuous insurance. Specifically, during the first year in Medicare, beneficiaries with prior continuous insurance had approximately $2,300, or 35 percent, less in average predicted total spending than those without prior continuous insurance. Similarly, beneficiaries with prior continuous insurance had lower institutional outpatient spending--for example, spending for services provided in a hospital outpatient setting--during the first and second years in Medicare compared with those without prior continuous insurance. In contrast, physician and other noninstitutional spending--spending for services provided by physicians, clinical laboratories, free-standing ambulatory surgical centers, and other noninstitutional providers--was similar during the early years in Medicare for beneficiaries with and without prior continuous insurance. However, during the fourth and fifth years in Medicare, beneficiaries with prior continuous insurance had physician and other noninstitutional spending that was about 30 percent higher than that of beneficiaries without prior continuous insurance. Beneficiaries with prior continuous insurance had more physician office visits during the first 5 years in Medicare compared with those without prior continuous insurance. Specifically, during the first 5 years in Medicare, the difference in the average predicted number of physician office visits between beneficiaries with and without prior continuous insurance ranged from 1.3 to 2.5, or 23 to 46 percent. This utilization pattern may indicate that, even after Medicare enrollment, beneficiaries with prior continuous insurance continued to access medical services differently from those without prior continuous insurance. 
The number of institutional outpatient visits was similar for beneficiaries with and without prior continuous insurance for the first 5 years after Medicare enrollment. Taken together, GAO's results show that, consistent with those of some other researchers, beneficiaries with prior continuous insurance used fewer or less costly medical services compared with beneficiaries without such insurance during the early years in Medicare, because they either were in better health or were accustomed to accessing medical services differently. This suggests that the extent to which individuals enroll in private insurance before age 65 has implications for beneficiaries' health status and Medicare spending.
The military exchanges are nonappropriated fund activities that are established, controlled by, and operated for the benefit of DOD components. Their mission is to provide (1) authorized patrons with articles and services necessary for their health, comfort, and convenience and (2) DOD’s morale, welfare, and recreation (MWR) programs with a source of funding. In carrying out this dual mission, the exchanges operate retail stores, similar to department stores, and provide a host of other services and specialty stores, including furniture stores, florist shops, barber and beauty shops, optical shops, liquor stores, and fast-food restaurants. For fiscal year 1999, the exchange services had over $9 billion in sales. For the past several years, about 70 percent of the exchange services’ profits from sales were allocated to MWR activities and about 30 percent to new exchange facilities and related capital projects. Military exchange services’ food operations generated about $734 million in sales during fiscal year 1999, or about 8 percent of the exchanges’ total sales. The sales occurred at about 2,200 food outlets operated by three military exchanges—the Army and Air Force Exchange Service (AAFES), the Navy Exchange Service Command (NEXCOM), and the Marine Corps Community Services (MCCS). These outlets, located on military installations around the world, included name-brand, fast-food restaurants, in-house signature brand restaurants, and more generic food operations such as cafeterias and snack bars. As shown by figure 1, over 50 percent of the $734 million in sales came from name-brand, fast-food outlets. Modern name-brand, fast-food restaurants began to appear on military installations in the early 1980s. In 1984, the Burger King Corporation and AAFES signed a 20-year, master contract authorizing AAFES to construct and operate 185 Burger King restaurants around the world. 
Each Burger King restaurant has a separate contract consistent with the terms of the master contract. Also in 1984, NEXCOM awarded a 10-year master contract to McDonald’s Corporation. This contract, which was recompeted and renewed in 1994, allowed McDonald’s to construct and operate restaurants at more than 40 locations around the world. Both the Burger King and McDonald’s contracts will expire in 2004. Today, other national brands, such as Baskin Robbins, Kentucky Fried Chicken, Pizza Hut, Popeye’s Chicken & Biscuits, Subway, Taco Bell, and Wendy’s, can also be found on military installations. (See app. II for the number of facilities and sales totals for AAFES and NEXCOM fast-food operations.) Fast-food restaurants are generally categorized by their location, size, and/or physical characteristics. A free-standing restaurant is often referred to as a traditional or stand-alone restaurant—it is located in a separate, distinct building with signage and logos that clearly identify its brand. A restaurant located in a food court is often referred to as a non-traditional or in-line restaurant. Free-standing restaurants are usually larger in size and have higher sales volumes than those found in food courts. The Assistant Secretary of Defense (Force Management) is responsible for establishing uniform policies for armed services exchange operations. In that capacity, the Assistant Secretary issued a policy memorandum for name-brand, fast-food operations in January 1988. The policy memorandum, which responded to recommendations from the House Committee on Armed Services, was issued to control the proliferation of fast-food restaurants on military installations and avoid a “fast-food strip” effect, award business to American investors, and ensure that name-brand, fast-food prices on military installations in the United States were comparable to those in communities adjacent to the military installation. 
The memorandum stated that the policy would be strictly followed and any deviations had to be approved in writing by the Assistant Secretary. However, the memorandum provided no criteria for approving a deviation. In addition, the primary armed services regulations governing MWR activities and the exchange services give the secretaries of the military services a stake in prescribing and overseeing the activities that can operate on their facilities, including the method that will be used to operate fast-food restaurants. Our analysis of fiscal year 1998 and 1999 financial data showed that the indirect method of operating name-brand, hamburger restaurants provided greater profitability than the direct method. This was true regardless of whether the restaurants were grouped and analyzed by sales volume, restaurant type (stand-alone or part of a food court), or location (continental United States or overseas). We also projected that if new name-brand, hamburger restaurants were to be built, the indirect method would result in a higher return on investment over a 20-year period. In conducting our analyses, we found that the exchange services had appropriately considered the various types of costs of their fast-food operations, except for overhead. We included overhead costs and the cost of capital in our analyses. For fiscal years 1998 and 1999, NEXCOM’s profits on 64 indirectly operated hamburger restaurants were about 11.5 percent and 11.4 percent, respectively, when measured as a percentage of total sales. Over the same period, AAFES’ profits on 164 and 171 directly operated hamburger restaurants were 7.8 and 5.5 percent, respectively. Table 1 provides the results of our analysis. Although the table does not show major differences in results between fiscal years 1998 and 1999, we have several observations about the profitability of the two alternatives. 
For servicemember morale purposes, AAFES operates a number of unprofitable Burger King restaurants in remote locations and overseas. In fiscal year 1999, for example, 56 of its 171 Burger King restaurants lost a combined $2.7 million. Eliminating these 56 restaurants from our analysis showed that the remaining 115 restaurants had profits of 9.2 percent of restaurant sales—this compares more favorably with NEXCOM’s 11.4 percent for that year. AAFES’ overhead rates (4.9 percent and 5.1 percent of sales) were significantly higher than NEXCOM’s (0.5 percent and 0.6 percent of sales) because of the different operating method. AAFES’ overhead rates captured the numerous support activities needed to manage the large infrastructure, distribution network, and personnel associated with the exchange’s operations. NEXCOM, on the other hand, had only a small number of support personnel to oversee its contracts with McDonald’s. NEXCOM’s revenues under the indirect method were derived from one-time signing bonuses, minimum guarantees, and annual licensing fees as well as commissions on restaurant sales. Signing bonuses and licensing fees accounted for approximately 25 percent of the revenues. Neither AAFES nor NEXCOM had established overhead rates for its restaurant operations. Therefore, we used the AAFES exchangewide rates, which are the rates AAFES applied to its overall operations for each of the 2 fiscal years. In response to our review, NEXCOM computed overhead rates that showed its limited support costs for overseeing its food service contracts. AAFES applied a 10-percent cost of capital to its restaurant operations, which, when measured as a percentage of sales, was 2.9 and 3.0 percent, respectively, for fiscal years 1998 and 1999. Because NEXCOM relies on McDonald’s and its licensed operators to build and periodically renovate the restaurant facilities, it had no capital costs. 
AAFES’ restaurant sales decreased about 4 percent between fiscal years 1998 and 1999 while its operating costs, as a percentage of sales, increased about 2 percent. These changes did not appear to be related to the method used to operate the restaurants. Both AAFES and NEXCOM rely on the military services for certain real property maintenance activities, particularly for repairs to the exterior of the restaurant building and the surrounding property. The exchanges’ financial data, however, did not include these costs, and they were not readily available from the military services. In addition, both exchanges received similar support, which would tend to mitigate the cost impact on relative profitability. Therefore, we did not include them in our analysis. As part of our analysis, we evaluated the profitability of both methods from several perspectives—by sales volume, restaurant type (free-standing or food court), and location (continental United States or overseas). Each analysis showed that the indirect method was more profitable. Profitability by Sales Volume: We arrayed AAFES’ directly operated and NEXCOM’s indirectly operated restaurants by annual sales volumes. As shown in table 2, the profitability of both types of restaurants improved as sales volumes increased. However, NEXCOM’s profits measured as a percentage of sales were higher than AAFES’ in all categories for both fiscal years. For some categories, such as sales over $500,000 but less than $1 million, they were more than twice as high. For example, in fiscal year 1998, AAFES’ restaurants had profits of 3.0 percent of sales while NEXCOM’s restaurants had profits of 6.6 percent of sales. AAFES’ profitability was negatively affected by a large number of smaller restaurants that lost money in fiscal years 1998 and 1999. In 1999, for example, 44 of its 97 restaurants with sales under $1 million lost money. 
As table 2 shows, AAFES’ restaurants with sales of $500,000 or less lost 3.1 percent of sales and 1.7 percent of sales for fiscal years 1998 and 1999, respectively. Profitability by Restaurant Type: Both AAFES and NEXCOM operate traditional or free-standing hamburger restaurants and smaller non-traditional or food court type restaurants. As shown in table 3, free-standing restaurants, which tend to have higher sales volumes, were more profitable than restaurants located in a food court. NEXCOM’s average profits, however, were higher in both fiscal years for each type of restaurant. Profitability by Location: About 60 percent of AAFES’ restaurants and 80 percent of NEXCOM’s restaurants are located within the continental United States. As shown in table 4, NEXCOM’s indirectly operated restaurants were more profitable, regardless of their location. The biggest difference in profitability, however, was in restaurants located outside the continental United States. In fiscal year 1999, for example, the 69 overseas restaurants operated directly by AAFES had profits that averaged about 2 percent of sales, or $21,000 per restaurant. Almost half of its overseas restaurants, which were located in various countries around the world, lost money in 1999. On the other hand, NEXCOM’s 12 overseas restaurants had average profits of about 14 percent of sales, or about $224,000 per restaurant. In addition to our profitability analysis of fiscal year 1998 and 1999 financial data, we performed a 20-year cash flow analysis for a capital investment in a new name-brand, hamburger restaurant. This analysis showed that the indirect method would produce about twice the net cash flow, in current-year dollars, as the direct method. A recent Army consultant’s study reached a similar conclusion. A cash flow analysis is a technique that is sometimes used at the beginning of a project to assess investment alternatives or strategies. 
Net present value techniques can show, in today’s dollars, the relative net cash flow of various alternatives over a long period of time—in the case of our study, 20 years. Simply stated, net cash flow is the amount of dollars that is left after sales or revenues have offset expenses. In general, the greater the net cash flow for a particular investment, the greater the return on the investment. In conducting this analysis, we combined the fiscal year 1998 and 1999 data for both AAFES and NEXCOM and calculated average annual sales, net profit, and depreciation per restaurant for free-standing and food court restaurants. In conducting our analysis, we assumed that the financial data would remain constant over the 20-year period. We also made a number of assumptions about factors such as the frequency of renovations (which require incremental capital investments), inflation rates, and the exchange services’ cost of capital. Using the data and applying the assumptions, we discounted the restaurants’ projected cash flows for a 20-year period. The methodology we used for the analysis is discussed more thoroughly in appendix I. As indicated in table 5, our analysis shows that over a 20-year period the direct method has a significantly lower net cash flow, in today’s dollars, for both free-standing and food court restaurants than the indirect method. This is due primarily to the significant initial investment, shown as $1,025,000 for the free-standing restaurant and $375,000 for the food court restaurant, required to build and equip the facilities and the subsequent periodic capital improvements that are required about every 5 years by AAFES’ contract with Burger King. We also performed a number of other cash flow analyses using alternative assumptions (see app. I). While the bottom-line numbers changed somewhat, the overall results were generally the same. 
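The discounting approach described above can be sketched in a few lines of code. In the sketch below, the initial investments ($1,025,000 free-standing, $375,000 food court), the roughly 5-year renovation cycle, and the 10-percent cost of capital come from this report; the function names and the annual profit and commission figures are hypothetical illustrations, not the exchanges’ actual data.

```python
# Sketch of the 20-year discounted cash flow comparison described above.
# The initial investments, the ~5-year renovation cycle, and the 10 percent
# discount rate come from the report; annual profit and commission amounts
# below are hypothetical placeholders.

def npv(cash_flows, rate):
    """Net present value of a list of annual cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def direct_flows(investment, annual_profit, renovation, years=20, cycle=5):
    """Direct method: the exchange builds, operates, and periodically renovates."""
    flows = [-investment]  # year 0: capital outlay to build and equip
    for year in range(1, years + 1):
        cf = annual_profit
        if year % cycle == 0 and year < years:  # periodic capital improvement
            cf -= renovation
        flows.append(cf)
    return flows

def indirect_flows(annual_commission, years=20):
    """Indirect method: no capital outlay; commissions on restaurant sales."""
    return [0] + [annual_commission] * years

rate = 0.10  # AAFES' stated cost of capital
print(f"Direct, free-standing: ${npv(direct_flows(1_025_000, 200_000, 250_000), rate):,.0f}")
print(f"Indirect:              ${npv(indirect_flows(120_000), rate):,.0f}")
```

The initial investment and the renovation outlays are discounted alongside the annual profits, which is why the direct method’s net cash flow trails the indirect method’s even when its annual profit is larger.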
The Army, which plans to open name-brand, fast-food restaurants at some of its MWR activities, sponsored a study in April 2000 to determine which method—direct or indirect—would provide the greatest returns on its investment. This study, which was conducted by a consulting firm in the food and hospitality industry, used the net present value technique to project cash flows for five different name-brand food types—hamburgers, chicken, pizza, Mexican, and sandwiches. The study was based on sales, cost, overhead, and profit data from AAFES, NEXCOM, and industry sources. It applied the data to a 10-year investment period and concluded that the indirect method provided more cash flows for both free-standing and food court hamburger restaurants and generally was the best value for the Army’s MWR activities. Our analysis of AAFES and NEXCOM financial data showed that the exchanges had, with only one exception, considered the various types of costs associated with their fast-food operations. The one exception was overhead. Neither AAFES nor NEXCOM used overhead costs when determining the profits associated with its individual restaurants. Instead of assigning overhead costs to its fast-food operations, AAFES calculated and applied an exchangewide overhead rate to its total operations to determine exchangewide profits. In doing this, AAFES accumulated its general and administrative costs from local exchanges and regional and headquarters operations. Its overhead rates were 4.9 percent of sales and 5.1 percent of sales for fiscal years 1998 and 1999, respectively. We compared the rates with overhead rates of fast-food restaurants in the private sector and found that the AAFES rates were reasonable. We also reviewed the work completed by AAFES’ internal auditors that related to reviewing the exchangewide overhead rates. We concluded that the methodology used by the internal auditors to review the rates was reasonable. 
Before using the exchangewide rates for our analysis, however, we asked AAFES food service and financial management officials if they had an overhead rate for just fast-food operations. They responded in writing, as well as in several follow-up discussions on this topic, that they did not have such a rate. They told us that previous efforts to develop one had proved too difficult because of the way costs were accumulated and accounted for. Accordingly, we used the 4.9 percent and 5.1 percent rates in our analysis of 1998 and 1999 financial data. However, after completing our work at AAFES, representatives of AAFES informed us they had developed overhead rates specifically for name-brand, hamburger restaurants operating under the direct method. The rates were 3.3 percent for both fiscal years 1998 and 1999. We discussed the approach AAFES used to develop these new rates but were unable to readily assess their accuracy because the information AAFES provided was not sufficient to support the differences in overhead costs between food service direct operations and exchangewide operations. We assessed the reasonableness of the new rates by comparing them with the rates of eight food service companies. All of the companies were included in Fortune Magazine’s list of top 10 food service companies based on revenues. This comparison showed that the new rates were substantially lower than those used by the food service companies included in our analysis. Based on the results of this comparison, we did not use the new rates in our detailed analysis shown in table 1. However, if we had used the revised rates, the profitability of the direct method would remain less than that of the indirect method, but the differences would not be as great. We found that NEXCOM had not considered overhead costs when assessing the financial results of its fast-food operations. As a result of our work, the exchange developed overhead rates for its indirect fast-food operations. 
The rates developed were 0.5 percent and 0.6 percent of the franchisees’ sales for fiscal years 1998 and 1999, respectively. We used these rates in our analyses. Before using the rates, however, we reviewed the methodology NEXCOM used to develop them. We found that the methodology and costs included in the overhead rates appeared reasonable. When compared to AAFES’ overhead rates, NEXCOM’s rates appeared small. This condition exists because the indirect method requires significantly fewer people to manage and oversee fast-food operations. For example, to support such operations, NEXCOM had to negotiate and oversee several contracts, while AAFES had to manage all of the operations of about 170 restaurants. Exchange officials identified a number of factors, other than profitability, that are important when deciding between the direct and indirect methods. As shown in table 6, we grouped these factors into six categories: financial risk, customer service, employment opportunities, management control, operational risk, and investment opportunities. The relative importance of individual factors might vary depending on the circumstances involved in selecting an operating method for a planned restaurant. However, neither exchange used a standard approach or methodology to determine their relative importance or to evaluate them along with profitability considerations. Indirect Method Minimizes Financial Risk: Under the indirect method, the name-brand, fast-food company builds the restaurants and assumes the financial risk of recovering its capital investments and operating at a profit. The exchange service, in this situation, has no capital investment and generally receives a commission on restaurant sales, regardless of whether the restaurant makes a profit. Under the direct method, the opposite is true. 
The exchange service provides the capital for building and periodically updating the restaurant, assumes all financial risks of operations, and generally pays the name-brand company an annual licensing fee and a royalty or commission on its annual sales. Both Methods Provide Customer Service: Both methods can be used domestically and overseas and require the exchange services to offer menus and prices equivalent to those in the private sector. Under the direct method, however, an exchange service can establish restaurants in less profitable, remote locations in order to boost military members’ morale. For example, AAFES officials told us that soon after the military was deployed to Bosnia for the peacekeeping mission, it opened three hamburger restaurants. It also opened 14 food service outlets in the Balkans and Kosovo, including 4 hamburger restaurants, and it has responded to many other emergencies or humanitarian deployments over the years. AAFES believes this would not have been possible under the indirect method because, in its view, name-brand companies would not have been willing to operate in such potentially dangerous or unprofitable locations. In fact, NEXCOM officials provided us information showing that several restaurants operating under the indirect method at Navy installations had been closed during the last several years due to low sales. Nevertheless, NEXCOM officials believe the indirect method of operation adequately meets the deployment needs of the Navy, which are fundamentally different from those of the Air Force and the Army. Moreover, NEXCOM officials believe some of the name-brand companies that support its indirect operations are capable of providing emergency service as well as service to remote locations around the world. Another advantage of the direct method is that customers do not have to pay sales tax. 
This advantage provides varying degrees of savings to the customer, depending on the state and local taxes applicable at each location. Direct Method Provides Greater Employment Opportunities for Military Dependents: Both methods provide employment opportunities for military dependents. However, because AAFES has control over the hiring practices at its directly run restaurants, it gives military spouses and other family members employment preferences. According to AAFES, about half of its work force are military dependents, some of whom have worked for years with AAFES as they have moved from one installation to another. These employees retain their benefits (medical, sick leave, etc.) when they move as long as they continue to work for AAFES. Direct Method Provides More Control Over Operations: Under the direct method, the exchange services have more control over operations. AAFES officials said that this control gives them the ability to (1) establish restaurant hours that best support military needs, (2) address customer service issues consistently throughout the AAFES restaurant network, and (3) support the Army and Air Force mission objectives, which involve deploying personnel in war zones or other remote areas throughout the world. With its large supply and distribution network and access to resources, it can respond quickly to emergency situations almost anywhere in the world. On the other hand, the indirect method requires practically no infrastructure because the name-brand company and/or its restaurant operator handles all construction and operating issues. NEXCOM personnel agreed that the indirect method gave the exchange limited operating control of its restaurants but did not think such control was necessary because it does not provide fast-food operations on ships. Instead, most of its restaurants are located at large seaports or bases in the United States and overseas and generally operate in a normal commercial environment. 
Indirect Method Limits Operational Risks: Under the indirect method, the food service company and its restaurant operators are responsible for achieving sales goals, procuring and managing product inventories, maintaining the physical plant and equipment, developing promotions and marketing strategies, planning and updating menus, managing the food preparation process (including controlling the size of food portions), hiring and training all personnel, and assuming losses associated with breakdowns in internal controls. In addition, restaurant operators assume the primary risk for workers’ compensation claims and litigation associated with such things as accidents, harm caused by products, and employee injuries. Under the direct method, these responsibilities rest primarily with the exchange service. Indirect Method Promotes Private Sector Investment Opportunities: The indirect method provides investment opportunities for private sector citizens, who are likely to be members of the local business community, and reduces concerns about government encroachment into private-sector functions, one of the objectives of DOD’s 1988 fast-food policy. AAFES officials told us that requests to build name-brand, fast-food restaurants on a military installation have sometimes been denied because an existing franchise restaurant would have been adversely affected. The indirect method is also consistent with DOD’s more recent 1998 policy to consider public-private ventures as an alternative for enhancing business activities that support MWR programs. This policy, which involves the indirect method of operation, calls for the exchange services to consider public-private ventures as an alternative source to meet capital requirements that exceed $1 million. NEXCOM and AAFES officials told us that they need flexibility to determine how best to meet their mission objectives and satisfy the military services’ needs. 
Therefore, they are opposed to the adoption of a single method of operating name-brand, fast-food restaurants. Even NEXCOM officials, who use the indirect method to operate most of the exchange’s fast-food restaurants, said situations exist where the direct method makes more sense. Neither exchange, however, used a business case analysis that weighed the factors identified in table 6 against profitability considerations before choosing a particular operating method. DOD’s policy on operating name-brand, fast-food restaurants established a preference for using the indirect method within the continental United States and the direct method overseas. The exchanges, however, have not always followed this policy because, in part, DOD has not provided guidance for evaluating operating methods or criteria for determining when a deviation from the policy might be justified. In general, each exchange service has adopted the operating method that it believes best fits its particular circumstances. While the exchanges may need flexibility to choose an operating method that best meets their mission requirements, DOD may be missing opportunities to reduce the exchanges’ operating risks and increase the amount of funds provided to MWR activities because DOD’s policy is not clear and department officials have provided limited management oversight. In early 1988, when DOD issued its current policy memorandum on name-brand, fast-food operations, its stated goals were to control the proliferation of fast-food restaurants on military installations, award business to American investors, and ensure that restaurants on military installations charged prices that were comparable to those in adjacent communities. To help achieve these goals, the policy expressed a “preference” for using the indirect method within the continental United States and the direct method overseas. 
The policy also stated that the requirements would be strictly followed and any deviations had to be approved in writing by the Assistant Secretary (Force Management). However, the policy memorandum provided no criteria for approving a deviation. The policy memorandum also stated that construction of fast-food restaurants would continue to be reviewed as part of the annual nonappropriated fund construction program. Officials in the Office of the Assistant Secretary told us that the policy is still the primary guidance for determining how name-brand, fast-food restaurants should be operated. They acknowledged that the policy lacks criteria for determining when a deviation from the policy should be approved. They also stated that, for the most part, they have not been actively involved in overseeing how well the exchanges were adhering to the policy. They pointed out, however, that, in June 1998, the Department issued a new instruction that bears on this issue. This instruction calls for the military secretaries and exchange services to consider public-private ventures as an alternative way of meeting capital requirements when the requirement exceeds $1 million. Each public-private venture is to be supported by an economic analysis. In addition, whenever a venture involves construction financed by the private sector, an overseas fast-food restaurant, or liabilities to the government in excess of $500,000, it is to be reviewed by DOD policy officials. These officials pointed out that they have taken a more active role in overseeing compliance with the instruction. Thus far, however, the instruction has had minimal application to the exchange services’ restaurant operations since contracts for most of these restaurants existed before the instruction was issued. DOD policy officials’ recent reviews have applied, for the most part, to public-private ventures associated with the military services’ MWR activities, rather than the military exchanges. 
Consequently, it is somewhat unclear if and how the instruction will affect the Department’s long-standing name-brand, fast-food policy. Since name-brand, fast-food restaurants first appeared on military installations in the 1980s, each exchange has tended to adopt an operating method that it believes best fits its overall mission objectives, operating philosophies, and access to capital resources. AAFES, for example, has a large support infrastructure, access to investment capital, and a long history of directly operating the majority of its business operations. It seldom deviates from the direct method, either within the United States or overseas. While this approach appears to be in conflict with DOD’s preference for using the indirect method in the continental United States, AAFES officials believe they are following DOD policy and congressional guidance. In explaining this situation, they noted that both DOD and the Congress had annually approved construction projects for directly operated restaurants in the United States. This was, in their view, an indication that AAFES had the flexibility and approval to deviate from the policy without asking for a formal waiver. NEXCOM, on the other hand, does not have a large support infrastructure and prefers not to invest capital in such restaurants. It has, therefore, adopted the indirect method of operating its restaurants, even in overseas locations. While exchange personnel told us that they generally prepared an analysis prior to building a new restaurant, the analysis focused primarily on what type and/or size of restaurant would best meet an installation’s requirements (free-standing or food court) and whether it would be profitable under the specific circumstances. It did not include an analysis of the relative benefits—including profitability and other factors such as those discussed in this report—of the direct and indirect methods of operating the restaurant. 
As a result, the exchanges are not conducting the type of business case analysis that we believe would help them select and justify the operating method that best balances restaurant profitability with other factors. The Department may have missed opportunities to increase profits for MWR activities because its policy for operating name-brand, fast-food restaurants has not been clear and policy implementation has not been subject to consistent management oversight. Consequently, the exchange services have not always had a compelling reason to analyze the financial and operational benefits of the two operating methods. While our analyses clearly showed that the indirect method produced greater profitability in the recent past and has the potential to generate higher profits if new restaurants are built, other factors can be important when deciding on which method to use. Nevertheless, the exchanges do not systematically develop a business case analysis that considers profitability and other factors before deciding which operating method to use. To address this situation, DOD needs a clear policy—one that includes a standard approach or methodology for selecting a name-brand restaurant’s operating method. A rigorous financial analysis and consideration of other factors would be part of the methodology. In addition, the policy needs to specify criteria that will help DOD evaluate when deviations from any preferred method are justified. Finally, the policy needs to address how the Department’s new instruction on public-private ventures bears on its fast-food policy. Having a sound name-brand, fast-food policy is likely to become increasingly important to DOD because the exchange services’ contracts with major name-brand companies will expire in 2004 and the exchanges will have to decide which method will be used to continue providing name-brand fast food. 
To properly weigh profitability and other factors in selecting operating approaches for name-brand, fast-food operations on military installations, we are recommending that the Under Secretary of Defense (Personnel and Readiness), in conjunction with the secretaries of the military services, revise the Department’s name-brand, fast-food policy by (1) incorporating a standard methodology, to be used by the exchange services, that considers both profitability and other factors in identifying the most appropriate method for operating fast-food restaurants; (2) including criteria for approving deviations from any preferred operating method specified in the revised policy guidance; and (3) clarifying how the instruction on public-private ventures affects the policy. We also recommend that the exchange service commanders ensure that the standard methodology is used before they renew a restaurant contract or open a new restaurant. In commenting on a draft of this report, the Assistant Secretary of Defense (Force Management Policy) concurred with its conclusions and recommendations. The Assistant Secretary stated that the Department will revise its name-brand, fast-food policy by (1) including a methodology that evaluates both economic and noneconomic factors when selecting an operating method, (2) including criteria and procedures for approving waivers from using the preferred operating method, and (3) clarifying how its instruction on public-private ventures applies to its policy. DOD expects to issue updated policy by November 1, 2001. DOD’s comments are in appendix III. We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense (Personnel and Readiness); the Secretaries of the Air Force, the Army, and the Navy; the Commander, AAFES; the Commander, NEXCOM; the Director, Office of Management and Budget; and interested congressional committees and members. We will also make copies available to others upon request. 
If you or your staff have questions concerning this letter, please contact us on (202) 512-8412. Staff acknowledgments are listed in appendix IV. To develop an understanding of the military exchanges’ name-brand, fast-food operations, we reviewed the history of these operations in the Department of Defense (DOD). We met with management officials from the Office of the Under Secretary of Defense (Personnel and Readiness) responsible for DOD’s name-brand, fast-food policy and discussed the Department’s implementation of its policy. We also met with senior management officials from the Army and the Air Force responsible for food services that supported morale, welfare, and recreation (MWR) activities to obtain their views on the direct and indirect methods of operating fast-food restaurants. We reviewed applicable DOD policies and regulations, related policy memorandums, and reports related to exchange service fast-food operations. We met with senior executives and managers responsible for financial management and food services at the Army and Air Force Exchange Service (AAFES) and the Navy Exchange Service Command (NEXCOM) headquarters to discuss fast-food operations and review documentation, financial reports, internal and external audit reports, and contract data. To determine which method of operating name-brand, fast-food restaurants was more profitable, we obtained and analyzed detailed financial information from AAFES and NEXCOM for their fiscal year 1998 and 1999 name-brand, hamburger sales; associated costs and expenses; commissions; and related data. The financial data for these years was the most current data available at the time of our review. The data involved primarily Burger King restaurants operated by AAFES and McDonald’s restaurants operated by either McDonald’s Corporation or its licensed operators (also called concessionaires) under NEXCOM’s purview. 
Because hamburger sales represented over 50 percent of the exchanges’ name-brand, fast-food sales for these years, we focused our analysis on hamburger restaurants, which provided a sound basis for comparing the direct and indirect methods of operation. We analyzed the overall profitability of the restaurants operated under each method. For the direct method, net profit represented a restaurant’s total revenues less its operating costs and overhead costs. Economic earnings represented net profit less the opportunity cost associated with invested capital (also known as the cost of capital). For the indirect method, net profit and economic earnings represented the revenues received from the name-brand company (sales commissions, signing bonuses, and licensing fees) less overhead costs. Operating costs and the cost of capital were not applicable to the indirect method. Besides assessing overall profitability, we also analyzed profitability by sales volume, restaurant type (free-standing and food court), and general location (continental United States or overseas) to isolate unique conditions that might exist under one of the methods of operation. In this report, our use of the terms “profit” or “profitability” refers to economic earnings for the direct method of operation and net profit for the indirect method of operation, which are expressed as a percentage of restaurant sales. To determine if the exchange services considered all of the costs of their name-brand, fast-food operations, we reviewed financial reports, general ledger balances, and other data provided to us by each exchange service. However, we did not verify the accuracy and reliability of the data submitted to us by AAFES and NEXCOM management and express no opinion on their reliability. Both exchanges are audited annually by independent public accountants. 
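The profitability measures defined above can be sketched in code. The function names and all figures below are our own illustrative assumptions, not the exchange services' actual data:

```python
def direct_profitability(revenues, operating_costs, overhead_costs,
                         cost_of_capital_charge):
    """Direct method: economic earnings (net profit less the cost of
    capital charge) expressed as a percentage of sales."""
    net_profit = revenues - operating_costs - overhead_costs
    economic_earnings = net_profit - cost_of_capital_charge
    return 100.0 * economic_earnings / revenues


def indirect_profitability(commissions, signing_bonuses, licensing_fees,
                           overhead_costs, restaurant_sales):
    """Indirect method: net profit (commissions, signing bonuses, and
    licensing fees less overhead; operating costs and the cost of
    capital do not apply) expressed as a percentage of restaurant sales."""
    net_profit = (commissions + signing_bonuses + licensing_fees
                  - overhead_costs)
    return 100.0 * net_profit / restaurant_sales
```

For example, a hypothetical direct-method restaurant with $1,000 in sales, $700 in operating costs, $50 in overhead, and a $100 cost of capital charge would show profitability of 15 percent of sales.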
For the fiscal years we reviewed (1998 and 1999), NEXCOM received an unqualified opinion and AAFES received an “except for” qualified opinion on their financial statements. We read the audit opinions for each exchange service to determine if there were any material weaknesses that came to the auditors’ attention that would indicate the financial data were unreliable. Except for AAFES not recording the cost of a defined benefit pension plan in accordance with Statement on Financial Accounting Standard No. 87, Employer’s Accounting for Pensions, material weaknesses were not reported by the exchange services’ independent public accountants. We discussed this issue with AAFES management and concluded that it did not have a bearing on the financial data we used in our analysis. To determine the reasonableness of the exchanges’ overhead rates, we reviewed the methodologies the exchange services used to capture and allocate overhead costs. We met with management and internal audit representatives of each exchange service to review steps they had taken to validate the methodologies and costs included in the overhead rates. We also compared the exchange services’ overhead rates to rates reported by leading name-brand, food service companies in their annual financial reports. NEXCOM developed its overhead rate after we inquired about overhead costs related to name-brand fast foods. AAFES had corporatewide overhead rates of 4.9 and 5.1 percent of sales for 1998 and 1999, respectively, which we used in our financial analysis. Before using them, however, we met with AAFES financial management and food service officials to determine if the exchange service had or could develop a rate for food operations in general or for restaurants operating under the direct method. AAFES officials told us they were unable to develop an overhead rate for food operations. 
After completing our work at AAFES, representatives of AAFES informed us they had developed new overhead rates specifically for name-brand, hamburger restaurants operating under the direct method. The rates were 3.3 percent for both fiscal years 1998 and 1999. We discussed the approach AAFES used to develop these rates. We also compared AAFES’ new overhead rates with the rates of eight food service companies that included a range of name-brand companies. All of the companies were included in Fortune Magazine’s list of top 10 food service companies based on revenues. We obtained these food service companies’ overhead rates from their published financial statements. This comparison showed that AAFES’ new rates of 3.3 percent were substantially lower than those used by the food service companies included in our analysis. Rates for these companies ranged from 3.8 percent to 11.8 percent, with the mid-range rate being about 6.3 percent. Based on the results of this comparison, we did not use AAFES’ new rates in our detailed analysis shown in table 1. However, if we had used the new rates, AAFES’ profitability as a percentage of sales would increase from 7.8 percent to 9.4 percent in fiscal year 1998, and from 5.5 percent to 7.3 percent in fiscal year 1999, still less profitable than the indirect method. The cost of capital applies to AAFES because, under the direct method, it builds and equips its restaurants. When deciding whether to build and operate a restaurant, an exchange needs to evaluate the costs of initial construction, initial equipment and fixtures, and subsequent scheduled renovations (to the extent they are known). These costs, generally referred to as capital costs, are usually paid for by AAFES through borrowing or through cash that is available from its profits. 
Also, other potential uses of the capital should be considered in the evaluation to ensure that committing capital to building the restaurant is a sound and defensible financial decision. These costs may include implicit costs, such as opportunity costs, that would not appear on an entity’s financial statements but should be considered when evaluating profitability and making capital investment decisions. The ability of a restaurant to recover these costs will depend on the expected profitability of the restaurant as well as financial and operational risks associated with the restaurant’s operations. Some companies and organizations, such as AAFES, establish a cost of capital rate, normally expressed as a percentage, to evaluate their existing and planned capital projects. AAFES used a 10-percent cost of capital during both fiscal years 1998 and 1999 and applied this rate to capital investment decisions. In other words, AAFES expects to earn at least 10 percent on its capital investments. With respect to assessing profits from its fast-food operations, AAFES also applied this rate to the average cost of its supply inventories and the undepreciated value (net book value) of its buildings, equipment, and subsequent improvements and replacements. In our profitability analysis, we applied a cost of capital charge to net profits to determine economic earnings. Our method of calculating the cost of capital charge was consistent with the method AAFES used. To test the reasonableness of AAFES’ 10-percent cost of capital rate, we calculated a cost of capital rate that could apply to a private sector company in the food services industry. We used a standard approach found in corporate finance textbooks, the weighted average cost of capital, to calculate a rate that would be appropriate for firms in the name-brand, hamburger industry. 
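A weighted average cost of capital of the kind referred to above is conventionally computed by weighting the cost of equity and the after-tax cost of debt by each component's share of total capital. The sketch below is a generic textbook formulation with illustrative inputs, not the actual figures used in this review:

```python
def weighted_average_cost_of_capital(equity_value, debt_value,
                                     cost_of_equity, cost_of_debt,
                                     tax_rate):
    """Textbook WACC: each financing component's cost is weighted by
    its share of total capital; interest on debt is tax-deductible, so
    the cost of debt is reduced by the tax rate."""
    total_capital = equity_value + debt_value
    equity_share = equity_value / total_capital
    debt_share = debt_value / total_capital
    return (equity_share * cost_of_equity
            + debt_share * cost_of_debt * (1.0 - tax_rate))


# Illustrative inputs only (a hypothetical firm, not actual company data):
wacc = weighted_average_cost_of_capital(
    equity_value=800.0, debt_value=200.0,
    cost_of_equity=0.12, cost_of_debt=0.07, tax_rate=0.35)
```

With these sample inputs the rate comes out near 10.5 percent, in the same general range as the 10-percent rate discussed above, though the result depends entirely on the inputs chosen.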
We also used financial data from Value Line Publishing for two major food service corporations, McDonald’s and Wendy’s, to make our calculation. Because the financial situation of every business entity will be different, we did not expect our calculation to produce the same rate that AAFES used, but we did want to assure ourselves that AAFES’ reported cost of capital was reasonable for firms in the fast-food, hamburger business. The cost of capital rate we calculated was close enough to AAFES’ to assure ourselves that it was appropriate to use it in our calculation. We also conducted a 20-year, net present value analysis of future cash flows for a capital investment in a new name-brand, fast-food hamburger restaurant for both methods. Our analysis was based on fiscal year 1998 and 1999 sales and cost data provided by the exchange services. We applied an investment planning tool, called net present value, which measures both the magnitude and timing of projected cash flows and discounts the expected annual cash flows by applying the time value of money to reflect their value today. As a result, the analysis shows, in today’s dollars, the financial return that an investment in such a restaurant operated under each method is expected to contribute to an exchange service’s profits. We chose 20 years because this is generally the useful life of the facilities and equipment. We calculated a per restaurant average for sales and cost of operations. For the direct method, we added depreciation expenses to net profit to arrive at the annual positive cash flow. We included depreciation in the cash flow because, although it is an expense that is considered in arriving at net income, it does not represent an outlay of cash. The net present value technique also calls for depreciation to be included in the cash flows. We used initial construction and equipment costs provided by AAFES. 
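The 20-year net present value analysis described above can be sketched as follows. Each year's cash flow is net profit plus non-cash depreciation, reduced by a renovation outlay every fifth year; the figures, the default discount rate, and the assumption that no renovation occurs in the final year are our own illustrative choices, not the exchanges' data:

```python
def net_present_value(initial_cost, annual_net_profit, annual_depreciation,
                      renovation_cost, years=20, rate=0.075,
                      renovation_cycle=5):
    """Discount each year's cash flow (net profit plus non-cash
    depreciation, less any scheduled renovation outlay) back to today
    and net it against the up-front construction and equipment cost."""
    npv = -initial_cost
    for year in range(1, years + 1):
        cash_flow = annual_net_profit + annual_depreciation
        if year % renovation_cycle == 0 and year < years:
            cash_flow -= renovation_cost  # periodic facility/equipment renewal
        npv += cash_flow / (1.0 + rate) ** year
    return npv
```

Under the indirect method, the same function would apply with no initial cost or renovation outlays and with commissions less overhead as the annual cash flow, which helps explain why that method's net present value is not weighed down by capital spending.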
The required incremental capital investments were based on historical data also provided to us by AAFES. We conducted several analyses using the net present value technique. First, we combined the fiscal year 1998 and 1999 sales data obtained from each exchange service and calculated per restaurant average sales over the 2-year period. This analysis is presented in the body of the report. We also conducted several other analyses. They included analyses of (1) fiscal year 1998 data, (2) fiscal year 1999 data, and (3) a pro forma analysis that used equivalent sales for each exchange. The pro forma analysis was intended to neutralize the difference in sales volumes of AAFES and NEXCOM. Our analysis was also based on a number of assumptions. For example, we assumed that combining or averaging 1998 and 1999 financial data would be representative of sales and costs for each year in the 20-year period. We also used a facilities and equipment renovation cycle of every 5 years, which is consistent with the information provided by AAFES and NEXCOM. For each method, we analyzed free-standing and food court restaurants separately because of significant differences in their capital costs, sales volumes, and cash flows. The discount rate we used to calculate the net present value figures was based on AAFES’ cost of capital, which was 10 percent. Because we needed a real (inflation-adjusted) cost of capital, we adjusted AAFES’ rate by subtracting projected future inflation. We used the March 2001 Blue Chip Economic Indicators, which are averages of the projections of many major economic forecasters, to derive a long-term inflation forecast of the Consumer Price Index (for all urban consumers). The long-range forecast was about 2.5 percent. The Congressional Budget Office and the Office of Management and Budget were also forecasting around 2.5 percent in their latest long-range projections for this price index. 
We subtracted the long-term inflation forecast of 2.5 percent from AAFES’ 10-percent cost of capital to derive a real cost of capital of 7.5 percent, which we used in our cash flow analysis. We also did a sensitivity analysis for the inflation forecast with two other scenarios to see if this would change our results. We assumed the inflation rate could be as low as 2 percent and as high as 3 percent, which would change the real cost of capital to 8 percent and 7 percent, respectively. Under these two scenarios, our conclusions did not change. We also projected the results beyond 20 years to determine when AAFES’ total net cash flows for a new name-brand, hamburger restaurant would break even with and begin to exceed NEXCOM’s. We knew this might eventually happen because AAFES’ annual net cash flows exceeded NEXCOM’s, except in the years that involved additional capital investment for required renovations. This analysis showed that, before the annual net cash flows were discounted, AAFES’ cash flows would not begin to exceed NEXCOM’s until after the 35th year of operation for a food court restaurant and after the 80th year for a free-standing restaurant. If the cash flows were discounted, it would take longer for AAFES’ net cash flows to exceed NEXCOM’s. To determine if a single method of operating name-brand, fast-food restaurants would be more beneficial to DOD when factors other than profitability were considered, we obtained documentation related to this issue from the exchange services. We also interviewed officials in the Office of the Under Secretary of Defense (Personnel and Readiness) and exchange service representatives to obtain their views on this subject. We categorized DOD officials’ written and oral responses under the general topics of financial risk, customer service, employment opportunities, management control, operational risk, and investment opportunities. 
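Two calculations described above, the real (inflation-adjusted) discount rate and the undiscounted break-even projection, can be sketched as follows. The function names and all restaurant figures are illustrative assumptions, not the report's actual data:

```python
def real_cost_of_capital(nominal_pct, inflation_pct):
    """Real rate, in percent, by simple subtraction of forecast
    inflation from the nominal rate (the approach described above)."""
    return nominal_pct - inflation_pct


def breakeven_year(initial_cost, direct_annual, renovation_cost,
                   renovation_cycle, indirect_annual, max_years=100):
    """First year in which cumulative (undiscounted) net cash flow under
    the direct method, which bears the construction cost and periodic
    renovation outlays, exceeds the indirect method's; None if that never
    happens within max_years."""
    direct_total = -initial_cost
    indirect_total = 0.0
    for year in range(1, max_years + 1):
        direct_total += direct_annual
        if year % renovation_cycle == 0:
            direct_total -= renovation_cost  # scheduled renewal outlay
        indirect_total += indirect_annual
        if direct_total > indirect_total:
            return year
    return None
```

With a 10-percent nominal rate and a 2.5-percent inflation forecast, real_cost_of_capital(10.0, 2.5) yields the 7.5-percent real rate used above; the 2- and 3-percent sensitivity scenarios yield 8 and 7 percent.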
We also met with officials of the Marine Corps Community Services office, which manages all MWR activities for the Marine Corps, to obtain their views on name-brand, fast-food operations. We also obtained documentation related to the number of fast-food restaurants located on Marine Corps installations and their sales and costs for fiscal years 1998 and 1999. Although we limited our analysis to AAFES and NEXCOM, we did use some of the Marine Corps data in the background section of this report. Our methodology has some limitations. First, the financial analysis was based on historical data that may or may not represent future market conditions, operating efficiencies, or the way name-brand, fast-food operations will be carried out in the future. Second, our analysis did not assess the overall tax implications of using the direct and indirect methods. Presumably, the indirect method would provide tax revenues to the government because concessionaires’ profits are subject to federal taxes and the direct method would also provide some tax revenues because royalties paid by an exchange service to the franchiser would also be taxable. Lastly, our financial analysis considered only one food concept, hamburgers, and may not be appropriate to other food concepts such as chicken and pizza. Our work was performed at the Office of Force Management Policy, Undersecretary of Defense (Personnel and Readiness) in Washington, D.C.; AAFES headquarters in Dallas, Texas; NEXCOM headquarters in Virginia Beach, Virginia; the Food and Hospitality Branch, Marine Corps Community Services, United States Marine Corps at Quantico, Virginia; the Army Community and Family Support Center in Alexandria, Virginia; and the Air Force Combat Support and Community Services Office, in Washington, D.C. We also met with representatives of the Burger King Corporation located in Miami, Florida, and McDonald’s Corporation located in Oak Brook, Illinois. 
We performed our work from September 2000 through May 2001 in accordance with generally accepted government auditing standards. Appendix II: Fiscal Year 1999 Inventory of AAFES and NEXCOM Fast-Food Restaurants Data was not available from NEXCOM. NEXCOM officials told us that this number is likely insignificant. Cherry Clipper, Eric Essig, Cleggett Funkhouser, James Fuquay, James Hatcher, Charles Perdue, Bob Preston, Jerry Thompson, and John Van Schaik made key contributions to this report. This glossary is provided for reader convenience in understanding terms as they are used and applied in this report, not as authoritative or complete definitions. A percentage paid on gross sales either as a flat percentage rate or a graduated rate based on sales brackets identified in the contract between the exchange and the franchise. A food service provided under contract to provide any segment of food service, either branded or non-branded, at a given installation in a permanent structure or temporary unit (i.e., mobile unit or kiosk). The opportunity cost (or economic cost) associated with alternative uses for invested capital of comparable risk. Includes funds invested in buildings, equipment (including periodic renovations and upgrades) and inventory. The operation of either non-branded or branded food service staffed by an exchange service’s direct hire associates. The exchange is responsible for providing/building and maintaining its own facilities, inventory, equipment, utilities, financial records, and personnel. Method of measuring the cash inflows and outflows of a capital investment or project as if the flows occurred at a single point in time so that they can be appropriately compared. Because the method considers the time value of money, it is usually the best method to use for evaluating long-term investment decisions. Net profit less the opportunity cost, also referred to as the cost of capital, associated with invested capital. 
A restaurant chain, either nationally or regionally recognized, providing a standardized system of policies, procedures, marketing/advertising schemes, logos, trademark, source of supply, source of equipment, and access to the franchise contracts. The operation of either non-branded or branded food service by a concessionaire or third-party contractor via a contract with an exchange service. A term for the legally binding agreement between a franchisee and a franchiser. An amount, usually paid on a per site basis, for the right to operate a concession at awarded site(s). The fee is remitted to the exchange service prior to the sites’ or facilities’ availability/operational date. Refers to a food service concept that is national (more than 10 states), regional (less than 10 states), or in-house (operated only with a given company’s units). A nationally recognized fast-food restaurant chain that operates in more than 10 states. The dollars that are left after sales or revenues have offset expenses. The dollars can be expressed at current value or at a discounted value, if the time value of money is considered. A discounted cash flow technique that calculates the expected net monetary gain or loss from a project by discounting all expected future cash inflows and outflows to the present point in time, using a specified rate of return. Total sales/revenues less operating costs and overhead costs. The cost of goods sold and operating expenses, including depreciation. Refers to economic earnings for the direct method of operation or net profit for the indirect method of operation and is expressed as a percentage of restaurant sales. Under the direct method, profitability represents a restaurant’s total revenues less its operating costs, overhead costs, and the opportunity costs associated with invested capital (the cost of capital). 
Under the indirect method, profitability represents the revenues (sales commissions, signing bonuses, and licensing fees) received from the name-brand company less the exchange service’s overhead costs. An agreement between a DOD nonappropriated fund activity, such as an exchange service, and a non-federal entity under which the non-federal entity provides goods, services, or facilities to authorized MWR activities and exchange patrons. The non-federal entity may provide a portion or all of the financing, design, construction, equipment, and staffing associated with the activity. Under the direct method, revenue includes restaurant sales plus other income. Other income is primarily the proceeds from selling surplus equipment and additional revenue realized in overseas locations from foreign currency conversions at the point of sale. Revenues under the indirect method are sales commissions based on restaurant sales plus licensing fees and signing bonuses. Gross restaurant or food sales less all applicable taxes and coupon redemptions recorded at the point of sale. A lump sum payment made to an exchange service by a concessionaire or a third-party contractor at the time a contract is signed.
The military exchange services operate a wide range of retail activities, such as department stores, florist shops, barber and beauty shops, gas stations, and restaurants. Hamburger restaurants represent a major segment of the exchange services' name-brand, fast-food sales. The exchange services use either a direct or an indirect method to operate these restaurants. Under the direct method, the exchange service enters into a franchise agreement with a name-brand company to sell its product on a military installation. As the franchisee, the exchange service builds and operates the restaurant and directly employs and trains the personnel. In turn, the exchange service receives all of the revenues and profits and usually pays the company a licensing fee plus a percentage of the restaurant's sales. Under the indirect method, the exchange service contracts with a name-brand company that, in turn, builds the restaurant and either operates it as a company restaurant or provides a licensed operator. The company or its licensed operator hires, trains, and pays the restaurant personnel and usually pays annual fees and commissions to the exchange service on the basis of the restaurant's sales. Under this agreement, the exchange service receives a percentage of the restaurant's annual sales; annual licensing fees; and, in some cases, a signing bonus or minimum guaranteed commissions. GAO's analysis of fiscal year 1998 and 1999 financial data from the Army and Air Force Exchange Service and the Navy Exchange Service Command showed that the indirect method of operating name-brand hamburger restaurants was more profitable than the direct method, regardless of the restaurants' sales volume, restaurant type (free-standing or part of a food court), or location. GAO's investment analysis projected that if new name-brand, hamburger restaurants were to be built, the indirect method would provide a greater return on investment over a 20-year period. 
Other factors important in choosing between direct and indirect methods include financial and operating risks, customer service issues, employment opportunities for military dependents, and management control of a restaurant's operations. Although the Department of Defense's (DOD) policy on name-brand, fast-food restaurants establishes preferences for when the direct and indirect methods should be used, it does not provide enough guidance or criteria to determine which method to use or when it is appropriate to deviate from the policy. Also, DOD has not been actively involved in monitoring compliance with the policy. As a result, the exchanges have, over time, adopted operating philosophies and business models they believe best suit their particular circumstances.
Children enter state foster care when they have been removed from their parents or guardians and placed under the responsibility of a state child-welfare agency. Removal from the home can occur for reasons such as abuse or neglect. When children are taken into foster care, the state’s child-welfare agency becomes responsible for determining where the child should live and providing the child with needed support. The agency may place the foster child in the home of a relative, with unrelated foster parents, or in a group home or residential treatment center, depending on the child’s needs. The agency is also responsible for arranging needed services, including mental-health services. Coordinating mental-health care for children in foster care may be difficult for both the medical provider and the caseworker depending on the complexity of the child’s needs, and because multiple people are making decisions on the child’s behalf. In addition, caseworkers in child-welfare agencies may have large caseloads, making it difficult for them to ensure each child under their authority receives adequate mental-health services. In 2011, the Child and Family Services Improvement and Innovation Act amended the Social Security Act to require states to identify protocols for monitoring foster children’s use of psychotropic medications and to address how emotional trauma associated with children’s maltreatment and removal from their homes will be monitored and treated. ACF requires states to address these issues in their required Annual Progress and Services Reports (APSR) and has provided guidance detailing how states are to address protocols for monitoring foster children’s use of psychotropic medications as part of the state’s APSR. 
Among other things, state monitoring protocols are to address screening, assessment, and treatment planning to identify children’s mental-health and trauma-treatment needs, including a psychiatric evaluation, as necessary, to identify needs for psychotropic medications; effective medication monitoring at both the client and agency level; and informed and shared decision making and methods for ongoing communication between the prescriber, the child, caregivers, other health care providers, the child-welfare worker, and other key stakeholders. According to ACF, child-welfare systems that choose to pursue comprehensive and integrated approaches to screening, assessing, and addressing children’s behavioral and mental-health needs—including the effects of childhood traumatic experiences—are more likely to increase children’s sense of safety and provide them with effective care. In particular, ACF, CMS, and SAMHSA noted the role of evidence-based practices—interventions shown to produce measurable improvements or promising results—in decreasing emotional or behavioral symptoms. In addition, according to ACF, psychotropic medication use with young children, including infants, is of special concern since this population may be especially vulnerable to adverse effects, necessitating careful management and oversight. As we reported in December 2011, oversight procedures such as prescription monitoring help states to identify and review potentially risky prescribing practices in the foster-care population. Monitoring for appropriate dosage can be beneficial because any medication, or combination of medications, should be prescribed at dosages that maximize the likelihood of effectiveness while minimizing the chance of potential adverse effects. 
Monitoring for concurrent use of multiple psychotropic medications can be beneficial because, according to ACF, there is little evidence of the effectiveness of using multiple psychotropic medications at the same time and no research to support the use of five or more psychotropic medications. According to AACAP, treatment planning should include discussions by key stakeholders, such as prescribers and caregivers, about the assessment of target symptoms, behaviors, function, and potential benefits and adverse effects of treatment options. As we reported in December 2011, informed consent helps ensure that caregivers are fully aware of the risks and benefits associated with the decision to medicate with psychotropic medications and to accurately assess and monitor the foster child’s reaction to the medications. Expert reviews of 24 foster children’s foster and medical files in five selected states found that the quality of documentation supporting the prescription of psychotropic medication usage varied with respect to (1) screening, assessment, and treatment planning; (2) medication monitoring; and (3) informed and shared decision making. For each of our 24 cases, experts evaluated the foster and medical records across six categories they developed collaboratively that relate to screening, assessment, and treatment planning and provided their professional opinion for the case. Examples of screening, assessment, and treatment planning categories reviewed include the extent to which medical examinations, psychiatric evaluations, and evidence-based therapies were provided, and whether the impact of trauma was addressed by treatment. As shown in table 1, experts found that the quality of screening, assessment, and treatment planning varied among selected cases according to documentation reviewed. To see how experts scored all six categories, see appendix II. Experts found that medical pediatric examinations were mostly supported by documentation for 22 of 24 cases. 
Experts found that in 2 of 24 cases the medical pediatric examinations were only partially supported, for example when the exams were mentioned in the documentation but not actually included in the records, preventing experts from evaluating what the examinations consisted of and whether monitoring for psychotropic agents, such as assessing height, weight, or laboratory functions, was conducted. In one example in which experts scored the medical pediatric exam category as mostly supported in documentation, a child with a history of behavioral and emotional problems—including aggression and hyperactivity—was prescribed multiple ADHD medications. In this case, experts noted the child’s records had thorough psychological and pediatric assessments, with comprehensive discussions of diagnostic issues and medication rationale as well as good case-management summaries. Experts found that psychiatric evaluations were mostly documented for 12 of 17 applicable cases. Experts found that 3 of 17 cases had partial documentation to support that the child had received a full psychiatric evaluation and 2 of 17 cases had no evidence that a psychiatric evaluation took place. For example, in 1 case with mostly supporting documentation, experts found that a child with a history of disruptive behavior, poor impulse control, anger outbursts, and sexual acting-out behaviors, among other things, received comprehensive psychosocial, psychosexual, and neuropsychological evaluations. Moreover, experts noted the child received special educational services and intensive therapeutic services, and visited a psychiatrist monthly for several months before being referred back to the pediatrician with scheduled psychiatric check-ins as appropriate. Experts found that documentation reviewed supported that evidence-based therapies were mostly provided in 3 of 15 applicable cases where the child may have benefited from such treatment. 
However, in 11 of 15 cases, the experts scored the category as partial, such as instances in which some psychosocial or evidence-based therapies were documented as provided, but other evidence-based therapies that may have been more applicable or beneficial to the child were not provided, based on documents reviewed. In 1 of 15 cases, there was no documentation that evidence-based therapies were provided. In one case, experts found that a child initially placed in foster care as a toddler with over 10 foster-care placements—including group care from 14 to 16 years of age—had experienced early neglect, exposure to domestic violence, and physical abuse, and suffered from severe mood swings and explosive outbursts of anger. According to experts, a larger focus on evidence-based treatments such as trauma-focused cognitive behavioral therapy would likely have benefitted the child, but there was no documentation showing this occurred. However, according to the experts’ evaluation of the documentation, the child’s psychiatric diagnoses and medication regimens were stable over time, and treatment response, level of treatment intensity, and level of psychosocial functioning were all evaluated appropriately. In another example, experts found that a child removed from the home at age 13 after being physically assaulted by his mother and witnessing domestic violence received supportive psychotherapy and counseling, but there was no documentation of evidence-based psychotherapies, such as trauma-focused cognitive behavioral therapy. In addition, the forms used to document the therapy each represented 1 month of treatment, with progress notes from each of the four weekly sessions. However, the report of the sessions, and often the entire content of the month’s psychotherapeutic work, was duplicated for months at a time. One week’s psychotherapy content was duplicated for over 1 year, raising questions about what services were actually provided.
Experts found the documentation reviewed supported that the impact of trauma was mostly addressed by treatment for 3 of 14 applicable cases. However, for 8 of 14 cases, the impact of trauma was partially addressed by the treatment provided to children who had suffered from traumatic events, and in 3 cases there was no evidence that the trauma was addressed, according to documentation reviewed. For example, experts found in one case with no supporting documentation that a child was placed in foster care at 5 years of age for neglect and physical abuse and diagnosed with a variety of different psychiatric conditions, including bipolar disorder, post-traumatic stress disorder (PTSD), schizotypal personality disorder, paranoia, and possible psychosis. According to experts, psychosis and personality disorders are typically considered adult conditions and are usually not diagnosed in younger children. In this case, the child was treated with variable combinations of ADHD medications, antidepressants, anticonvulsants, and antipsychotics. While hospitalized at 9 years of age, the child received an ADHD and an antipsychotic medication at dosages that exceeded dosage guidelines based on FDA-approved or medical literature maximum dosages for this age group, and the medications were elevated to these high dosages over a 1-week period. During this time, the child’s brother died, yet this was not addressed or acknowledged during the psychiatric hospitalization, according to documentation. In another example, experts found that a child placed in foster care at 9 years of age due to neglect, physical abuse, and exposure to social chaos and domestic violence received treatment that partially addressed the impact of trauma on the child, according to documentation reviewed. In this case the child reported additional trauma, saying his mother’s boyfriend forced him to engage in sexual behavior with his sister.
The child’s grandmother, who had been his caretaker, also died when he was 13 years old. Experts noted the history of trauma was acknowledged, but an evidence-based intervention was not provided to address the trauma, according to documents reviewed. For each case, experts reviewed and provided their opinions across seven categories related to medication monitoring, including the extent to which prescriptions were appropriately monitored by medical providers, appropriate dosages were used, and concurrent use of multiple medications was justified based on documentation reviewed. As shown in table 2, experts found that the quality of prescription monitoring by medical providers, and justification for dosage and concurrent use of multiple medications, varied among selected cases, based on documentation reviewed. See appendix II for a full listing of all categories experts reviewed related to medication monitoring. Experts found in 13 of 24 cases that prescriptions were mostly monitored by medical providers based on documentation reviewed. However, in 9 of 24 cases the prescriptions were partially monitored, and in 2 other cases there was no evidence that prescriptions were monitored by medical providers, according to documentation reviewed. For example, experts found in one case with partially supporting documentation that the monitoring of height, weight, vital signs, and metabolic effects of antipsychotic medications was lacking and that the records did not provide an adequate overview of medication risks and concerns regarding concurrent use of multiple psychotropic medications. According to the experts, these factors are important for medical providers to monitor in order to better assess the potential adverse effects of the medication and adjust as necessary to improve patient outcomes. In this case, the child entered foster care at 3 years of age and was noted to be aggressive, oppositional, not sleeping well, and hyperactive. 
Experts noted some of the antipsychotic prescriptions (quetiapine and olanzapine) were given “as needed” rather than scheduled, which, according to experts, is not considered a good medical practice in a traditional foster-care setting. Experts described the medication management as extremely aggressive, with complicated regimens and dosages at or above the standard recommendations. Furthermore, documentation that the medications were effective was lacking. For 13 of the 24 cases, experts found that the dosages were mostly supported for the children’s medications based on documentation reviewed. Although experts did not rate any cases as having no support for the dosages for the entire medication regimen, in 11 of 24 cases the experts noted that the justification for a particular dosage level was only partially supported by the documentation. For example, experts found in 1 case with partially supporting documentation that a child concurrently on seven different psychotropic medications received a dosage for an ADHD medication (Adderall) exceeding dosage guidelines based on FDA-approved or medical literature maximum dosages for children and adolescents. Moreover, the documentation showed the child received a very small dose of an antipsychotic medication (quetiapine), suggesting that this agent was used for sleep, which experts said is not considered a good medical practice. In this case, the child was removed from the home at 14 months of age for, among other things, neglect and physical abuse. Experts found that for 5 of 20 applicable cases, concurrent use of multiple psychotropic medications was mostly supported based on documentation. However, 14 of 20 cases included documentation that partially supported the concurrent use of multiple medications, and 1 case did not include any documentation to support concurrent use.
For example, experts found in one case with partially supporting documentation that a toddler diagnosed with ADHD/oppositional defiant disorder and bipolar disorder was treated with complicated medication regimens, including mood stabilizers and antipsychotics, when other nonmedication interventions could have been considered, based on documentation reviewed. In this case the child was prescribed an ADHD medication (methylphenidate) and an antipsychotic medication (quetiapine) at 3-½ years of age. An ADHD medication (clonidine) and a mood-stabilizing medication (oxcarbazepine) were tried by the time he was 4 years of age, and the child was maintained on as many as four psychotropic medications concurrently. As a 6-year-old, the child was treated with an antipsychotic (paliperidone) that has not been studied in children this age. There was limited discussion of potential risks or side effects, though there were several reported adverse effects, including insomnia, agitation, and a possible movement disorder, potentially due to the use of antipsychotic medications, according to documentation reviewed. For each of our cases, experts evaluated the foster and medical records for information related to informed and shared decision making—specifically, documentation of informed consent and communication between treatment providers. As shown in table 3, experts found that documentation to support informed consent and communication between treatment providers varied among selected cases reviewed. Experts found that informed-consent decisions were mostly documented in 5 of 23 applicable cases. In 11 of 23 cases experts found partial documentation of informed consent—such as when some, but not all, medications prescribed to the child included documentation of informed consent—and 7 other cases did not include any documentation of informed consent.
For example, in one case, experts reported there was no documentation of informed consent, psychiatric evaluation, psychiatric diagnosis, or monitoring of antipsychotic medication. In this case, the child was prescribed an antianxiety medication (buspirone), an antipsychotic medication (risperidone), and an ADHD medication (clonidine) at 4 years of age, presumably to treat psychiatric symptoms that interfered with his functioning, including short attention span, wandering off, self-injury, and aggression. However, experts noted the documentation was too sparse to determine why the psychotropic medications were prescribed, and the indications, monitoring, and side effects could not be evaluated. Experts found that communication between treatment providers was mostly documented in 15 of 23 applicable cases. However, communication between treatment providers was partially documented in 5 of 23 cases, and there was no evidence that such communication occurred in 3 of 23 cases. For example, experts found in one case with partially supporting documentation that a teenage foster child with cognitive delays and fetal alcohol effects/exposure was diagnosed with ADHD and oppositional defiant disorder, and the quality of documentation showing communication between treatment providers varied by the child’s placement setting. When the child was placed in a residential treatment facility, the communication between treatment providers was better documented than when the child was placed in a foster home. However, there was no clear documentation of communication between inpatient and outpatient providers, and there was no clear evidence in the foster-care files that the recommendations made by inpatient providers were actually provided as part of outpatient care. Of the 24 cases reviewed, 9 were infant cases that the experts evaluated to determine whether the prescriptions were for psychiatric or non-mental-health reasons.
Experts found in 4 of 9 infant cases reviewed that the prescription of psychotropic medication was for non-mental-health purposes, based on documentation reviewed. However, experts found that in 2 of 9 cases the infants were prescribed psychotropic medications for psychiatric reasons, and the rationale and oversight for such medications were partially supported by documentation. In 3 of 9 infant cases, experts were unable to discern whether the psychotropic medications were prescribed to infants for mental-health purposes or for some other medical reason, based on documentation reviewed. These results are summarized in table 4, below. Experts found in two of nine infant cases that an antianxiety medication (hydroxyzine) was prescribed to treat skin conditions such as a rash and itchiness, and was not used for psychiatric purposes. In two other of the nine infant cases reviewed, an ADHD medication (clonidine) was used to treat sleep and irritability in children who had severe brain damage, and who by the clinical descriptions were inconsolable. For each of the above infant cases, experts agreed that there are no established standards for treating problems associated with devastating neurological impairment in infants. According to experts, although other medications, and possibly nonmedication interventions, could have been used instead of clonidine, the decision to treat was based on humanitarian reasons, and may have been necessary to maintain the child in the foster home given the marked distress displayed by the infants in these two cases. While physicians may use their discretion to prescribe these psychotropic medications to infants in these rare situations, non-mental-health uses still carry the same risk of adverse effects, including, for the ADHD medication clonidine, lowered blood pressure, changes in heart rate, and the potential for sudden death, and should therefore be carefully monitored.
Experts found in two of nine infant cases reviewed that the psychotropic medications were prescribed for psychiatric reasons, yet the justification for such prescriptions was not clear based on documentation. For example, experts found in one infant case that the child was prescribed an antidepressant (amitriptyline) at 9 months of age, and a prescription for an ADHD medication (clonidine) was added at 15 months of age to target complications of his neurological condition, including self-injurious behaviors, agitation, and aggression. Experts said there is no systematic research supporting the use of amitriptyline for self-injurious behaviors in any age group and the medication carries significant potential side effects, including cardiac side effects, and has been associated with sudden death in young children. Additionally, according to experts, amitriptyline can cause or exacerbate corneal ulceration, a painful condition for which this toddler was being treated, and which reportedly exacerbated the child’s agitation. The case notes focus on medical issues with limited discussion of rationale, efficacy, or tolerability of psychotropic medications. In another infant case, experts found that clonidine was prescribed for sleep and behavioral issues, but the records did not show that the associated risks of the medications were discussed, and informed consent was not documented. According to experts, the medical records in this particular case also included a note from the prescribing doctor when the child was 20 months of age stating that clonidine was not a psychotropic medication while also stating that the medication was for behavioral problems. Experts found in three of nine infant cases reviewed that documentation was unclear as to whether the psychotropic medications were prescribed for mental or non-mental-health purposes. 
For the first infant case with unclear documentation, experts noted that the child received a 2-month trial of an ADHD medication (clonidine) at 16 months of age, which experts stated they presumed was prescribed for irritability or difficulty sleeping, based on available documentation, but the actual indications were not documented. In the second infant case with unclear documentation, experts’ review showed the child received a number of different anticonvulsants to try to improve seizure control. However, the infant was also prescribed a 2-month trial of an antianxiety medication (clonazepam) as a 1-year-old, and, according to the experts, the records did not indicate whether the medication was prescribed to treat the seizures or for psychiatric purposes. In the third infant case with unclear documentation, experts reported the child was prescribed an antianxiety medication (hydroxyzine), presumably to treat a skin irritation; however, there were no notes describing the rationale for the medication. The experts agreed that the prescription of psychotropic medications to infants carries significant risk, as there are no established mental-health indications for the use of psychotropic medications in infants and the medications have the potential to result in serious adverse effects for this age group. Selected states have policies and procedures that are intended to provide oversight of psychotropic medications given to foster children. In addition, HHS has issued guidance, provided technical assistance, and facilitated information-sharing efforts among state child-welfare and Medicaid officials related to oversight of psychotropic medications for children in foster care. However, additional HHS guidance could help state child-welfare and Medicaid officials manage psychotropic medications as states transition prescription drug benefits to managed care.
To varying degrees, each of the five selected states we reviewed has policies and procedures designed to address the monitoring and oversight of psychotropic medications prescribed to children in foster care. Some variation is expected because states set their own oversight guidelines. However, the 2011 Child and Family Services Improvement and Innovation Act required states to establish protocols for the appropriate use and monitoring of psychotropic medications prescribed to children in foster care, which ACF described in a 2012 program instruction. According to ACF, the unique factors of each state, such as whether the child-welfare service-delivery structure is state- or county-administered, the type of Medicaid delivery system in place, and the availability of qualified practitioners, may influence how officials develop oversight protocols. Thus, according to ACF, each state needs to carefully assess existing oversight mechanisms and evaluate options in light of how they fit with the state’s own set of needs and challenges. Under ACF’s 2012 program instruction, states are to describe their monitoring protocols for foster children’s use of psychotropic medications as part of the state’s Annual Progress and Services Report (APSR), including protocols that address: (1) screening, assessment, and treatment planning mechanisms; (2) effective medication monitoring at both the client and the agency-wide level; and (3) shared decision making and communication among the prescriber, the child, caregivers, other health care providers, and the child-welfare worker. Below are examples of selected states’ policies and procedures—based on documents we reviewed and interviews with state Medicaid and child-welfare officials—that are intended to provide oversight of psychotropic medications to children in foster care. The information is presented using the same categories discussed above for experts’ review of case studies.
We did not assess the extent to which these activities are being implemented effectively in the states. Each of our five selected states requires that children in foster care receive medical examinations. For example, officials reported that in Oregon a child must receive a medical examination within 30 days and a mental-health exam within 60 days after the child enters the foster-care system, whereas Michigan officials said that both the medical and mental-health exams are to occur within 30 days. All five selected states’ foster-care programs use some type of functional assessment or screening tool, such as the Child and Adolescent Needs and Strengths (CANS), for screening and treatment planning, which may prompt a referral for a psychiatric evaluation as deemed appropriate. However, according to foster-care officials from Massachusetts, the CANS assessment tool is not sufficient to screen for a child’s exposure to trauma, and there is a need for a separate trauma-screening mechanism. Medicaid and foster-care officials from Texas told us in July 2013 that they are working to research and develop a comprehensive psychosocial-assessment process with trauma screening/assessment components that is tailored to the unique needs of children in foster care. In April 2014, officials estimated that the process will take at least another year to implement and may be phased in so they could evaluate the effectiveness and refine the process. Each of our five selected states has taken action to increase children’s access to evidence-based therapies. For example, Oregon mental-health officials said that state law requires that 75 percent of the funding for mental-health agencies be used for evidence-based practices, and that the state surveys its mental-health providers every 2 years on their utilization of evidence-based practices and reports these results to the state legislature.
Oregon mental-health officials also said site reviews of mental-health providers occur every 3 years to make sure the providers are using practices on the state’s approved list of evidence-based practices, and that if deficiencies are identified, correction plans are developed. As another example, Massachusetts’s foster-care agency—through a federally funded grant—has provided evidence-based training on trauma-focused cognitive behavioral therapy, child-parent psychotherapy, and attachment self-regulation to both general practitioners and child-welfare staff to raise awareness and improve methods for treating and overseeing the child’s overall health. Each of our five selected states has taken action to improve focus on trauma-related needs of children in foster care. For example, Oregon was awarded a 3-year technical assistance grant by the Center for Health Care Strategies in April 2012. According to Oregon officials, one of the goals of this grant is to better understand the impact of trauma on emotions, behavior, and relationships, and to support training and policy development in this area. Beginning in May 2012, Texas implemented a 5-year strategic plan regarding trauma-informed care across the state for foster-care children. To do this, the state Medicaid program, foster-care agency, and managed-care organization (MCO) under contract are all working together to build a trauma-informed care system by incorporating trauma screening/assessment into psychosocial-assessment processes, enhancing clinical capacity to provide trauma-focused, evidence-based psychosocial therapy, training key stakeholders, and incorporating the principles of trauma-informed care into child-welfare policy and practices, according to Texas officials.
All five of the selected states have designed a mechanism to coordinate and share some or all Medicaid prescription claims data with the state’s foster-care agency to help monitor and review cases based on varying criteria, such as prescriptions for children under a particular age, high dosages, or concurrent use of multiple medications. For example, according to Florida Medicaid officials, beginning in 2011 the state began requiring documentation of safety monitoring, such as metabolic monitoring and body-mass-index information, to be included as part of the prior-authorization review process before particular medication regimens are approved for reimbursement. However, these reviews are limited to those prescription claims paid for on a fee-for-service basis. Beginning in October 2014, foster children in Florida are to receive all of their Medicaid benefits through a third-party MCO, and it was unclear to state Medicaid officials how MCOs will provide oversight of psychotropic medications after the transition from fee-for-service to managed care occurs. Massachusetts uses both a fee-for-service model and MCOs to administer prescription claims benefits. Massachusetts child-welfare officials said that in the fee-for-service program, certain parameters, such as children in foster care prescribed four or more psychotropic medications, or two or more psychotropic medications of the same class, or children less than 6 years old prescribed a psychotropic medication, are flagged and forwarded to a child psychiatrist for additional review. However, among children served by MCOs, state Medicaid officials said that MCOs flag cases for children prescribed psychotropic drugs who are less than 6 years old, but these officials were uncertain how MCOs followed up on the flagged cases.
In Texas there is a single MCO used to coordinate all prescription claims and medical services for children in foster care, and this organization works closely with the state foster-care agency to identify and monitor psychotropic medication use among children in foster care. All five of the selected states have designed measures to review certain prescriptions that have dosages above a particular threshold. For example, in February 2005, Texas developed psychotropic drug-utilization parameters that outline what prescribing scenarios require an additional review, and these parameters were updated in January 2007, December 2010, and September 2013. Prescriptions that exceed usual recommended dosages for the child’s age trigger an additional review by a child psychiatrist. Similarly, in 2012, Michigan’s Medicaid and foster-care agencies began identifying and reviewing foster children’s prescriptions if the medication exceeds the recommended dosages. According to officials, Florida, Massachusetts, and Texas Medicaid programs also require prior authorizations before a prescription is approved for reimbursement for various prescribing scenarios specific to psychotropic medications. As stated in the section above concerning prescription monitoring, state Medicaid officials from Massachusetts and Florida told us they are still in the process of determining to what extent monitoring and oversight protocols—including prior authorizations—function for children in foster care who are prescribed medications through MCOs. All five of the selected states have designed measures to review prescriptions for concurrent use of multiple medications to a varying extent.
For example, the MCO that handles prescription claims for children in foster care in Texas monitors and completes additional reviews for concurrent prescriptions, and shares that information with the state foster-care and Medicaid agencies, for the following medication regimens as stated in the September 2013 Texas utilization parameters: four or more concurrent psychotropic medications; two or more concurrent antidepressants; two or more concurrent antipsychotic medications; two or more concurrent stimulant medications; and three or more concurrent mood-stabilizer medications. Similarly, since 2012, the Michigan Medicaid agency has monitored concurrent use of multiple medications using criteria including four or more concurrent psychotropic medications, or two or more concurrent psychotropic medications within the same class, and shares this information with the state foster-care agency to facilitate additional reviews. Each of the above prescribing scenarios triggers an additional review that may include discussions with the prescriber to review the details and justification in support of the prescriptions. As mentioned previously in this report, Florida, Massachusetts, and Texas Medicaid programs also require prior authorizations before a prescription is approved for reimbursement for various prescribing scenarios specific to psychotropic medications. However, as stated above concerning prescription monitoring, state Medicaid officials from Massachusetts and Florida told us they are in the process of determining to what extent monitoring and oversight protocols—including prior authorizations—function for children in foster care prescribed medications through MCOs. Each of the five selected states requires informed consent for psychotropic medications, but state practices vary. For example, according to agency officials, individuals authorized to give informed consent for a foster child vary across states.
In Oregon, foster parents are not authorized to give informed consent for children in state custody—the foster child’s case supervisor provides informed consent for psychotropic medications. As another example, officials from Texas told us that according to state law, when the court places a child in the custody of the state foster-care program, the court must authorize an individual or the child-welfare agency to consent to medical care for a child in foster care. When the court authorizes the child-welfare agency, the agency must designate a medical consenter—which typically includes emergency-shelter employees or live-in caregivers if the child is placed in community settings, or child-welfare staff when children are placed in facilities such as residential treatment centers. Four of five selected states have some limitations regarding the extent to which a child’s medical history is available to treatment providers. For example, in Oregon, medical providers have access to a child’s prescription claims and medical history so long as the child was treated by a medical provider within the same Coordinated Care Organization (i.e., MCO), though the accessibility of information varies by Coordinated Care Organization, and records are largely unavailable from competing organizations within the state. As another example, state Medicaid and foster-care officials from Michigan said they were in the process of developing electronic health records to improve access to information for prescribers, but noted that privacy concerns and legal limitations make it very difficult to share medical information across various medical providers. Texas is unique in that the state uses a single MCO to coordinate all prescription claims for children in foster care, which gives all participating medical providers electronic access to prescription claims and the child’s medical history.
Each of the five selected states, to a varying extent, has designed measures to review prescriptions of psychotropic medications based on the child’s age, which includes prescriptions to infants. For example, Oregon officials said that state law requires an annual review of medications by a licensed medical professional or qualified mental-health professional with authority to prescribe medications, other than the prescriber, if the child is covered by Medicaid and under the age of 6 years. Similarly, since 2012, Michigan’s foster-care agency has reviewed the medical records of all children in foster care less than 1 year old who are prescribed psychotropic medication to determine whether the prescription was for psychiatric purposes or non-mental-health reasons. As mentioned previously in this report, Florida, Massachusetts, and Texas Medicaid programs also require prior authorizations before a prescription is approved for reimbursement for various prescribing scenarios specific to psychotropic medications. Officials from Massachusetts and Florida told us they are in the process of determining how monitoring and oversight currently function for children in foster care who are prescribed medications through MCOs. In response to concerns and our December 2011 report recommendation related to the need for additional guidance for the prescribing of psychotropic medications for children in foster care, HHS’s ACF has taken actions to improve the capacity of states’ child-welfare agencies to effectively respond to the complex needs of children in foster care. As previously mentioned in this report, ACF issued a program instruction in April 2012 to help states implement the new requirements in the Child and Family Services Improvement and Innovation Act regarding the development of protocols for oversight of psychotropic medication.
In addition, since our December 2011 report, ACF has worked collaboratively with CMS and SAMHSA to help states strengthen oversight of psychotropic medications to children in foster care by emphasizing the need for collaboration between state Medicaid, child-welfare, and mental-health officials in providing oversight; providing technical assistance; and facilitating information sharing. Several initiatives were undertaken, including the following: CMS and SAMHSA participated in an ACF-led 2012 webinar series to help provide states with technical assistance in developing oversight and monitoring plans for psychotropic medications, as required by the Child and Family Services Improvement and Innovation Act. Using a question-and-answer format, the webinars featured experts, including researchers, child psychiatrists, and ACF staff, who provided ideas and feedback to state officials on planning efforts. In August 2012, ACF, CMS, and SAMHSA cohosted a conference for state child-welfare, Medicaid, and mental-health officials on strengthening the management of psychotropic medications for children in foster care. Conference sessions focused on effective collaborative medication monitoring, as well as creating data systems to facilitate collaboration, among other things. According to ACF, CMS, and SAMHSA officials, the conference was an opportunity for states to talk and share practices. According to ACF officials, representatives from 49 states attended, including officials from 4 of the 5 states covered by our review. Officials from one of these states said the conference was beneficial; officials from another state said participation challenged them to augment their system; officials from a third state said it was helpful to hear what other states were doing; and officials from a fourth state said that it was important for the three federal agencies to have common goals, which would help sustain interagency collaboration at the state level.
In addition, CMS officials said the issue of psychotropic medications was a catalyst that caused the HHS agencies to look at broader issues related to mental health, including trauma-informed care and the use of mental-health screening tools and evidence-based therapies. ACF, CMS, and SAMHSA have undertaken several efforts, including the following:

In 2012 and 2013, ACF announced funding opportunities for projects supporting the comprehensive use of evidence-based screening and assessment of mental and behavioral health needs, among other things.

In March 2013, CMS issued guidance informing states about resources available to help meet the needs of children under the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) Medicaid benefit. Under the EPSDT, eligible individuals, such as children in foster care, are to be provided periodic screenings that include assessments of physical and mental-health development, as well as any medically necessary screenings to detect suspected illnesses or conditions not discovered during periodic exams. Results from screenings may trigger the need for further assessment to diagnose or treat a mental-health condition.

In July 2013, ACF, CMS, and SAMHSA officials cosigned a letter to state child-welfare, Medicaid, and mental-health officials encouraging the integrated use of trauma-focused screening, functional assessments, and evidence-based practices to improve child well-being. In particular, federal officials noted that a high percentage of children in state foster care have been exposed to traumatic events and that there is reason to believe that problematic use of psychotropic medications is a reaction to the complexity of symptoms among children exposed to trauma and the lack of appropriate screening, assessment, and treatment.

Figure 1 below lists initiatives undertaken since our previous report by ACF, CMS, and SAMHSA. 
In addition, according to ACF officials, in collaboration with SAMHSA and others, ACF plans to issue guidance in August 2014 to foster parents regarding psychotropic medication to enhance their understanding of these medications. Three of five states included in our review use, or are transitioning from fee-for-service to, MCOs to administer prescription-drug benefits for mental-health medications; however, Medicaid officials from two of those three states reported that their states had conducted limited planning to ensure appropriate oversight of MCOs administering psychotropic medications—which creates a risk that state controls instituted in recent years under fee-for-service may not apply to managed care—and could benefit from additional federal guidance. In Massachusetts, most foster children receive drug benefits through fee-for-service, according to state Medicaid officials, though some children receive these benefits through MCOs. Under fee-for-service, beginning in 2012, state Medicaid prescription claims data were to be provided to the state child-welfare agency to monitor and facilitate additional reviews, as necessary, for children prescribed medications in foster care. For example, according to child-welfare officials, cases that meet certain criteria—children less than 6 years old prescribed a psychotropic medication; children prescribed four or more psychotropic medications; or children prescribed two or more psychotropic medications in the same class—are flagged and forwarded to a child psychiatrist for further review. According to Massachusetts Medicaid officials, MCOs currently review cases when a child less than 6 years old is prescribed a psychotropic medication. State Medicaid officials said they did not know how MCOs followed up on cases last year. 
However, state Medicaid officials and other members of an interagency committee on psychotropic medications have met with MCO administrators to learn what they are doing to review cases and will continue to monitor MCOs, according to Massachusetts state officials. Such operational information is important for the child-welfare agency to obtain to help ensure that appropriate oversight of psychotropic medication prescribed to foster children occurs. In addition, as part of the state’s efforts to improve the prescribing, authorization, and monitoring of psychotropic medications, a Massachusetts interagency committee on psychotropic medications and foster children noted that MCOs’ and the state’s primary-care clinician plan program’s role in the prior-authorization process has not been determined, in particular whether these organizations will have to assume responsibility for assuring that psychiatrists in their network adhere to the state’s prescribing and monitoring practices. Florida Medicaid officials said that beginning in 2014, MCOs will provide Medicaid participants, including foster children, with mental-health services, but it was unclear to state Medicaid officials how MCOs will provide oversight of psychotropic medications after the transition from fee-for-service to managed care occurs. Such operational information is also important for Florida’s child-welfare agency to help ensure that appropriate oversight of psychotropic medication prescribed to foster children occurs. Florida Medicaid officials said that there will probably no longer be point-of-sale controls, which were instituted under fee-for-service in 2011. These controls, for example, required prescribers to submit forms indicating that safety monitoring, such as monitoring for signs of abnormal involuntary movement and metabolic monitoring, was performed for certain medications. 
If such point-of-sale controls are not continued under MCOs, then safety monitoring developed by the state under fee-for-service may not continue for children administered medications through MCOs. ACF officials we met with noted that state child-welfare agencies have experienced challenges coordinating with state Medicaid programs regarding the transition to MCOs, particularly with regard to data sharing. There are indications that the number of states using MCOs to administer drug benefits may increase. In 2012, the HHS Office of Inspector General (OIG) reported that 16 states used MCOs to administer drug benefits, and another 5 states had switched, or were planning to switch, to MCOs as a result of the Patient Protection and Affordable Care Act expansion of the Medicaid drug-rebate program, which allows states to obtain rebates from manufacturers for covered outpatient drugs. Previously, medications dispensed by MCOs were excluded from such rebates. According to Standards for Internal Control in the Federal Government, internal controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations and is performed continually and ingrained in the agency’s operations. ACF requires states to develop effective medication monitoring at the agency and patient level. To this end, ACF, CMS, and SAMHSA have developed guidance for state Medicaid, child-welfare, and mental-health officials related to the oversight of psychotropic medications, underscoring the need for collaboration between state officials to improve prescription monitoring. However, this guidance does not address oversight within the context of a managed-care environment, in which states rely on a third party to administer benefits such as psychotropic medications. 
Additional guidance from HHS that helps states prepare and implement monitoring efforts within the context of a managed-care environment could help ensure appropriate oversight of psychotropic medications to children in foster care. Since our December 2011 report, HHS has issued guidance regarding the oversight of psychotropic medications among children in foster care and has undertaken collaborative efforts to provide guidance and promote information sharing among states. In addition, HHS efforts have focused on using mental-health screening tools and providing therapies that address trauma, which seek to ensure that the mental-health needs of children in foster care are appropriately met. However, many states have, or are transitioning to, MCOs to administer prescription-drug benefits, and, as our work demonstrates, selected states have taken only limited steps to plan for the oversight of drug prescribing for foster children receiving health care through MCOs—which creates a risk that controls instituted in recent years under fee-for-service may not remain once states move to managed care. Additional guidance from HHS that helps states prepare and implement monitoring efforts within the context of a managed-care environment could help ensure appropriate oversight of psychotropic medications to children in foster care. To assist states that rely on or are planning to contract with an MCO to administer Medicaid prescription benefits, and to help provide effective oversight of psychotropic medications prescribed to children in foster care, we recommend that the Secretary of Health and Human Services issue guidance to state Medicaid, child-welfare, and mental-health officials regarding prescription-drug monitoring and oversight for children in foster care receiving psychotropic medications through MCOs. We provided a draft copy of this report to HHS and the state foster-care and Medicaid agencies of the five selected states for their review. 
HHS, the Florida Agency for Health Care Administration, and the Massachusetts Executive Office of Health and Human Services provided written comments that are summarized below and reprinted in full in appendixes III, IV, and V, respectively. HHS, Massachusetts, Oregon, and Texas provided technical comments, which we incorporated as appropriate. Michigan did not have any comments on the report. In its response, HHS concurred with our recommendation to issue guidance to state Medicaid, child-welfare, and mental-health officials regarding prescription-drug monitoring and oversight for children in foster care receiving psychotropic medications through MCOs, and stated that CMS will work with other involved agencies to coordinate guidance between CMS and other HHS agencies. HHS further stated that guidance can be targeted regarding the use of MCOs for the foster-care population, but noted that previously issued guidance to state agencies from HHS already applies. However, the guidance that HHS referred to in its written comments is not specific to oversight within the context of a managed-care environment, and officials from the states in our review agreed that additional federal guidance could be beneficial. Therefore, we continue to believe that specific guidance to help states prepare and implement monitoring efforts within the context of a managed-care environment is needed to help ensure appropriate oversight of psychotropic medications to children in foster care. In its written comments, the Florida Agency for Health Care Administration did not indicate whether it agreed or disagreed with our findings and recommendation, but said that it appreciated our efforts to evaluate Florida’s Medicaid program and the reimbursement of psychotropic medications for foster children. Florida’s response also provided additional information about the state’s future plans for using managed-care plans and drug-utilization review requirements. 
For example, Florida’s response stated that these managed-care plans must adhere to Florida statute requirements regarding prior-authorization procedures for covering medically necessary services, including prescription-drug services. However, the extent to which drug-utilization reviews and point-of-sale controls currently used by the state under fee-for-service would apply after transitioning to MCOs is still unclear. For example, as we discussed in the report, if point-of-sale controls are not continued under MCOs, then safety monitoring developed by the state under fee-for-service may not continue for children administered medications through MCOs. In its written comments, Massachusetts’s Executive Office of Health and Human Services did not indicate whether it agreed or disagreed with our findings and recommendation, but thanked us for recognizing the work Massachusetts has done in the area of psychotropic medications being administered to children in foster care and agreed that discussion and investigation of this topic is timely and important to improve the health and welfare of children in foster care. In its response, Massachusetts noted that MCO contracts require the MCOs to monitor psychotropic prescribing for members under the age of 19 in accordance with guidelines established by Massachusetts’s Psychoactive Medications in Children Working Group. Massachusetts also stated it conducted an operational review with each MCO to ensure that there is an established follow-up process for cases that are flagged, and that it continues to monitor MCOs closely to assure they remain in compliance with this contract requirement. However, the extent to which Massachusetts has developed guidelines, conducted operational reviews, and monitored for MCO compliance is still unclear. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services, relevant state agencies, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To provide a clinical perspective on our cases, we contracted with two child psychiatrists who have clinical and research expertise in the use of psychotropic medications in children. We reviewed the curriculum vitae for each expert who responded to our contract solicitation to determine whether the expert met all of the following criteria: is a medical doctor; is trained in child psychiatry; is board certified in child psychiatry; conducted relevant research or had relevant experience; and is a member of a relevant association (e.g., American Academy of Child and Adolescent Psychiatry). We also conferred with officials from the National Institute of Mental Health. We selected Jon McClellan, MD, and Michael Naylor, MD. Dr. McClellan is an attending psychiatrist at Seattle Children’s Hospital; a professor at the University of Washington School of Medicine; and the medical director at Washington’s Child Study and Treatment Center, the children’s psychiatric hospital for the state of Washington. He is board certified in psychiatry and child and adolescent psychiatry, has conducted research regarding mental illness in children, and contributed to a forum on psychotropic medication use among children in foster care. Dr. 
Naylor is an associate professor at the University of Illinois at Chicago, School of Medicine, and the director of the Behavioral Health and Welfare Program, which was formed to address the mental-health needs of the most severely disturbed children in state care. He directs the Clinical Services in Psychopharmacology program, which provides an independent review of all psychotropic medication consent requests for foster children in Illinois. He is board certified in child and adolescent psychiatry, general psychiatry, and sleep-disorders medicine. The figure below contains the ratings assigned by experts for the quality and quantity of certain types of documentation contained in each child’s foster and medical files. Cases are organized by the criteria used to randomly and nonrandomly select them and include reviews of cases from each of our selected states—Florida, Massachusetts, Michigan, Oregon, and Texas. In addition to the contact named above, Matt Valenta, Assistant Director; Adam Anguiano; Erika Axelson; Scott Clayton; Marcus Corbin; Jennifer Costello; Dennis Fauber; Wilfred Holloway; Olivia Lopez; Flavio Martinez; Maria McMullen; Linda Miller; Sandra Moore; James Murphy; Joy Myers; Anna Maria Ortiz; April Van Cleef; Abby Volk; and Monique Williams made significant contributions to this work.
In December 2011, GAO reported that foster children in selected states were prescribed psychotropic medications at rates higher than nonfoster children in Medicaid in 2008. GAO was asked to further examine instances of foster children being prescribed psychotropic medications. For the five states included in GAO's 2011 report—Florida, Massachusetts, Michigan, Oregon, and Texas—this report: (1) assesses the extent to which documentation supported the usage of psychotropic medication for selected cases; and (2) describes states' policies related to psychotropic medication and assesses HHS actions since GAO's 2011 report. GAO contracted with two child psychiatrists who conduct mental-health research and work on issues related to foster care, to provide clinical evaluations of 24 cases that GAO selected from the population of foster children prescribed psychotropic drugs in GAO's 2011 report. The case selections were based, in part, on potential health risk indicators, and the findings are not generalizable. GAO obtained medical and child-welfare documentation spanning children's time in foster care, and redacted personally identifiable information prior to experts' review of cases. GAO also analyzed federal guidance and selected states' policies and interviewed federal and state officials. Two experts GAO contracted with reviewed foster and medical records for 24 cases in five selected states and found varying quality in the documentation supporting the use of psychotropic medications for children in foster care. Experts examined documentation related to several categories, such as (1) screening, assessment, and treatment planning; and (2) medication monitoring. Screening, Assessment, and Treatment Planning. Experts' evaluation of this category included whether medical pediatric exams and evidence-based therapies—which are interventions shown to produce measurable improvements—were provided as needed, according to records. 
Experts found in 22 of 24 cases that medical pediatric exams were mostly supported by documentation. For example, in one case with mostly supporting documentation, experts found that a child with a history of behavioral and emotional problems had records documenting a medical pediatric exam and thorough psychological assessments, with comprehensive discussions of diagnostic issues and medication rationale. With regard to evidence-based therapies, experts found that 3 of 15 children who may have benefitted from such therapies were mostly provided such services, while 11 of 15 cases were scored as partial in this category, and in 1 of 15 cases there was no documentation that evidence-based therapies were provided. Medication Monitoring. Experts' evaluation of this category included the appropriateness of medication dosage and the rationale for concurrent use of multiple medications, according to records. Experts found appropriateness of medication dosages was mostly supported by documentation in 13 of 24 cases and partially supported in the other 11 cases. The rationale for concurrent use of multiple medications was mostly supported in 5 of the 20 cases where multiple medications were used, but 14 of 20 cases included documentation that partially supported concurrent use, and 1 case did not include documentation to support concurrent use. For example, experts found for one case that a child was prescribed four psychotropic drugs concurrently, when nonmedication interventions could have been considered. The rationale for the actions taken was partially supported by documentation. All of the five selected states—two of which pay health care providers directly through fee-for-service, and three of which use or are transitioning to a third-party managed-care organization (MCO) for prescription-drug benefits to some extent—have policies intended to address oversight of psychotropic medications for foster children. 
According to state officials, all five of the states require medical examinations for children in foster care. Since GAO's 2011 report, the Department of Health and Human Services' (HHS) Administration for Children and Families (ACF) has, among other things, worked with other federal agencies to provide informational webinars and technical guidance for states to improve oversight of psychotropic medications, but this guidance does not address third-party MCOs administering medications. Officials from two of the three states relying on MCOs described limited state planning for MCOs to monitor psychotropic medications. Because there are indications MCO use is increasing, additional HHS guidance that helps states implement oversight strategies within the context of a managed-care environment could help ensure appropriate monitoring of psychotropic medications prescribed to children in foster care. GAO recommends that HHS issue guidance to states regarding oversight of psychotropic medications prescribed to children in foster care through MCOs. HHS agreed with GAO's recommendation.
Title II of the Social Security Act, as amended, establishes the Old-Age, Survivors, and Disability Insurance (OASDI) program, which is generally known as Social Security. It provides cash benefits to retired and disabled workers and their dependents and survivors. Workers become eligible when they have enough years of earnings covered under Social Security; they and their employers pay payroll taxes on those covered earnings. In 1999, about 96 percent of all U.S. jobs were covered, and over 40 million people received $386 billion in benefits, which averaged about $800 per month, or $9,600 per year. The benefit formula takes into account the lifetime history of earnings and replaces a higher percentage of earnings for lower earners than for higher earners. In contrast, the Supplemental Security Income (SSI) program provides income support to eligible aged and disabled persons regardless of their earnings history. Funds for SSI benefits come from general revenues, not payroll taxes. Persons with income or assets that exceed certain thresholds are not eligible for SSI. In 2001, the maximum federal SSI monthly benefit is $531 for an individual and $796 for a couple, and is reduced to reflect receipt of other income, including OASDI benefits. In December 1999, over 6.5 million people received federally-administered SSI benefits; of these, about 6.3 million received a federal benefit and about 2.4 million received an SSI state supplemental benefit. In December 1999, the average monthly federal benefit was $342; the average monthly federally-administered state supplement was $111. Medicare’s Hospital Insurance benefits are generally provided automatically and free of premiums to persons aged 65 or older who are eligible for Social Security or Railroad Retirement benefits. Similarly, individuals who have been entitled to Social Security or Railroad Retirement disability benefits for at least 24 months are entitled to such benefits. 
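The progressive tilt of the benefit formula can be sketched in a few lines of Python: higher replacement percentages apply to lower tiers of average monthly earnings, in the style of Social Security's primary insurance amount computation. The bend-point dollar amounts (`bend1`, `bend2`) and the tier percentages below are illustrative placeholders chosen only to show the structure, not the statutory parameters, which are set in law and indexed annually.

```python
def monthly_benefit(aime, bend1=500, bend2=3000):
    """Progressive benefit from average indexed monthly earnings (AIME).

    Applies 90%, 32%, and 15% to earnings below, between, and above two
    "bend points". All parameters here are illustrative, not current law.
    """
    benefit = 0.90 * min(aime, bend1)                 # highest rate on first tier
    if aime > bend1:
        benefit += 0.32 * (min(aime, bend2) - bend1)  # middle tier
    if aime > bend2:
        benefit += 0.15 * (aime - bend2)              # lowest rate on top tier
    return benefit

# A lower earner's benefit replaces a larger share of earnings:
low = monthly_benefit(1000)   # 450 + 160 = 610, about 61% of earnings
high = monthly_benefit(5000)  # 450 + 800 + 300 = 1550, about 31% of earnings
```

Because every additional dollar of earnings still raises the benefit, a formula of this shape preserves some individual equity while tilting replacement rates toward adequacy for lower earners.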
In addition, Supplementary Medical Insurance benefits are available on a voluntary basis with a monthly premium to cover doctors’ services, tests, and a variety of other medical services. In 1999, Medicare paid a total of $210 billion in benefits and covered nearly 40 million enrollees. According to current estimates, the Hospital Insurance trust fund will be exhausted in 2029. Medicare beneficiaries and others who have low incomes and limited resources may also receive help from the Medicaid program. In 1998, Medicaid made $142 billion in payments for medical services for 41 million recipients, of which about 4 million were aged 65 or older and 6.6 million were disabled. Average payments were about $10,200 for the aged and $9,100 for the disabled. Roughly $32 billion was paid for nursing facilities. According to the OASDI Trustees’ 2001 intermediate, or best-estimate, assumptions, Social Security’s cash flow is expected to turn negative in 2016. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2038. Social Security’s long-term financing shortfall stems primarily from the fact that people are living longer while having fewer children. As a result, the ratio of workers paying into the system to beneficiaries has been falling and is projected to decline from 3.3 today to about 2 by 2030. To address the program’s long-term financing shortfall, a variety of proposals have been offered. In choosing among proposals, we have suggested that policymakers should consider three basic criteria: the extent to which the proposal achieves sustainable solvency and how the proposal would affect the economy and the federal budget; the balance struck between the twin goals of individual equity (rates of return on individual contributions) and income adequacy (level and certainty of benefits); and how readily such changes could be implemented, administered, and explained to the public. 
Moreover, as we have said, reform proposals should be evaluated as packages that strike a balance among individual reform elements and important interactive effects. Overall evaluation of each proposal depends on the weight individual policymakers place on each criterion. From its inception, Social Security was intended to help reduce the extent of dependency on public assistance programs. As it has evolved, the program’s design has reflected that objective. Over time, that objective has come to be stated more broadly as helping ensure adequate incomes. While the Congress has never explicitly defined what constitutes an adequate level of benefits, it stated as early as 1939 that its objective was to “afford more adequate protection.” However, individual savings and other resources were also expected to play a significant role. In response to the grave economic problems of the Great Depression, President Franklin Roosevelt created the Committee on Economic Security in 1934 to study the economic insecurity that individuals faced and to make recommendations on how to address it. The committee’s recommendations became the basis of the Social Security Act of 1935, which created several programs to meet the needs of different population groups, including the aged. Two programs specifically addressed the aged population—Title I’s Old-Age Assistance (OAA) program and Title II’s Old-Age Insurance (OAI) program. OAA benefits, administered by the states with both state and federal funds, were intended to provide immediate cash income for millions of elderly persons without sufficient income for a decent subsistence. OAI benefits, administered by the federal government and funded by equal contributions from both employees and employers, were designed for younger workers to build up their rights to annuities in old age gradually. In effect, the contributions would purchase insurance to protect workers against lost wages when they became too old to work. 
In debating the creation of OAI, proponents made a variety of arguments in its favor and mentioned several objectives that it would serve. Of these, helping reduce dependency on public assistance was arguably the most fundamental. The Congress was clearly concerned that an increasing number of people were becoming dependent upon the public for their well-being; Social Security would eventually provide benefits that workers and their employers would pay for. Other objectives that were discussed in the debate included stimulating the economy by providing cash income that people would spend and opening up jobs for younger workers by freeing older workers to retire. Implicitly, the Congress designed Social Security benefits with a focus on replacing lost wages. The original formula computed benefits as a percentage of lifetime wages covered under the program in a way that favored lower earners, reflecting a special concern for their benefit levels. Social Security’s framers had targets in mind for benefit levels, but these targets did not appear to be based on any type of scientific research or data analysis. While the Congress made no assertions concerning whether the resulting benefits would be adequate, Senate and House reports stated respectively that under Social Security it would be possible to provide “more than reasonable subsistence” and “not merely subsistence but some of the comforts of life.” The House report also noted that the “benefits provided for workers who have been employed during substantially all their working life will probably be considerably larger than any Federal-aided State pensions could be.” As time passed, the Social Security program grew and evolved. Even before the first monthly benefits were paid in 1940, the Congress enacted amendments in 1939 to “afford more adequate protection to more of our people,” as House and Senate committee reports put it. 
Changes to benefit levels, coverage of earnings, and eligibility are especially relevant to the program’s adequacy goals. In addition, the introduction of new programs addressed specific needs, such as covering health care costs, promoting retirement saving, and promoting and protecting employer-sponsored pensions. Changes to monthly benefit levels came in different forms at different times. From 1939 until 1950, there were no changes to the benefit formula, and benefit levels, after adjusting for inflation, fell as a result. The 1948 Trustees’ Report expressed concern that inflation was diminishing the adequacy of Social Security benefits and presented a chart showing the decline in inflation-adjusted benefit levels. The 1950 amendments to the Social Security Act increased benefit levels substantially. Then, until 1972, periodic amendments made various ad hoc adjustments to benefit levels. Economic prosperity, along with actuarial methods that often left the Trust Funds with substantial surpluses, facilitated gradual growth of Social Security benefit levels through these ad hoc adjustments. In light of the steady growth of benefit levels, the 1972 amendments instituted automatic adjustments to constrain the growth of benefits as well as to ensure that they kept pace with inflation. Parameters of the benefit formula were automatically adjusted to reflect inflation, and the adjustments affected levels of benefits for both existing and new beneficiaries. However, wages grew more slowly and prices grew more quickly in the 1970s than they had historically. As a result, initial benefit levels grew faster than intended. 
The program’s first benefit reductions in 1977 attempted to correct for those unintended consequences of the 1972 amendments, and the resulting pattern of increasing and then declining benefit levels has become known as the “notch.” In the process, the benefit formula was redesigned so that initial benefits would generally increase with wages for each new group of beneficiaries. As individuals aged, annual cost-of-living adjustments would then increase benefits to keep pace with inflation. In effect, the new formula’s design would generally replace pre-retirement wages for similar individuals at a consistent rate across age groups. Implicitly, this episode illustrates the focus of the Congress on replacing wages and also identifies benefit levels that the Congress considered higher than intended. The only other significant benefit reductions came in 1983 when the Congress delayed cost-of-living adjustments primarily to address short-term financing problems and gradually increased the retirement age to address long-term financing problems. In addition, a variety of other types of program changes had effects on the extent to which the program helped ensure income adequacy. As amendments extended Social Security coverage to more jobs, more workers would eventually receive benefits. Initially, Social Security only covered the roughly 60 percent of workers in “commerce and industry” whose wages could most easily be taxed and tracked. As the program matured, coverage was gradually extended to new groups of workers, such as farm workers, domestic workers, self-employed workers, and some federal and state government workers. Today, Social Security covers about 96 percent of all U.S. jobs. Moreover, various amendments extended eligibility to more types of beneficiaries. Under the 1935 act, only some retired workers were to receive benefits. The 1939 amendments extended benefit eligibility to wives, widows, children, and dependent parents age 65 and older. 
The 1956 amendments extended eligibility to disabled workers, and the 1958 amendments extended eligibility to their dependents. In addition, the 1956 and 1961 amendments extended eligibility to women and men, respectively, at age 62 for retired workers, spouses, and widow(er)s, though worker and spouse benefits taken before the full retirement age were reduced to take account of the longer period over which they would be paid. Outside Social Security, other legislation also addressed income adequacy in various ways. Other benefit programs were created and changed to help ensure adequate incomes. In 1965, Medicare and Medicaid were created to alleviate the historically increasing strains on incomes from paying for health care. In 1972, Title XVI’s Supplemental Security Income replaced Title I’s Old-Age Assistance. Moreover, as both House and Senate reports noted in 1939, “individual savings and other resources must continue to be the chief reliance for security.” Over the years, the Congress has enacted legislation to promote employer-sponsored pensions and make them more secure. The Congress has also enacted legislation to promote individual retirement savings and encourage greater work-force participation by the aged and disabled. Various measures have been developed to examine different aspects of income adequacy, but no single measure offers a complete picture. A universally accepted definition of “income adequacy” does not exist; focusing on a single measure would implicitly endorse the concept of adequacy it measures while dismissing other concepts. Several examples of three broad types of measures illustrate the range of relevant measures. Each measure has characteristics that reflect different outlooks on the issue, including how it is calculated, how it accounts for different types of households, how it accounts for geographic variations, and how it is updated over time. 
In addition, for any type of measurement, what types of income are counted presents a key issue. The first type of measure includes variations of dependency rates. Dependency rates speak to Social Security’s fundamental objective of reducing dependence on public assistance programs, such as SSI or state and local general assistance programs. Some sources have reported dependency rates over the years that reflect a wide variety of sources of income support, while other sources report rates that only reflect federal income support programs. For example, as cited by congressional reports, the dependency rate of over 50 percent of the elderly in the 1930s reflected dependence on family members and private charities as well as public assistance. Moreover, public assistance includes a variety of federal, state, and local programs in addition to OAA and SSI. As a result of the extensive effort required to identify all sources of support, the most readily available annual dependency rate data reflects only dependency on OAA and SSI. Accounting for different types of households, geographic variations, and changes over time is not a critical concern in calculating the rates because the rates simply measure whether individuals or households receive public assistance, wherever they are, and whatever the eligibility criteria happen to be. However, the issues of geography and eligibility do raise questions about how to interpret the rates because benefit standards and eligibility provisions for public assistance programs have varied considerably by location and over time. The second type of measure includes rates that express the percentage of the population that has incomes below a given adequacy standard. For example, the poverty rate shows the percentage of individuals whose household income falls below the official poverty thresholds, which attempt to specify an income that would afford a minimal standard of living.
Different thresholds apply for different types and sizes of households but are the same for every location in the country. The official poverty thresholds were originally developed in 1963 and were built upon a government family food plan. Initially, the thresholds were updated to reflect the change in the cost of the food plan, but since 1969, they have been updated annually to reflect changes in the Consumer Price Index (CPI). In 1969, the Bureau of the Budget established the thresholds as the official definition of poverty for statistical use in all executive departments. The poverty threshold is only one of many adequacy standards that have been developed over the years. Moreover, various government programs and descriptive statistics use different percentages of the poverty threshold, for example, 125 or 150 percent of poverty, in determining benefits or eligibility. Some standards focus on determining the income level needed for a moderate subsistence, not merely a minimal one. The bases of the various standards include government-developed family budgets, expenditure data, income data, and even public opinion polls. The various adequacy standards have also used different approaches to capture household and geographic variations and to reflect changes over time. A variety of studies have evaluated the poverty threshold and explored possible changes to it. (See app. II.) The third type of measure, the replacement rate, speaks to Social Security’s objective of replacing lost wages, which is implicit in the program’s benefit formula. In contrast to other types of measures, it focuses on whether retirement income is sufficient to maintain the standard of living a given household enjoyed before retirement, not just meet some socially defined standard of adequacy.
Generally, it is calculated as the ratio of retirement income in the first year of retirement to household income in the year immediately preceding retirement. However, the actual experience of a given household could easily involve phased-in retirement or situations where one spouse retires while the other continues to work. Such irregularities present problems in interpreting replacement rates for actual households. Still, these rates can be useful for demonstrating the effects of program changes by focusing on illustrative workers with standardized work experiences. With replacement rates, geographic variations and updating the measure over time are not relevant issues because the household’s own experience is the basis for the measure regardless of location or year. All of these types of measures depend significantly on what types of income are counted. Some dependency rates look only at specific sources of public assistance, while others attempt to reflect all types of public assistance and some even try to reflect dependency on private charities and family members. In the case of poverty rates, one criticism has been that before-tax income is compared with thresholds based on after-tax income. In the case of replacement rates, researchers have noted that the measures of retirement and pre-retirement incomes should be consistent, especially with respect to before- or after-tax status. Finally, a wide range of noncash benefit programs, notably Medicare and Medicaid, also support the standards of living of their beneficiaries, though such benefits are not always reflected in measures of income adequacy. For example, replacement rates typically only consider cash income before and after retirement. Also, noncash benefits are not included as income in determining poverty status, and the living costs they support are not explicitly reflected in the poverty threshold against which income is compared.
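The replacement-rate ratio described above can be sketched in a few lines; the dollar figures below are hypothetical, chosen only for illustration.

```python
def replacement_rate(first_year_benefit, final_year_earnings):
    """Retirement income in the first year of retirement divided by
    household income in the year immediately preceding retirement."""
    return first_year_benefit / final_year_earnings

# Hypothetical steady earner: $30,000 in final-year earnings and a
# $12,000 first-year benefit (both figures are illustrative only).
print(f"{replacement_rate(12_000, 30_000):.0%}")  # 40%
```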
Considerable debate surrounds, in particular, how to treat medical care needs and resources in measuring adequacy. The adequacy of income for the elderly has generally increased since the 1930s, according to various measures. For example, dependence on public assistance has fallen, as have poverty rates for the elderly. The largest changes occurred in the first few decades of the program’s history; improvements in the past 20 years have slowed or even stopped, depending on the measure used. At the same time, Social Security has become the most important source of income for the elderly and disabled. Savings and other assets, employer-sponsored pensions, and earnings have also increased as sources of income. Still, relatively high poverty rates remain for subgroups that typically have low lifetime earnings, whether for old-age or disabled beneficiaries. The dependency rate for the elderly has fallen from almost 22 percent in 1940 to about 6 percent in 1999, using a rate that only reflects OAA or SSI benefits and does not include dependency on relatives and friends. Meanwhile, receipt of Social Security benefits among the elderly has grown significantly from less than 1 percent to over 90 percent. (See fig. 1.) A 1938 Social Security Bulletin reported a dependency rate of 65 percent, which included assistance to those who were totally or partially dependent on friends and relatives. Among the elderly, OASDI beneficiaries outnumbered OAA beneficiaries for the first time between 1950 and 1955 and, by 1960, a majority received Social Security benefits. Since 1980, roughly 90 percent of the elderly have received benefits. The rapid increase in the percentage receiving benefits and the eventual leveling off illustrate the natural maturing of the Social Security system. When monthly benefits were first paid in 1940, only those just turning 65 received benefits; older individuals were not eligible.
As each year passed, one additional age group was added to the beneficiary rolls, and more individuals from the earlier, ineligible age groups died. Poverty rates for the elderly have also declined, from 35 percent in 1959 to about 10 percent in 1999. (See fig. 2.) Since 1959, the elderly population has experienced the greatest reduction in poverty rates, compared with children 18 years and younger and adults aged 18 to 64. Examination of dependency and poverty rates for the elderly reveals that much of the improvement occurred during the early decades of the program. (See figs. 1 and 2.) The dependency rate declined at a much faster rate in the early years until about 1965 when declines slowed to a more level trend. Declines in the poverty rate for the elderly were most dramatic from 1959 to 1974 (more than 1 percentage point per year on average) and have continued since then, but at a slower rate. Over the same period that income adequacy has increased for the elderly, Social Security has become the single largest source of retirement income. As discussed below, program changes have increased the real value of benefits, and more and more elderly have received benefits as the program has matured. Other sources of retirement income have also grown. Periods of economic prosperity have contributed to the growth of all sources of retirement income. Social Security’s benefit levels have generally increased over the years. Replacement rates for illustrative workers with steady lifetime earnings histories show how changes in the benefit formula have affected benefit levels because using such workers holds other factors equal that might also have an effect. (See fig. 4.) For example, using illustrative workers filters out the effects of changes in the covered population or changes in work and retirement patterns. The declining replacement rates during the early years reflect that no benefit increases were enacted until 1950; fig.
4 also shows a sharp increase in replacement rates that coincides with the 1950 amendments. From 1950 until the early 1970s, replacement rates fluctuated noticeably more from year to year than over other periods; this pattern reflects the ad hoc nature of benefit increases over that period. The rapid increases in the 1970s and the rapid decline in the early 1980s reflect the effects of the notch and efforts to correct it. The smoother pattern that appears since that time reflects the automatic indexing of benefits as enacted in 1977. While there have been many changes in the program for many reasons at different points in time, the replacement rates experienced by today’s new retirees are notably consistent with the levels that Social Security’s designers envisioned for a fully mature system over 60 years ago. At the same time that benefit levels have increased, so has the share of elderly receiving benefits. This is also true of employer-sponsored pensions, earnings, and income from saved assets. Like figure 1, figure 5 shows that the percentage of the elderly receiving Social Security benefits has increased as dependence on public assistance has declined. Figure 5 also shows that Social Security provides income to more elderly households than any other source of retirement income, although other sources have also increased in importance. The percentage of the elderly who receive income from employer pensions increased from 5 percent in 1937 to 43 percent in 1998. The percentage receiving income from saved assets increased from about 15 percent to over 60 percent. The percentage receiving earned income increased from 1937 to 1962 but dropped from 1962 to 1998. In addition to sources of cash income, noncash benefit programs that did not exist in the 1930s now play a major role in supporting the standards of living of Social Security beneficiaries.
For example, Medicare is available to all Social Security beneficiaries aged 65 and older and all disabled beneficiaries after 24 months, among others. In addition to providing some income to nearly all elderly persons, Social Security is the largest source of income for most. In 1998, Social Security provided more than 50 percent of total income for 63 percent of aged beneficiaries, and it was the only source of income for about 18 percent of aged beneficiaries. Still, other sources of retirement income largely determine who will have the highest retirement incomes. Elderly households with the highest levels of income tend to have substantial income from employer pensions, earnings from employment, and saved assets, while those with the lowest incomes do not. For example, in 1996, 18 percent of all aged beneficiary units without earnings from employment were poor, compared with only 2 percent of those with earnings. Income adequacy has also improved substantially for specific subgroups of beneficiaries, such as the very old (85+ years of age), minorities, women, singles, widows, and the disabled. However, even with those improvements, significant levels of poverty remain. This fact largely reflects that lifetime earnings and access to other sources of retirement income tend to be lower among such groups. Social Security is a major component of retirement income for these sub-populations. For example, in 1998, when Social Security income is excluded from total income, 67 percent of unmarried women aged 85 and over had income below the poverty line. As figure 6 shows, poverty rates are higher than average for older age groups, for women, for minorities, and for those living alone. Those individuals in older age groups are less likely to have pension benefits or income from saved assets. Women also experience high rates of poverty compared with men. Of the 3.2 million aged persons who were poor in 1999, 2.2 million were women.
Minorities such as Hispanics and blacks experience higher levels of poverty than their white counterparts, as do unmarried women and women living alone. Poverty rates also vary by living situation. In 1999, elderly persons living alone were more likely to be poor (14 percent of men and 20 percent of women) than married couple families (6 percent). Of the 1.8 million elderly poor who lived alone in 1999, about 1.5 million were women. Aged African-American and Hispanic females living alone are most at risk of living in poverty. In 1999, almost 58 percent of aged Hispanic females living alone were in poverty, while 44 percent of aged African-American females were in poverty. Individuals who fall into more than one group with higher poverty rates are especially at risk of poverty. For example, in 1998, 56 percent of unmarried black females aged 85 and older were poor. Over 60 percent of unmarried Hispanic females aged 75 to 84 were poor. In contrast, 21 percent of white females aged 65 to 74 were poor, and poverty rates for the male counterparts for each category were either lower or based on too few cases to make an assessment. Social Security provides an important source of income for the disabled. In 1999, disabled workers made up 11 percent of all OASDI beneficiaries. As with the elderly, Social Security is a major component (38 percent) of family income for disabled worker families. Also, 48 percent of disabled worker families get half of their income or more from Social Security, while 6 percent have no other income. Unlike the elderly, however, earnings are an equally large source of family income (38 percent) for disabled worker families. At 19 percent, poverty rates are nearly twice as high for the disabled as for the elderly. Still, like the elderly, poverty rates for disabled workers are higher for women, minorities, unmarried persons, and those living alone.
Of all disabled beneficiaries, 23 percent of females were poor, compared with 15 percent of males. Fifteen percent of white disabled beneficiaries were poor, compared with 31 percent of black and 26 percent of Hispanic beneficiaries. Only 12 percent of the disabled who lived with relatives lived in poverty, compared with 35 percent who did not. Ten percent of disabled workers who were married lived in poverty, compared with 27 percent who were not. Disabled workers who were widowed, never married, or divorced experienced poverty rates of 30, 25, and 24 percent, respectively. The outlook for future Social Security benefit levels, and thus their effect on income adequacy, will generally depend on how the program’s long-term financing imbalance is addressed, as well as on the measures used. To illustrate the range of possible outcomes, we developed benchmark policy scenarios that either only increase taxes or only reduce benefits. Even without new benefit reductions, our analysis shows that replacement rates could decrease as the program’s full retirement age gradually continues to increase under current law, depending on the retirement decisions of future retirees. However, even with those reductions, our analysis shows that the adequacy of retirement income would improve markedly using one adequacy standard but change very little using another. Future benefit levels will also depend on the extent and nature of any benefit reductions. More progressive approaches to benefit reductions would result in greater adequacy for lower-earning beneficiaries. In turn, adequacy for various subgroups of beneficiaries would depend on the earnings levels typical of those subgroups. Moreover, the adequacy of total incomes will depend on how individuals adjust their retirement planning in reaction to any program changes and on what happens to other sources of cash and noncash income. In particular, Medicare also faces serious long-term financing problems.
However, our analysis does not reflect interactions with other income sources but focuses on the effects of changes in Social Security benefits, holding all else equal. To illustrate a full range of outcomes that might result from alternative approaches to restoring long-term solvency, we developed hypothetical benchmark policy scenarios that would restore solvency over the next 75 years either by only increasing payroll taxes or by only reducing benefits. Our tax-increase-only benchmark simulates “promised benefits,” or those benefits defined under current law, while our benefit-reduction-only benchmarks simulate “funded benefits,” or those benefits for which currently scheduled revenues are projected to be sufficient. These benchmarks used the program’s current benefit structure and the 2001 OASDI Trustees’ intermediate, or best-estimate, assumptions. The benefit reductions are phased in between 2005 and 2035 to strike a balance between the size of the incremental reductions each year and the size of the ultimate reduction. At our request, SSA actuaries scored our benchmark policies and determined the parameters for each that would achieve 75-year solvency. Table 1 summarizes our benchmark policy scenarios. For our benefit reduction scenarios, the actuaries determined these parameters assuming that disabled and survivor benefits would be reduced on the same basis as retired worker and dependent benefits. If disabled and survivor benefits were not reduced at all, reductions in other benefits would be deeper than shown in this analysis. (See app. III for more on our benchmark policy scenarios.) We then modeled future benefit levels with these benchmarks and calculated a variety of measures to look at income adequacy. However, we did not examine any measures of individual equity, such as rates of return, which any of our benchmark policies would also affect. 
We examined adequacy measures for illustrative workers with different steady lifetime earnings histories, for the entire beneficiary population, and also for different subgroups. To look at representative samples for the beneficiary population and subgroups, we used both SSA’s MINT model and the Policy Simulation Group’s GEMINI model. The MINT model allows us to look at total retirement income in 2020 across different age groups and races, while the GEMINI model allows us to focus on specific birth cohorts reaching age 62 in various years, which we selected to look at long-term trends. As with any such simulation models, these models simulate income using a combination of historical data from small samples of the population and a variety of assumptions about future trends. At their best, such models can only provide very rough estimates of future incomes. Still, they can provide valuable comparisons over time and across alternative policy scenarios, holding all else equal. Thus, any analysis should focus on such comparisons rather than on the literal values of the estimates. (See app. IV for more on our modeling analyses.) Our tax-increase-only benchmark illustrates that monthly benefit levels could decrease even without new benefit reductions, as the program’s full retirement age increases under current law, depending on the retirement decisions of future retirees. In turn, replacement rates would decrease by the same proportion because they are defined as the annual benefit amount divided by the last year of earnings. Figure 7 shows future replacement rates under our tax-increase-only benchmark for a range of illustrative retired workers. The full retirement age is the age at which full benefits are paid and historically has been age 65. Under current law, the full retirement age is gradually increasing, beginning with retirees born in 1938, and will reach 67 for those born in 1960 or later.
For workers who retire at a given age, an increase in the full retirement age reduces monthly benefits because the actuarial reduction for early retirement increases. For example, for workers who will face a full retirement age of 67 and retire early at 65, monthly benefits will be reduced actuarially by 13.3 percent while their benefits would not have been reduced at all if the full retirement age had been kept at 65. Moreover, the 13.3 percent reduction applies to such workers equally at all earnings levels. As a result, increasing the full retirement age from 65 to 67 implies that replacement rates for illustrative low earners would decline from 57 to 49 percent while for illustrative high earners they would decline from 35 to 30 percent. Therefore, under such a proportional reduction, lower earners face a larger percentage-point reduction than higher earners. Still, the effect of such reductions would be diminished to the extent that workers choose to retire later than today’s workers do. While replacement rate analysis suggests that income adequacy will decline in the future, other ways of assessing adequacy suggest that it will change little or even improve dramatically. The GEMINI model allows us to illustrate this point best by showing changes over long periods of time. Using our tax-increase-only benchmark policy, we calculated the percentage of retired workers with Social Security benefits that fall below two different adequacy standards—the official poverty threshold and one-half median income. The official poverty threshold is adjusted each year to reflect inflation. In contrast, our simulation assumes that the one-half median income threshold will grow at the same rate as Social Security’s Average Wage Index, since wages are the largest component of family income.
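The arithmetic behind these replacement-rate figures can be checked in a short sketch; the 13.3 percent reduction and the 57 and 35 percent starting rates are the figures cited above, while the function itself is only illustrative.

```python
# Effect of raising the full retirement age from 65 to 67 on workers who
# still retire at 65: a 13.3 percent actuarial reduction that applies
# equally at all earnings levels, as described in the text.
REDUCTION = 0.133

def reduced_replacement_rate(rate_at_full_benefits):
    """Apply the proportional early-retirement reduction to a replacement rate."""
    return rate_at_full_benefits * (1 - REDUCTION)

low = reduced_replacement_rate(0.57)   # illustrative low earner, formerly 57%
high = reduced_replacement_rate(0.35)  # illustrative high earner, formerly 35%
print(f"low earner:  57% -> {low:.0%}")   # about 49%
print(f"high earner: 35% -> {high:.0%}")  # about 30%
# The same proportional cut costs the low earner roughly 8 percentage
# points but the high earner only about 5, as the text notes.
```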
Figure 8 shows that the percentage of retired workers with benefits below the poverty threshold drops dramatically over time while the percentage with benefits below one-half median income changes very little. The difference in these percentage measures simply reflects differences in the assumptions underlying each adequacy standard. Since initial Social Security benefits are designed to increase with wages, and wages are assumed to grow faster than prices, benefit levels will grow faster than an adequacy standard that grows only by prices. In contrast, benefits will grow at roughly the same rate as a standard that grows by wages. In a fashion similar to poverty rates, dependency rates would also decline relatively rapidly because they focus on SSI benefit standards that increase with prices, not wages. Future benefit levels and income adequacy will also depend considerably on how any benefit reductions are made. Figure 9 shows that the percentage of retired workers with Social Security benefits below the official poverty threshold would be greater under a proportional benefit reduction approach than under a progressive benefit reduction approach. The difference between the two approaches grows slightly over time. The proportional benefit-reduction-only benchmark would reduce benefits by the same proportion for all beneficiaries born in the same year. The progressive benefit-reduction-only benchmark would reduce benefits by a smaller proportion for lower earners and a higher proportion for higher earners. Both benefit-reduction benchmark policies would be phased in gradually from 2005 to 2035. The tax-increase-only (no benefit reduction) benchmark estimates are shown for reference. Also, the figure shows that the percentage of workers with benefits below the poverty threshold would be slightly higher in our simulations for those retiring in 2032 than in 2017.
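The divergence between a price-indexed and a wage-indexed adequacy standard can be illustrated with a toy projection; the growth rates and starting values below are illustrative assumptions, not the Trustees' projections.

```python
# Toy projection: a benefit that is wage-indexed at award, compared against
# a poverty-style threshold indexed to prices and a half-median-income-style
# threshold indexed to wages. All parameters are hypothetical.
WAGE_GROWTH = 0.04    # assumed nominal wage growth
PRICE_GROWTH = 0.025  # assumed inflation

benefit = 1.0          # initial benefit for each new cohort, wage-indexed
price_threshold = 0.9  # price-indexed adequacy standard
wage_threshold = 0.9   # wage-indexed adequacy standard

for _ in range(40):    # forty years of growth
    benefit *= 1 + WAGE_GROWTH
    price_threshold *= 1 + PRICE_GROWTH
    wage_threshold *= 1 + WAGE_GROWTH

# The benefit pulls far ahead of the price-indexed standard...
print(f"benefit / price-indexed threshold: {benefit / price_threshold:.2f}")
# ...but its position against the wage-indexed standard never changes.
print(f"benefit / wage-indexed threshold:  {benefit / wage_threshold:.2f}")
```

This mirrors why the share of workers below the poverty threshold falls over time while the share below one-half median income barely moves.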
The higher percentages for those retiring in 2032 primarily reflect that the benefit reductions in our benchmarks are more fully phased in for the 2032 group. The declines in the percentages from the 2032 to 2047 retirement years largely reflect the effects of the disparity between growth in wages and prices, as illustrated earlier; since the benefit reductions are fully phased in by 2035, the last two age groups experience nearly the same benefit reductions. The differences in adequacy estimates across benefit-reduction scenarios reflect how different benefit reduction approaches will have different effects on workers with different earnings. Lower earners have benefits that are closer to the poverty threshold than higher earners, so a progressive approach to reducing benefits would decrease the chances that lower earners’ benefits fall below that threshold. Figure 10 illustrates how different benefit reduction approaches would produce benefit reductions that would vary by benefit levels. The proportional benefit-reduction benchmark results in identical percentage benefit reductions, while two alternative, progressive benefit-reduction benchmarks would result in smaller reductions for lower earners and larger reductions for higher earners. The so-called “limited-proportional” benefit-reduction benchmark would be even more progressive than the progressive benefit-reduction benchmark because a portion of benefits below a certain level is protected from any reductions while reductions above that level are proportional. The 1985 birth cohort will be subject to the largest benefit reductions of the four cohorts we simulated; therefore, it best illustrates the potential disparity in benefit reductions by benefit level.
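A stylized comparison of the two reduction approaches follows; the 25 percent proportional cut, the $600 protected amount, and the 37.5 percent cut above it are hypothetical parameters, not the scored benchmark values.

```python
# Stylized versions of a proportional and a limited-proportional benefit
# reduction. All dollar amounts and percentages are hypothetical.
def proportional_cut(benefit):
    """Same percentage reduction at every benefit level (hypothetical 25% cut)."""
    return benefit * 0.75

def limited_proportional_cut(benefit, protected=600):
    """Benefits up to a protected amount are untouched; the portion above
    it is cut proportionally (hypothetical 37.5 percent cut)."""
    return min(benefit, protected) + max(benefit - protected, 0) * 0.625

for monthly in (800, 1_500, 2_500):  # hypothetical low, medium, high benefits
    print(f"${monthly}: proportional -> ${proportional_cut(monthly):,.2f}, "
          f"limited-proportional -> ${limited_proportional_cut(monthly):,.2f}")
# The limited-proportional cut takes about 9 percent from the $800 benefit
# but 28.5 percent from the $2,500 benefit, versus a flat 25 percent under
# the proportional approach.
```

Because the effective percentage cut rises with the benefit level, the limited-proportional design leaves lower earners farther above the poverty threshold than the proportional design does.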
The different benefit reduction approaches would have different effects on various subgroups of beneficiaries because of the differences in the lifetime earnings levels that are typical of those groups. Women, minorities, and never married individuals all tend to have lower lifetime earnings than men, whites, and married beneficiaries, respectively. Figure 11 shows how future poverty rates mirror these patterns. Moreover, it illustrates again how more progressive benefit-reduction approaches would result in lower poverty rates for these groups in particular. In this case, we present our analysis using SSA’s MINT model because it allows us to examine different races. However, these estimates for the year 2020 reflect benefit reductions that are not fully phased in as well as benefits for beneficiaries from many birth cohorts who will be subject to various levels of the phased-in benefit reductions. For later beneficiaries with fully phased-in benefit reductions, poverty rates could be higher. Very old beneficiaries are another subgroup that has tended to be at higher than average risk of poverty. Several factors relating to multiple sources of income have contributed to this risk, and many of these factors can be expected to have similar effects in the future. As people get older, they may spend down their retirement savings, especially as health and long-term-care costs mount up, and they are less likely to work. Also, they are more likely to be widowed. For a couple receiving one retired worker benefit and one spouse benefit, the household’s Social Security benefits would fall by 33 percent when either is widowed. For a couple in which both spouses receive retired worker benefits on their own earnings records, household benefits could fall by as much as 50 percent when either is widowed.
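The widowhood arithmetic above follows from a simple rule of thumb, assuming the survivor keeps the larger of the couple's two benefits; this is a simplification of the actual survivor-benefit rules.

```python
def household_drop(worker_benefit, second_benefit):
    """Fractional drop in combined household Social Security benefits at
    widowhood, assuming the survivor keeps the larger of the two benefits."""
    before = worker_benefit + second_benefit
    after = max(worker_benefit, second_benefit)
    return 1 - after / before

# One retired-worker benefit plus a spouse benefit of half that amount:
print(f"{household_drop(1_000, 500):.0%}")    # 33%
# Two equal retired-worker benefits, each earned on the spouse's own record:
print(f"{household_drop(1_000, 1_000):.0%}")  # 50%
```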
In addition, widows might lose employer-sponsored pension benefits, which would happen if their spouse elected a self-only annuity instead of a joint-and-survivor annuity. Also, while Social Security benefits increase each year to reflect inflation, not all employer-sponsored pension benefits do. Of these various factors, all could affect future retirees, though employer pensions have been changing in design. Based on our review, reducing dependency on public assistance appears to have been the primary objective of the Social Security program. While many have noted the importance that Social Security plays in helping ensure adequate incomes for its beneficiaries, the Congress has never explicitly defined the term “adequacy.” In the end, setting benefit levels to address the adequacy issue will always be, as it has always been, a policy decision for the Congress. Still, income adequacy is only one of several criteria to consider in an overall evaluation of comprehensive Social Security reform proposals. Specifically, income adequacy should be balanced against individual equity, or the extent to which benefits are proportional to contributions. Other criteria include the extent to which proposals achieve sustainable solvency, how they would affect the economy and the federal budget, and how readily changes could be implemented, administered, and explained to the public. Current demographic trends confront us with a reality that cannot be ignored. If people will be living longer, then maintaining today’s levels of monthly benefits for all beneficiaries would require either more revenues, from whatever sources, or that workers wait longer to collect them. The other alternative of reducing monthly benefits would tend to diminish income adequacy for beneficiaries. However, our analysis shows that more progressive approaches to reducing monthly benefits would have a smaller effect on poverty rates, for example, than less progressive approaches.
Also, reductions that protect benefits for survivors, disabled workers, and the very old would help minimize reductions to income adequacy, though they would place other beneficiaries at greater risk of poverty. More broadly, the choices the Congress will make to restore Social Security’s long-term solvency and sustainability will critically determine the distributional effects of the program, both within and across generations. In turn, those distributional effects will determine how well Social Security continues to help ensure income adequacy across the population.

As our analysis has also shown, the effects of some reform options parallel those of benefit reductions made through the benefit formula, and those parallels provide insights into the distributional effects of those reform options. For example, if workers were to retire at a given age, an increase in Social Security’s full retirement age would result in a reduction in monthly benefits; moreover, that benefit reduction would be a proportional, not a progressive, reduction. Another example would be indexing the benefit formula to prices instead of wages. Such a revision would also be a proportional reduction, in effect, because all earnings levels would be treated the same under such an approach. In addition, holding all else equal, such an approach would implicitly result in future poverty rates that would be close to today’s rates instead of falling as they would with the current benefit formula.

Therefore, in finding ways to restore Social Security’s long-term solvency and sustainability, the Congress will address a key question, whether explicitly or implicitly: What purpose does it want Social Security to serve in the future? Is it to minimize the need for means-tested public assistance programs; to minimize poverty (and if so, using what standard of poverty); to replace pre-retirement earnings; to maintain a certain standard of living; or to preserve purchasing power?
The answer to this question will help identify which measures of income adequacy are most relevant to examine. It will also help focus how options for reform should be shaped and evaluated. Our analysis has illustrated how the future outlook depends on both the measures used and the shape of reform. While the Congress must ultimately define Social Security’s purpose, our analysis provides tools that inform its deliberations.

Still, changes to benefit levels would typically only be part of a larger reform package, and Social Security is only one part of a much larger picture. As we have said in the past, reform proposals should be evaluated as packages that strike a balance among their component parts. Furthermore, Social Security is only one source of income and only one of several programs that help support the standard of living of our retired and disabled populations. All sources of income and all of these programs should be considered together in confronting the demographic challenges we face. For example, changes to Social Security could potentially affect SSI benefits, employer-sponsored pensions, retirement savings, and the work and retirement patterns of older workers. Such interactions should actively be considered. Moreover, several programs provide noncash benefits that also play a major role in sustaining standards of living for their beneficiaries. Importantly, examining the adequacy of cash income alone would ignore the major role of noncash benefits and the needs they help support. This is especially critical in the case of Medicare beneficiaries. Considering these important noncash benefits in any adequacy analysis could have a very material effect on both the absolute and relative positions of senior citizens as compared to other groups of Americans.

We provided a draft of this report to SSA. SSA provided a number of technical comments, which we have incorporated where appropriate.
We are sending copies of this report to the Commissioner of the Social Security Administration and other interested parties. We will also make copies available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7215. Key contributors are listed in appendix V.

Several methods have been used to measure the level of adequate income—what it costs to live. We identified 11 methods that have been used to develop measures against which income from Social Security benefits might be compared for determining adequacy. These methods include the current poverty thresholds, experimental poverty thresholds, family budgets, family expenditures, material hardship, median family income, one-half median family income, per capita personal income, public assistance, public opinion, and earnings replacement rates. These methods vary along a number of dimensions, including their purpose, features of their construction, the years for which they measure adequacy, and frequency of publication. In some instances where the method has been used to develop more than one measure, we selected one of the measures as an example of the method and used it for the description of the method.

The methods also vary in whether they are absolute or relative. Absolute measures are derived from a fixed bundle of goods and services that does not vary in mix, quantity, or quality regardless of when or where it is applied. For example, an absolute measure would be one based on a list of goods and services that are judged to be necessary for a family to meet its basic needs. The list of goods and services would need to be changed periodically to reflect changes in living standards over time. In contrast, relative measures change with current income or consumption. Measurement experts who have served on various panels to study the issue have not agreed on which is more appropriate to determine how much it costs to live.
Table 2 provides an overview of the 11 methods with regard to several dimensions. Table 2 is followed by a fuller description of each method, with particular attention to how each is constructed, its uses, and issues that panels and experts have raised regarding the measure.

The poverty thresholds are a measure that attempts to specify the minimum money income that could support an average family of a given composition at the lowest level of living consistent with a country’s prevailing standards of living. The poverty thresholds are an absolute measure whose initial purpose was to measure year-to-year changes in the number and characteristics of poor people. The poverty thresholds, as originally published by the Social Security Administration (SSA) in 1963, represent a minimal amount of funds a family needed to rear its children, what the author termed “crude indexes” of poverty. Later the crude indexes were extended to families without children. If a family’s total money income is less than the poverty threshold for that family’s composition, which is based on family size, age of the family’s head, and number of children under 18 years old, then that family, and every individual in it, is considered poor.

In 1965, the Office of Economic Opportunity adopted the thresholds for statistical and program planning purposes. The Bureau of the Budget established the thresholds as the official definition of poverty for statistical use in all executive departments in 1969. This definition was reconfirmed in Statistical Policy Directive No. 14, after the bureau became the Office of Management and Budget. Poverty thresholds are used mainly for statistical purposes, such as estimating the number of Americans in poverty each year. This official measure of poverty is used to measure the nation’s progress in reducing the extent of poverty and is used to allocate funds and to identify target populations for various public assistance programs.
Policymakers use trends in poverty rates—the proportion of persons whose family income is below the poverty threshold—over time and across population groups to make judgments about particular policies. Poverty statistics are also used to evaluate government programs for low-income persons and the effects of policies on the distribution of income.

SSA’s 1963 publication based the poverty thresholds on information from a 1955 food consumption survey and the 1964 costs of a food plan. The author determined from the U.S. Department of Agriculture’s (USDA) 1955 Household Food Consumption Survey that families of three or more people spent approximately one-third of their after-tax money income on food. The author then tripled the 1964 costs of USDA’s economy food plan for various compositions of families. Different procedures were used to calculate poverty thresholds for two-person families and single individuals. Separate thresholds were estimated for single individuals and 2-person families headed by an individual 65 years and over, as well as by an individual under 65 years old. There were separate sets of thresholds for farm and nonfarm families, as well as thresholds by sex of the head of the family. The thresholds that were based on the sex of the family’s head and on farm residence were eliminated in 1981. There is no geographic variation of the poverty thresholds. Although there were regional costs for the USDA food plan, they were not used to account for regional variation when the poverty thresholds were developed.

Two methods have been used to update the original poverty thresholds. Initially, the change in the cost of USDA’s economy food plan was used to annually update the poverty thresholds. In 1969, the method of updating the thresholds was changed to price changes of all items in the Consumer Price Index (CPI). The poverty thresholds are increased each year by the same percentage as the annual average CPI for all Urban Consumers (CPI-U).
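The construction and updating rules just described amount to simple arithmetic, sketched below. The food-plan cost and CPI index values are hypothetical illustrations, not actual historical figures.

```python
FOOD_MULTIPLIER = 3  # reciprocal of the roughly one-third food share found in the 1955 survey

def original_threshold(economy_food_plan_cost: float) -> float:
    """Orshansky method: triple the annual cost of USDA's economy food plan."""
    return economy_food_plan_cost * FOOD_MULTIPLIER

def update_threshold(prior_threshold: float, cpi_u_prior: float, cpi_u_current: float) -> float:
    """Post-1969 method: raise the threshold by the percentage change in the CPI-U."""
    return prior_threshold * (cpi_u_current / cpi_u_prior)

# Hypothetical: a $1,033 annual food-plan cost implies a $3,099 threshold.
print(original_threshold(1033.0))  # 3099.0

# Hypothetical: a 5 percent rise in the CPI-U raises that threshold by 5 percent.
print(update_threshold(3099.0, 100.0, 105.0))  # ~3253.95
```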
The Census Bureau annually updates and publishes the poverty thresholds. Numerous alternative poverty thresholds have been proposed since the official adoption of the measure developed in 1963. One such alternative is the experimental thresholds recommended by a Committee on National Statistics of the National Academy of Sciences (NAS) study panel in 1995. The NAS poverty threshold is a relative measure whose stated purpose was “to lead to an initial threshold that is reasonable for purposes of deriving poverty statistics.” The NAS poverty thresholds have been used solely for research. Census published a report in 1999 to provide information for evaluating the implications of many of the NAS panel’s recommendations for a new poverty measure. To do so, Census reported how estimated levels of poverty for 1990 through 1997 differed from official levels as specific recommendations of the NAS panel were implemented individually and how estimated trends differed when many recommendations were implemented simultaneously.

The NAS poverty thresholds represent a dollar amount for basic goods and services—food, clothing, shelter (including utilities)—and a small additional amount to allow for other common, everyday needs (e.g., household supplies, personal care, and nonwork-related transportation). First, to develop a threshold for a reference family, a specified percentage of median annual expenditures from Consumer Expenditure Survey (CEX) data is used to determine an amount for food, clothing, and shelter expenditures. The reference family consists of two adults and two children. The median annual expenditure amount is next increased by a modest additional amount to allow for other necessities. An equivalence scale is then applied to the reference family threshold to adjust for families of different sizes and composition. Further adjustments are made to account for geographic differences in the cost of housing.
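The sequence of adjustments just described can be sketched as follows. All numeric parameters here, including the percentage of the median, the other-needs multiplier, the equivalence-scale weights, and the housing index, are illustrative assumptions, not the NAS panel's actual figures.

```python
def nas_threshold(median_fcsu_expenditure: float,
                  pct_of_median: float,
                  other_needs_multiplier: float,
                  adults: int, children: int,
                  housing_index: float) -> float:
    """Sketch of the NAS experimental poverty threshold construction.

    1. Take a specified percentage of median food/clothing/shelter
       (plus utilities) spending for the two-adult, two-child reference family.
    2. Add a modest amount for other common needs via a multiplier.
    3. Apply an equivalence scale for family size and composition
       (children weighted as a fraction of an adult, with economies of scale).
    4. Adjust for geographic differences in housing costs.
    """
    reference = median_fcsu_expenditure * pct_of_median * other_needs_multiplier
    # Illustrative equivalence scale of the general NAS form
    # (adults + 0.7 * children) ** 0.7, relative to the reference family.
    scale = (adults + 0.7 * children) ** 0.7 / (2 + 0.7 * 2) ** 0.7
    return reference * scale * housing_index

# The reference family in an average-cost area reproduces the base threshold.
base = nas_threshold(15000, 0.80, 1.2, adults=2, children=2, housing_index=1.0)
print(round(base))  # 14400
```

A single adult produces a smaller threshold through the equivalence scale, and a high-cost area a larger one through the housing index.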
The NAS panel developed an index of 41 geographic areas that is presented by area and population size. These index values are applied to the thresholds to adjust for differences in the cost of housing. The NAS panel also recommended a method for updating the initial threshold that would reflect changes in nominal growth in food, clothing, and shelter expenditures. To do so, 3 years of the most recent data from the CEX would be used to determine the threshold for the reference family. The CPI-U would be used to update these expenditure data to the current period. Then, the procedures as outlined above are followed to estimate thresholds for families of other sizes by geographic areas. The NAS panel said that its method of updating the thresholds represented a middle ground between an absolute approach of simply updating the thresholds for price changes, which ignores changes in living standards over time, and a relative approach of updating the thresholds for changes in total consumption.

One of the NAS panel members dissented because the major recommendations and conclusions for changing the measurement of poverty were the “outcome of highly subjective judgments” and were not based on scientific evidence. In his dissent, the member said that there was no scientific basis to support the use of food, clothing, and shelter expenditures upon which to develop the thresholds. He also objected to using the median level of expenditures of these items rather than the CPI to update the poverty thresholds; he said to do so would change the measure from an absolute to a relative measure. He had two other objections: that the NAS panel did not treat medical care as a basic service and that the panel suggested that the poverty line fell within a range of values, a position that he said did not have the scientific community’s consensus.

Family budgets are an income adequacy measure that dates back to the 19th century.
The measure described in this appendix is for the city worker’s family budget, whose origins closely relate to the budgets that the Works Progress Administration constructed in 1935 for an urban family of four. The city worker’s family budget represents the estimated cost of a list of goods and services that the 4-person family would need to live at a designated level of well-being. The level designated in the city worker’s family budget for 1946 was intended to represent a modest but adequate standard of living. The same level of well-being was used in the interim city worker’s family budget with 1959 costs. In the mid-1960s, two levels of well-being were added—lower and higher—and the name of the modest but adequate level was changed to intermediate. Also, the name of the city worker’s family budget was changed to family budgets.

The city worker’s family budget was an absolute measure that was used to determine the adequacy of income—what it costs to live—for a city worker’s family, which was defined as a husband, aged 38 and employed full time; a wife who did not work outside the home; a boy aged 13; and a girl aged 8. The city worker’s family budgets were used as benchmarks in determining individual family needs, establishing interarea differences in living costs, and documenting changes in living standards over time. The budget cost levels were used by federal, state, and local governments as thresholds for eligibility in administrative programs. The city worker’s family budgets were widely used in employment compensation determinations, such as wage negotiations and geographic wage adjustments. Since the costs of the budgets were city specific, the budgets were also used to construct indexes of living costs. These indexes showed interarea variations in living costs, and individuals and financial planners used them to examine interarea cost-of-living differences. The budgets were also used in private and public legal actions.
Researchers continue to construct family budgets to examine the adequacy of Social Security benefits, as well as the adequacy of wages paid to single parents. In a number of countries, budget standards are used as reference points in devising or monitoring income maintenance programs. For example, the Commonwealth Department of Social Security commissioned the development of a set of budget standards for Australia. Published in 1998, the budget standards are expected to inform future Australian governments in relation to adequacy standards.

The measure involved formulating a budget that listed the items and quantities comprising the chosen level of well-being, pricing these items, and computing the aggregate annual cost of the budget. A group of experts developed a list of goods and services using scientific standards of requirements, such as the recommendations of the Committee on Nutrition of the National Research Council for the food segment of the budget. Where standards had not been developed for the various segments of the budget, records of family expenditures by 4-person families were used. These data were studied to determine the level of purchases at which families began to purchase higher quality items within an expenditure category or started to save their income.

The Bureau of Labor Statistics (BLS) published the costs of the city worker’s family budget for 34 cities for 1946, 1947, 1949, 1950, and 1951. The cost of the interim city worker’s family budget was published for 20 cities for 1959. The family budgets at three cost levels were published for 1967 through 1981 (the cost of the intermediate level was also published for 1966) for urban United States, 40 individual metropolitan areas, and 4 nonmetropolitan regions. The early city worker’s family budgets could not be used for families other than those consisting of a husband, wife, and two young children.
In 1960, BLS published equivalence scales that could be used to adjust the costs by family composition. BLS updated the equivalence scales in 1968. The city worker’s family budgets are no longer published. With the release of the 1981 budget costs, BLS terminated the family budgets program because funding was not available for a revision. In addition to re-specifying the lists of items in a revision, two methods were used to update the city worker’s family budget costs. The first method recollected price data for the individual items on the budget list and then aggregated those costs into an annual amount. The other method, which was used to estimate the 1949 through 1951 and the 1969 through 1981 costs, was to use the CPI’s component index numbers to update the costs for the segments of the budgets. Revisions of the budgets occurred in 1959 and 1966, when the lists of goods and services were re-specified by experts to account for changes in the modest but adequate standard of living.

In response to a congressional mandate and in recognition that the family budgets needed to be improved, in the 1970s BLS contracted with the Wisconsin Institute for Research on Poverty to recommend revisions in the Family Budgets program. In 1980, the Expert Committee on Family Budget Revisions recommended that the methodology be changed and that scientific standards no longer be used. The committee asserted that a scientific basis does not exist by which to develop commodity-based lists for the budgets. One of the reasons the Expert Committee on Family Budget Revisions recommended a change in methodology was that it found that large elements of relativity and subjective judgment entered into the development of the lists of goods and services, including those for which scientific standards were used. The committee recommended that actual overall levels of expenditures be used to measure adequacy.
Specifically, it recommended that the median expenditure of two-parent families with two children be used to develop the “prevailing family standard” budget and that three other standard budgets be developed as proportions of the prevailing family standard budget amount. In a dissent, a committee member said that a measure of well-being that uses an average (or median) of total family expenditures, which is obtained from a consumer expenditure survey, does not take into consideration the specifics of what that amount will buy or whether the actual quantities of goods and services available within the amount are enough to supply what is needed.

Family expenditures are the averages of consumer purchases that are recorded in survey data arrayed by family characteristics, such as age of reference person. Family expenditures is a relative measure whose purpose is to describe consumer spending and to determine cost-of-living indexes. The basic premise is that the living standards of society can be measured with current consumption expenditure levels and patterns. The early family expenditure surveys, which were conducted in the late 19th century, were concerned with the cost of living of the “working man” and his family, that is, the amount of money a family needed to live. Family expenditure data are used by government and private agencies to study the welfare of particular segments of the population. The data are used by economic policymakers interested in the effects of policy changes on various groups. CEX data are used to estimate aggregate family expenditures. There are three basic methods to measure family expenditures: current consumption, used in the CEX before 1980; total expenditures, used in the CEX since 1980; and current outlays, an alternative measure used to approximate out-of-pocket expenditures, which has also been used in the CEX since 1980.
The current consumption expenditures method includes the transaction costs of goods and services, excise and sales taxes, the price of durables (e.g., vehicles) at the time of purchase, and home mortgage interest payments. It excludes the payment of principal on loans, gifts to persons outside the family, personal insurance, and retirement and pension payments. The total expenditures method is the same as the current consumption expenditures method, except it includes gifts, personal insurance, and retirement and pension payments. The current outlays method differs from total expenditures in that payments of principal for home mortgages and financed vehicles are included and the purchase price of vehicles is excluded.

Data from the continuing CEX have been collected quarterly on an ongoing basis since 1980. Prior to the continuing CEX, the survey was conducted periodically, about once every 10 years. BLS annually publishes average annual expenditures from the continuing CEX for consumer units. Expenditure data are published by type of area (urban and rural) and for four regions of residence. According to BLS, the published expenditure amounts are averages for consumer units with specified characteristics, regardless of whether or not a particular consumer unit purchased an item in the expenditure category during data collection. Therefore, the average expenditure for an item may be considerably lower than the average for those who actually purchased the item. Also, the average may differ from those who purchased the item as a result of frequency of purchase or the characteristics of the consumer units that purchased the item. For example, since all consumer units do not purchase a new vehicle every year, the average expenditure for new vehicles will be lower than the average for those who actually purchased a new vehicle because the average expenditure includes those who did not purchase a new vehicle that year.
Even among those who purchase the item, consumer units may have dissimilar demographic characteristics.

Material hardship measures identify individuals who do not consume minimal levels of goods and services, such as food, housing, clothing, and medical care. The material hardship measure presented here is one developed in the 1980s by Susan Mayer and Christopher Jencks in their study of Chicago residents. This material hardship measure focused on the following hardships: hunger, cutoff of utilities to the home, living in crowded or dilapidated housing, eviction, inadequate health care, and unmet needs for dental care. Material hardship is a measure whose purpose is to provide a means for policymakers to measure the goal of reducing specific forms of material hardship. Researchers have used material hardship measures to supplement traditional measures of poverty, such as to provide a nonmonetary perspective on those who are experiencing economic difficulties. The measures are used by researchers to create point-in-time estimates of hardship, describe trends in hardship, identify predictors of hardship, and develop hardship indicators to evaluate welfare reform.

Respondents are asked to make self-assessments of specific events in their lives. For example, they are asked if there was a time in the previous year when they needed food but could not afford to buy it or could not get out of the home to get food. Generally asked in a yes/no format, these indicators are reported individually but are then summed into a composite deprivation index. In some instances, respondents are asked to report the hardship on the basis of a scale. For example, respondents might be asked to categorize the food eaten in their household as (1) having enough of the kinds of food they want, (2) enough but not always the kinds they want, (3) sometimes not enough to eat, or (4) often not having enough to eat.
Other than periodically conducting the surveys, there is no method to update the material hardship measure. Until Census began collecting data from a nationally representative sample, data had been collected on single mothers in Chicago, Illinois, and on selected populations in other cities.

Median income is the amount that divides an income distribution into two equal groups, half having incomes above the median and half having incomes below the median. The concept of using the midpoint of the income distribution as an adequacy measure is that people are social beings and that full participation within society requires that they “fit in” with others. Individuals are not able to participate fully in society if their resources are significantly below the resources of other members of society, even if they are able to eat and physically survive. Median family income is a relative measure whose purpose is to estimate the income of the family at the middle of an income distribution. Researchers, analysts, and policymakers use median family income to follow historical trends and annual changes in income. A relative measure, such as median family income, is used to provide a perspective of an adequacy measure that keeps up to date with overall economic changes in the society.

Current Population Survey (CPS) data are used to calculate median family income. The measure is updated annually through data collection. The median is based on money income before taxes and does not include the value of noncash benefits, such as food stamps, Medicare, Medicaid, public or subsidized housing, and employment-based fringe benefits. The Census Bureau has annually published median family income since 1947. Median family income data are published by various family characteristics. The data are also presented by four regions of residence and by type of residence—inside or outside metropolitan areas.
The metropolitan areas are further broken down by over or under 1 million in population and by inside or outside central cities.

One-half of median family income (see the previous method for a description of median family income) is a relative poverty standard. One-half median family income is a relative measure that researchers use to demonstrate the absolute nature of the official poverty thresholds. One-half of median income for four-person families is also used in comparative analyses of poverty across nations. Researchers use one-half of the value of median family income as the measure. No standard method is used to establish the measure of a minimal level of adequacy with median family income. The most commonly proposed measure used for poverty determination is 50 percent of the median. The standard could be implemented in several ways, for example, one-half of the median for each family size. However, the distribution of median income by family size is bell shaped, with the peak at the four-person family.

Per capita personal income is the amount of personal income from the U.S. national income and product accounts (NIPA) that would be available to each individual if all income received by persons was distributed equally among all people in the nation. Per capita personal income is a relative measure whose purpose is to present a measure of a nation’s personal income on a per person basis. Government and private decision makers, researchers, and the public at large who need timely, comprehensive, and reliable estimates use per capita personal income as a measure of the value of and changes in average income at the national and regional level.
Because per capita personal income is conceptually and statistically consistent with the official measure of output (Gross Domestic Product), productivity, and other key economic indicators, national estimates of per capita personal income are key inputs to the formulation and monitoring of economic activity by the Federal Reserve Board and to the preparation of projections of federal receipts by the Congressional Budget Office. Regional level estimates, which are consistent with the national estimates, also are used by state governments for similar purposes and are used in the allocation of federal funds for key programs. Per capita personal income data are used as a measure of the economy’s capacity to pay. For example, the Medicaid funding formula uses state per capita personal income to provide higher matching percentages for states that have more limited resources to finance program benefits and more low-income people to serve.

Personal income is calculated as the sum of incomes received by persons from production and from transfer payments from government and business. “Persons” consists of individuals, nonprofit institutions that primarily serve individuals, private noninsured welfare funds, and private trust funds. Wage and salary disbursements, other labor income, proprietors’ income, rental income, dividend income, interest income, and transfer payments to persons, less personal contributions for social insurance, are summed to calculate personal income. In most cases, only market transactions are used. In a few cases, nonmarket transactions are used in personal income. These transactions include home ownership, financial services furnished without direct payment, and employer contributions for health and life insurance. The summation of the personal income components is then divided by the nation’s population to provide per capita personal income. Population is the total population of the United States, including military personnel.
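The summation described above can be sketched directly. The component names follow the text; the dollar values are hypothetical placeholders, not actual NIPA figures.

```python
def per_capita_personal_income(components: dict, social_insurance_contributions: float,
                               population: int) -> float:
    """Sum the NIPA personal income components, net of personal contributions
    for social insurance, and divide by total population."""
    personal_income = sum(components.values()) - social_insurance_contributions
    return personal_income / population

# Hypothetical component totals, in billions of dollars, for illustration only.
components = {
    "wage_and_salary_disbursements": 5000.0,
    "other_labor_income": 600.0,
    "proprietors_income": 700.0,
    "rental_income": 150.0,
    "dividend_income": 400.0,
    "interest_income": 900.0,
    "transfer_payments_to_persons": 1000.0,
}
# 8,750 - 350 = 8,400 billion, divided among 280 million people: about $30,000 each.
print(per_capita_personal_income(components, 350.0, 280_000_000) * 1e9)
```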
Each component of personal income is prepared independently using the most up-to-date and reliable source data. The Commerce Department’s Bureau of Economic Analysis prepares the estimates of personal income and calculates per capita personal income. Per capita personal income estimates are released monthly at the national level, quarterly at the state level, and annually at the county and metropolitan area levels. Per capita personal income is published at both the national and regional—state, county, and metropolitan area—levels. The base of per capita personal income, personal income, is updated on a regularly scheduled basis, with the schedule of updates timed to incorporate newly available and revised source data. Comprehensive revisions are carried out at about 5-year intervals. Population estimates are revised to reflect the results of the latest decennial census of population.

The definition of personal income, which is based on the NIPA definition, is not what one usually equates to family or household income. For example, it includes income of “persons” as defined for the NIPAs, which includes income of individuals as well as income of nonprofit institutions serving individuals and the investment income of pension plans. It excludes realized capital gains or losses and incomes that reflect transfers from other individuals, such as alimony or gifts. Although, in general, incomes are recorded when received, benefit payments from pension plans are not included when the benefits are actually paid. Instead, employer contributions to these plans are recorded as income to employees when the contributions are made, and the investment income of the plans is recorded when earned. Also, although Social Security benefit payments are included in personal income, total personal income is reduced by personal contributions to Social Security.

If an individual is dependent upon others for cash assistance, then the individual has inadequate income.
Since data about sources of income provided by others are difficult to obtain, statistical indicators of such dependency often resort to administrative data from public assistance programs. As used in this appendix, the receipt of public assistance is a measure to denote individuals who meet program eligibility criteria and have resources below a level that is specified by a state (or federal government) for its public assistance program. The dependency on others appears to be the basis on which President Roosevelt’s Committee on Economic Security made its recommendations in 1935. Supporting materials prepared for the committee indicate that it used a “danger line” amount that was used in some of the states for their old-age public assistance programs. The danger line was an amount ($300 per year) that placed older persons in a dependent class. As an example of this adequacy measurement method we use the Supplemental Security Income (SSI) program and its predecessor, the old-age assistance program. The federal SSI program was created to provide a positive assurance that the nation’s aged, blind, and disabled people would no longer have to subsist on below poverty-level incomes. SSI was conceived as a guaranteed minimum income for the aged, blind, and disabled. It was to supplement the Social Security program and to provide for those who were not covered or minimally covered under Social Security or who had earned only a minimal entitlement under the program. In 1972, SSI replaced the federal-state old-age assistance programs in which state benefit amounts were matched by the federal government up to a specified monthly amount. Under those programs the states were able to set benefit amounts, and the basis for those amounts was unclear. The purpose of a measure that examines the receipt of public assistance is to determine if the person is dependent upon others for his/her economic well-being. 
In staff reports prepared for the 1934 Committee on Economic Security, the dependency on others is used as a measure of inadequate income. For example, Edwin Witte, executive director for the committee, estimated that 2.7 million of the 6.5 million persons 65 and older were supported by others, including those who obtained public assistance. The National Resources Planning Board in 1942 used the receipt of public assistance to determine whether old-age and survivors benefits that were payable in 1940 were adequate for the needs of the recipients. The board said that a large volume of supplementation of social insurance benefits by other forms of aid would lead it to conclude that insurance payments were not adequate for a considerable proportion of qualified workers. The measure is simply the number of persons who receive public assistance. SSA administrative data are used to determine the number of persons who receive federally administered SSI benefits. The number of SSI recipients is continually updated with administrative data. SSA publishes the data quarterly and annually. The data are published for the United States and by state. By the nature of SSI’s benefit structure and eligibility criteria, administrative data can be used to identify the type of family unit, or lack thereof, in which the recipients live. For example, there are different benefit levels for couples, individuals living alone, recipients living in someone else’s household, or individuals in a Medicaid facility. Public opinion polls have been used to solicit subjective estimates from individuals on the amount of income that one needs to live. The concept underlying a public opinion poll to ascertain a subjective measure of adequacy is that individuals are able to tell a pollster what the minimum amount of income (or consumption) is that people need to maintain a minimally adequate level of living. 
Subjective measures of adequacy are grounded in the everyday and necessarily subjective perceptions of typical individuals as to the material requirements associated with differing levels of economic well-being. The direct question approach is based on the assumption that people are the experts on the needs of their families and/or those living in their communities. The only relatively consistent series of money amounts corresponding to a living-standard threshold based on judgment of representative samples of the public is one developed by the Gallup polling organization. The subjective measure presented here is the “get-along” measure that was collected by the Gallup Organization. The purpose of a subjective measure of well-being that has been obtained through a public opinion poll is to track the size of groups enjoying different standards of living. To do so, the societal views about the income levels required to support alternative living levels are compared with average levels of family economic resources. The primary use of the subjective measure has been to demonstrate the absolute nature of the official poverty thresholds. For example, the Committee on National Statistics of the National Academy of Sciences study panel and researchers compared trends in the official poverty threshold, one-half of family median income, and the get-along amount to document that the official poverty measure is no longer consistent with the society’s definition of measures of need. Subjective measures have also been used to produce subjective minimum income thresholds. The responses to the following question are used as the subjective measure: “What is the smallest amount of money a family of four (husband, wife, and two children) needs each week to get along in this community?” The response, when converted into an annual amount, is generally referred to as the “get along” amount. 
The Gallup Organization queried samples of adults about the get-along amount 38 times from 1946 through 1992. There was no regular publication of the data. Although the get-along question was asked in the context of the respondent’s community, no presentation has been made of geographic differences among the values reported. Other than periodically making an inquiry through a poll or survey, there is no method to update the public opinion measure. As part of a study of subjective assessments of economic well-being, researchers at the Bureau of Labor Statistics found that respondents have definite emotional reactions to their financial situations and are willing and able to discuss them. They also found that the terms used in subjective questions were ambiguous. In addition, if the respondent was the designated bill payer, the person’s responses were found to differ from those in the family who did not pay the bills. One common measure of retirement income adequacy is the replacement rate, which represents the income in retirement for a single worker or household in relation to a measure of pre-retirement earnings, such as earnings in the year before retirement. The purpose of the earnings replacement rate is to compare the level of retirement income with the level of pre-retirement income to help illustrate the extent to which pre-retirement standards of living can be sustained in retirement for particular individuals or households. The replacement rate is a relative measure in that it is relative to an individual’s or household’s own income, not to some absolute standard of adequacy. The earnings replacement rate has been used both with respect to Social Security and to employer-sponsored pensions. As noted in this report, the Social Security benefit formula is defined in a way that focuses on replacing earnings. When calculating replacement rates, SSA typically uses the ratio of initial Social Security benefits to pre-retirement covered earnings. 
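The ratio described above is straightforward to compute; a minimal sketch follows, with hypothetical dollar amounts (the worker and the amounts are illustrative, not drawn from SSA data).

```python
def replacement_rate(retirement_income: float, pre_retirement_earnings: float) -> float:
    """Ratio of income in retirement to a measure of pre-retirement
    earnings (e.g., earnings in the year before retirement). The two
    amounts should be measured consistently, e.g., both before-tax or
    both after-tax."""
    return retirement_income / pre_retirement_earnings

# Hypothetical worker: $40,000 in final-year earnings and a $16,000
# initial annual Social Security benefit.
rate = replacement_rate(16_000, 40_000)
print(f"{rate:.0%}")  # 40%
```

The 40 percent result here is constructed to match the order of magnitude the report cites for an average earner; a different measure of pre-retirement earnings (career average, indexed earnings, etc.) would yield a different rate.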
A number of researchers have used replacement rates in analyzing Social Security benefits for many years. Also, an SSA actuarial note observes that “policymakers are interested in replacement ratios: (1) as a means of communicating to prospective beneficiaries approximately how much they can expect to receive from Social Security, relative to their earnings; and (2) as a means of deciding if and how the Social Security program should be changed to meet the needs and desires of the public…” Replacement rates have also been used with respect to employer-sponsored pensions and retirement income more broadly, using total income amounts in the ratio. For example, available data suggest that typical pension replacement rates for a 30-year career worker have been in the 20- to 40-percent range across the earnings distribution and that lower earners have received slightly higher replacement rates than higher earners. More generally, many benefit professionals currently consider a 70 to 80 percent replacement rate as adequate to preserve the pre-retirement living standard. In contrast, workers who retired in 2001 at age 65 with a history of average earnings had a Social Security replacement rate of roughly 40 percent. Construction of replacement rates raises a variety of methodological issues, most notably, how retirement income is measured, how pre-retirement income is measured, how the two are compared, and for whom. How these issues are addressed depends on the purpose at hand. For example, in measuring retirement income, some researchers feel that income in the first year of retirement should be used, rather than trying to reflect changes in retirement income over time. In measuring pre-retirement income, some researchers use income in the year immediately before retirement. In comparing the two, the two measures should be consistent with one another, for example, with respect to before- or after-tax status. 
For whom the comparison is made might include specific individuals or households for their own retirement planning purposes, illustrative workers such as the steady-earners used in figures 4 and 7 of this report, or some sample of individuals or households in the population. If the purpose of the analysis is to isolate the effects of certain program changes, then the use of illustrative steady earners, all assumed to retire at a given age, might be appropriate. In contrast, if the purpose is to describe the experience of a population, then using a sample might be appropriate. By its nature, the replacement rate raises few issues of updating or geographic variation: it is a ratio relative to the earnings of the individuals or households examined, which themselves change across cohorts. While replacement rates can be useful for some purposes, such as illustrating the effects of program changes over time, the meaning of a specific value of a replacement rate is not clear. For example, a very low earner could have a high replacement rate and still have very low income, while a high earner could have a low replacement rate and live quite comfortably. Thus, desired or target replacement rates can vary significantly by income level and other factors. Also, the standard that pension professionals consider an adequate replacement rate has changed over the years. While a 50 percent replacement rate might have been considered adequate in the 1930s, when Social Security was instituted, many benefit specialists and researchers would apply a higher standard today. Moreover, the actual experience of a given household could easily involve phased-in retirement or situations where one spouse retires while the other continues to work. Such irregularities present problems in interpreting replacement rates for actual households. 
We examined the characteristics of the 11 measures, described in appendix I, that might be used to examine income adequacy. Through this examination, we determined that each had limitations that precluded using any single measure by itself for our analyses. Given these limitations, we selected four measures that would, as a group, be more appropriate for our analyses. These are the current poverty thresholds, median family income, public assistance, and earnings replacement rates. Public assistance and earnings replacement rates reflect the concern that the framers of the Social Security Act had about dependency on others and a means to support people who no longer worked. The current poverty thresholds and median family income provide, respectively, a lower and an upper bound on the congressional expectation that Social Security provide more than a minimal subsistence level, that is, a level above the one estimated by the current poverty thresholds. We decided not to use three measures—family budgets, material hardship, and per capita personal income—because they were outdated or because they did not allow us to make the comparisons our analyses required. We elected not to use the family budgets measure because the database on which it was constructed was 40 years old and because it was no longer officially published. We elected not to use the material hardship measure because it produced a nonmonetary value that could not be compared to Social Security benefit amounts or income dollar amounts. We chose not to use per capita personal income because by definition it includes income other than that held by people, specifically, money income held by nonprofit institutions and pension plans. In examining the four measures we used, we determined that each had limitations that precluded using any single measure by itself. Below, we document the recognized limitations of each for use in our analyses. 
Several limitations have been identified regarding the use of current poverty thresholds for estimating the number of people who live in poverty each year. Some of these limitations were identified as a result of two federally sponsored studies in the 1970s and 1990s. Although these studies did not assess the thresholds as an adequacy measure, the limitations they identified shed light on the thresholds’ ability to identify those who do not have the resources to meet subsistence or minimal needs. We also include concerns expressed by the developer of the current poverty thresholds. A 1976 Department of Health, Education, and Welfare (HEW) mandated study of poverty measures noted that several limitations stemmed from the fact that the current thresholds were based on one needs standard—food—and its costs in relation to other nonfood expenditures. The HEW study stated that other than food there were no other commonly accepted standards of need. In addition, it noted that the amount of money a family spends on food was only an approximation of a family’s food needs. The report also stated that the multiplier that was applied to the food costs was a rough measure of nonfood requirements. According to two federally sponsored studies, some of the limitations of the current poverty thresholds relate directly to their inability to reflect changes in living standards. The poverty thresholds are an absolute measure in which the mix of goods and services the thresholds represent has not been changed for nearly 40 years and, therefore, are not consistent with prevailing American standards of living. Although the current poverty thresholds are updated by price changes as reflected in the Consumer Price Index (CPI), as indicated in these two studies, the items that are updated reflect a mid-20th century mix in terms of quality and quantity of goods and services. 
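The food-standard-and-multiplier construction described above amounts to scaling the cost of a food plan by the inverse of the share of income a family devotes to food. A minimal sketch follows; the dollar amount and food share are hypothetical illustrations, not the actual USDA food plan costs or survey shares.

```python
# Sketch of the food-plan-and-multiplier method underlying the poverty
# thresholds. The inputs are hypothetical, not actual USDA figures.

def poverty_threshold(annual_food_plan_cost: float, food_share_of_income: float) -> float:
    """Threshold = food plan cost scaled by a multiplier (1 / food share).

    The multiplier is a rough proxy for nonfood requirements: if families
    spend one-third of income on food, the multiplier is 3.
    """
    multiplier = 1.0 / food_share_of_income
    return annual_food_plan_cost * multiplier

threshold = poverty_threshold(annual_food_plan_cost=1_000.0, food_share_of_income=1 / 3)
print(round(threshold))  # 3000
```

This sketch also makes the limitations in the text concrete: as living standards rise and the food share falls, the multiplier grows, so thresholds updated only by the CPI (which freezes the multiplier) drift below what the same method would produce with current spending data.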
The current poverty thresholds do not reflect how the proportion of income dedicated to food has changed with rising living standards, according to a 1995 study panel of the Committee on National Statistics of the National Academy of Sciences (NAS). A research study illustrates how living standards based on food rise over time—as the population becomes more prosperous, on average, it devotes a smaller proportion to food expenditures and larger proportions to nonfood expenditures. The study recalculated the poverty thresholds using USDA’s 1965 Household Survey to determine the portion of family income dedicated to food purchases and USDA’s 1975 Thrifty Food Plan to approximate the cost of food. The thresholds re-estimated for 1977 were about 40 percent higher for 1-person households and about 20 percent higher for 4-person families. A recent study estimated that the poverty thresholds for 4-person families would have been 68 percent higher in 1987 if they had been recalculated with methodology similar to that used to develop the current poverty thresholds. The 1976 HEW study and the 1995 NAS study panel noted that, although the current poverty thresholds are updated by changes in prices paid by consumers, they do not change with the standard of living. The 1995 NAS study panel said that the thresholds do not incorporate changes in total consumption that include spending on luxuries, as well as necessities, or declines in the standard of living. The 1976 HEW study noted, however, that the current poverty thresholds were updated by a relative means—changes in prices—as measured by the CPI. However, the developer of the current poverty thresholds voiced concern about updating the thresholds with the CPI. She noted the uncertainty about the appropriateness of the CPI as a measure of price changes for the poor. She doubted that one price index could capture how families at different income levels adjust their spending to accommodate price changes. 
For example, poor families may react to a 10 percent increase in the price of utilities by reducing expenses in other essential consumption areas; whereas wealthy families would have more options and could address the increase in a different manner. Another limitation of the current poverty thresholds identified by the 1995 NAS study panel is that the thresholds do not account for the fact that working families pay taxes on their earnings and families on public assistance do not pay taxes on the cash assistance they receive. According to the NAS panel, this occurs because the determination of whether or not a family is poor is based on a comparison of before-tax income with thresholds based on after-tax income. This comparison ignores the fact that payment of taxes lowers disposable income. As a result, the comparison of before-tax income with the current poverty thresholds can make it appear as if low-income working families are better off than poor families receiving public assistance. The NAS study panel indicated that this limitation might affect the manner in which policymakers view the poverty population. For example, because of the comparison of before-tax income with an after-tax poverty measure, the adverse effects of tax policy changes for low-income working families are not captured in the resulting poverty statistics. The NAS panel identified another limitation of the current poverty thresholds in that the value of noncash benefits, such as housing subsidies, is not included as income in the determination of poverty status. According to the panel, the extent of poverty among the recipients of such benefits is overstated and the efficacy of government income-support measures is understated because the current poverty thresholds do not take into account the receipt of noncash public benefits. According to the developer of the current poverty thresholds, the thresholds are inappropriately applied to all types of families. 
The developer stated that a major limitation of the thresholds was the failure to differentiate between a social minimum appropriate for a worker and his family and a more stringent standard appropriate for a family dependent on public assistance. She indicated that the same standard was inappropriately applied to both types of families. Furthermore, the developer and the NAS study panel said that the current poverty thresholds’ inability to address needs that are specific to families with different living situations was a limitation. The NAS panel stated that the thresholds do not accurately portray the relative poverty status of working families with childcare expenses and those without such expenses. The developer also voiced concern about the tradeoffs that families make and cited the thresholds’ failure to address, for example, how higher expenditures on health care affect other areas of family living. The NAS panel also said that the thresholds do not distinguish among the health care needs of different kinds of families or reflect the role of insurance coverage in reducing families’ medical care expenditures. According to the studies, the current poverty thresholds have limitations in the manner in which they differentiate for family size and do not account for geographic differences in the cost of living. The NAS panel questioned the equivalence scale adjustments for family size—especially thresholds for single persons and those for aged individuals and couples—because the composition of families and households has changed since the 1960s. Both the 1976 HEW study and the 1995 NAS report state that the thresholds are limited in that they do not adjust for interarea price differences and therefore do not incorporate geographic differences in the cost of living. Median family income has not been used in any official capacity. Therefore, only general observations have been documented about its limitations as an adequacy measure. 
Limitations are generally expressed in terms of using 50 percent of family median income as a measure of poverty status. According to one researcher in the field, one limitation concerns the public’s ability to understand the measure’s income base when it is accustomed to a measure based on basic needs. The researcher noted that an income-based measure was less closely linked to the basic concept of minimum adequacy than an absolute measure. In other words, the public would have difficulty grasping how it could be a measure of adequacy if it was not linked to one’s basic needs for food, clothing, and shelter. According to the NAS study panel, another limitation is that median family income changes directly with aggregate income, making its movement difficult for people to understand when the economy changes. One researcher said that a relative measure like median family income would fall in real terms during a recession and that this was less than ideal because the needs of the poor do not fall similarly. The 1995 NAS study panel also noted the behavior the measure would demonstrate during recessions and economic upturns and said it would be hard to explain and justify changes in the measure that are not simply a reflection of price changes. The researcher noted that opponents of a relative adequacy measure, such as median family income, say it presents too much of a moving target for policy assessment purposes and that it is unreasonable to judge the effectiveness of antipoverty efforts against such a measure. Limitations also revolve around how to implement median family income as an adequacy measure, according to the NAS panel. It noted the problems in selecting the median family income for a particular family size. The panel discussed several approaches that have been used to develop an adequacy measure and limitations of these approaches. 
For example, it noted that one approach is to apply an equivalence scale to the income amounts in order to develop a per capita equivalent income for the reference family. The panel noted that this approach was sensitive to the particular equivalence scale that was used. In this report, we used median family incomes by family size as published by the Census Bureau. For single individuals we used one-person household median income; for two persons we used two-person family median income. As noted in Ruggles, this approach also has its limitations in that median family income has a bell-shaped distribution peaking at the four-person family size. Another limitation the NAS panel identified concerned the definition and sources of income that are used to produce median family income. The NAS panel noted conceptual problems in using median income as an adequacy measure because it does not reflect disposable income in the way it handles taxes, childcare expenses, and other work-related expenses. The NAS panel also said that median family income does not include noncash benefits, such as food stamps, but said that is not much of a problem since families at the median do not generally receive such benefits. The receipt of public assistance has not been recently reviewed by a group of experts as an adequacy measure. Therefore, the limitations identified for this measure are those applicable to the federal-state old-age assistance program. The National Resources Planning Board said, in 1941, that using the receipt of public assistance as an indicator of whether Social Security beneficiaries had adequate income had several limitations. It stated that some of the states, in 1940, were providing a level of living considerably lower than that provided by Social Security. The board also reported that some states did not have funds to provide for all of their needy applicants and chose not to supplement those who received Social Security benefits. 
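The panel's point about sensitivity to the equivalence scale can be made concrete with a small sketch. The scale form and elasticity values below are common in the research literature but are illustrative assumptions, not scales endorsed by the panel.

```python
# Adjust a reference-family income to income per equivalent adult under
# two hypothetical equivalence scales, showing how sensitive the result
# is to the scale chosen. Inputs are illustrative, not Census data.

def equivalent_income(family_income: float, family_size: int, elasticity: float) -> float:
    """Income per equivalent adult: income / size**elasticity.

    elasticity = 1.0 is pure per capita (no economies of scale);
    elasticity = 0.5 is a square-root scale often used in research.
    """
    return family_income / family_size ** elasticity

income, size = 60_000.0, 4
per_capita = equivalent_income(income, size, elasticity=1.0)
square_root = equivalent_income(income, size, elasticity=0.5)

print(round(per_capita), round(square_root))  # 15000 30000
```

The same family income yields a per-equivalent-adult figure twice as large under the square-root scale as under pure per capita, which is the kind of sensitivity the panel flagged.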
We used administrative data to report the proportion of the elderly who received old-age assistance or SSI benefits. We note that some Social Security beneficiaries who meet all eligibility criteria for these programs may nonetheless not receive benefits. The chief limitation of replacement rates is that the meaning of a specific value of a replacement rate is not clear. A very low earner could have a high replacement rate and still have very low income, while a high earner could have a low replacement rate and live quite comfortably. Also, the standard that pension professionals consider an adequate replacement rate has changed over the years. Another important limitation arises in trying to define replacement rates for actual households. For example, the actual experience of a given household could easily involve phased-in retirement or situations where one spouse retires while the other continues to work. According to current projections of the Social Security trustees for the next 75 years, revenues will not be adequate to pay full benefits as defined under current law. Therefore, estimates of future Social Security benefits should reflect that actuarial deficit and account for the fact that some combination of benefit reductions and revenue increases will be necessary to restore long-term solvency. To illustrate a full range of possible outcomes, we developed benchmark policy scenarios that would achieve 75-year solvency either by only increasing payroll taxes or only reducing benefits. In developing these benchmarks, we identified criteria to use to guide their design and selection. We also identified key parameters that could be used to describe and calibrate the policies to achieve 75-year solvency. We asked SSA’s Office of the Actuary to score the policies and determine the precise parameter values that would achieve 75-year solvency in each case. 
Once we defined and fully specified our benchmark policies, we used them to estimate the range of potential future benefit levels using two representative sample microsimulation models as well as an SSA benefit calculator for illustrative workers. (See app. IV.) According to our analysis, appropriate benchmark policies should ideally be evaluated against the following criteria:

1. “Distributional neutrality”: the benchmark should reflect current law as closely as possible while still restoring solvency. In particular, it should try to reflect the goals and effects of current law with respect to redistribution of income. However, there are many possible ways to interpret what this means, such as (a) producing a distribution of benefit levels with a shape similar to the distribution under current law (as measured by coefficients of variation, skewness, kurtosis, etc.); (b) maintaining a proportional level of income transfers in dollars; (c) maintaining proportional replacement rates; and (d) maintaining proportional rates of return.

2. Demarcating upper and lower bounds within which the effects of alternative proposals would fall. For example, one benchmark would reflect restoring solvency solely by increasing payroll taxes and therefore maximizing benefit levels while another would solely reduce benefits and therefore minimize payroll tax rates.

3. Ability to model: the benchmark should lend itself to being modeled within the GEMINI and MINT models.

4. Plausibility: the benchmark should be politically within reason as an alternative; otherwise, the benchmark could be perceived as a strawman.

5. Transparency: the benchmark should be readily explainable to the reader.

We used only one tax-increase-only benchmark policy scenario because policies that only increase payroll tax rates have no effect on benefits. 
Our tax-increase-only benchmark would raise payroll taxes once and immediately (in the next calendar year) by the amount of the OASDI actuarial deficit as a percent of payroll. It results in the smallest ultimate tax rate of those we considered and spreads the tax burden most evenly across generations; this is the primary basis for our selection. The later that taxes are increased, the higher the ultimate tax rate needed to achieve solvency, and in turn the higher the tax burden on later taxpayers and the lower on earlier taxpayers. We consider this policy to be plausible because it would involve less than a 1 percentage point increase on employers and employees each. Still, any policy scenario that achieves 75-year solvency only by increasing revenues would have the same effect on the adequacy of future benefits in that promised benefits would not be reduced. Nevertheless, alternative approaches to increasing revenues could have very different effects on individual equity. We developed three benefit-reduction benchmarks for our analysis. For ease of modeling, all benefit-reduction benchmarks take the form of reductions in the PIA formula factors; they differ in the relative size of those reductions across the three factors, which are 90, 32, and 15 percent under current law. Each benchmark has three dimensions of specification: scope, phase-in period, and the factor changes themselves. For our analysis, we want the benefit reductions in our benchmarks to apply very generally to all types of benefits, including disability and survivors benefits as well as old-age benefits. Our objective is to find policies that achieve solvency while reflecting the distributional effects of the current program as closely as possible. Therefore, it would not be appropriate to reduce some benefits and not others. If disabled and survivor benefits were not reduced at all, reductions in other benefits would be deeper than shown in this analysis. 
We selected a phase-in period that begins with those reaching age 62 in 2005 and continues for 30 years. We chose this phase-in period to achieve a balance between two competing objectives: 1) minimizing the size of the ultimate benefit reduction and 2) minimizing the size of each year’s incremental reduction to avoid notches and unduly large incremental reductions. Since later birth cohorts are generally agreed to experience lower rates of return on their contributions already under current law, minimizing the size of the ultimate benefit reduction would minimize further reductions in later cohorts’ rates of return. The smaller each year’s reduction, the longer it will take for benefit reductions to achieve solvency and, in turn, the deeper the eventual reductions will have to be. However, the smallest possible ultimate reduction would be achieved by reducing benefits immediately for all new retirees by over 10 percent; this would create a huge notch, that is, marked inequities between beneficiaries close in age to each other. Our analysis shows that a 30-year phase-in should produce incremental annual reductions that would be of palatable size and avoid significant notches. Therefore, it would be preferable to longer phase-in periods, which would require deeper ultimate reductions. In addition, we feel it is appropriate to delay the first year of the benefit reductions for a few years because those within a few years of retirement would not have adequate time to adjust their retirement planning if the reductions applied immediately. The Maintain Tax Rates (MTR) benchmark in the 1994-96 Advisory Council Report also provided for a similar delay. When workers retire, become disabled, or die, Social Security uses their lifetime earnings records to determine each worker’s Primary Insurance Amount (PIA), on which the initial benefit and auxiliary benefits are based. 
The PIA is the result of two elements—the Average Indexed Monthly Earnings (AIME) and the benefit formula. The AIME is determined by taking the lifetime earnings record, indexing it, and averaging it. To determine the PIA, the AIME is then applied to a step-like formula, shown here for 2001:

PIA = 90% × AIME₁ + 32% × AIME₂ + 15% × AIME₃,

where AIME₁ is the portion of the AIME up to $561, AIME₂ is the portion above $561 and up to $3,381, and AIME₃ is the portion above $3,381. All three of our benefit-reduction benchmarks are variations of changes in PIA formula factors, and all are special cases of the following generalized form, where F represents the three PIA formula factors, which are 90, 32, and 15 percent under current law:

F(i,t) = F(i,0) × (1 − x(t) × weight_x) − y(t) × weight_y,

where t = the year of the factor, x = constant proportional benefit reduction, y = constant “subtractive” benefit reduction, and weight_x and weight_y determine the relative effects of x and y and sum to 1. Our three potential benchmarks can now be described as follows: Proportional Offset: weight_x = 1 and weight_y = 0. The value of x is calculated to achieve 75-year solvency, given the chosen phase-in period and scope of reductions. The formula specifies that the proportional reduction is always taken as a proportion of the base year factor value rather than the prior year’s value. This maintains a constant rate of benefit reduction from year to year. In contrast, taking the reduction as a proportion of the prior year’s factor value implies a deceleration of the benefit reduction over time, because the prior year’s factor gets smaller with each reduction. To achieve the same level of 75-year solvency, this would require a greater proportional reduction in earlier years because of the smaller reductions in later years. 
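The 2001 step formula can be sketched in a few lines of Python. The bend points ($561 and $3,381) and the 90/32/15 percent factors are the values given in the text; the function itself is an illustrative sketch, not SSA's ANYPIA implementation.

```python
# The 2001 PIA step formula from the text: each factor applies only to
# the portion of the AIME that falls in its segment.

BEND_POINTS_2001 = (561, 3381)   # dollar bend points from the text
FACTORS = (0.90, 0.32, 0.15)     # current-law formula factors

def pia(aime, bend_points=BEND_POINTS_2001, factors=FACTORS):
    """Monthly PIA for a given AIME under the step-like formula."""
    b1, b2 = bend_points
    f1, f2, f3 = factors
    segment1 = min(aime, b1)                    # portion up to $561
    segment2 = min(max(aime - b1, 0), b2 - b1)  # portion from $561 to $3,381
    segment3 = max(aime - b2, 0)                # portion above $3,381
    return f1 * segment1 + f2 * segment2 + f3 * segment3
```

For example, an AIME of $1,000 yields a PIA of 0.90 × 561 + 0.32 × 439, or about $645.38.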
The proportional offset hits lower earners especially hard because taking a constant x percent of each formula factor produces the largest percentage-point reduction in the highest factor, and the highest factors apply to the lowest earnings segments of the formula. For example, in a year when the cumulative size of the proportional reduction has reached 10 percent, the 90 percent factor would have been reduced by 9 percentage points, the 32 percent factor by 3.2 percentage points, and the 15 percent factor by 1.5 percentage points. As a result, earnings below the first bend point would be replaced at 9 percentage points less than current law, while earnings above the second bend point would be replaced at only 1.5 percentage points less than current law. Still, the proportional offset is easily described as a constant percentage reduction of current law benefits for everyone. In the example, beneficiaries of all earnings levels would have their benefits reduced by 10 percent. Progressive Offset: weight_x = 0 and weight_y = 1. The value of y is calculated to achieve 75-year solvency, given the chosen phase-in period and scope of reductions. This offset results in equal percentage point reductions in the formula factors, by definition, and subjects earnings across all segments of the PIA formula to the same reduction. Therefore, it avoids hitting lower earners especially hard as the proportional offset does. As it happens, this offset produces exactly the same effect as the offset we used in our 1990 analysis of a partial privatization proposal (Social Security: Analysis of a Proposal to Privatize Trust Fund Reserves, GAO/HRD-91-22, Dec. 12, 1990), in which the offset was applied to the AIME, not to the PIA. The contributions to a hypothetical account are proportional to earnings. Therefore, a benefit reduction based on such an account would also be proportional to earnings; that is,

Benefit reduction = y × AIME.

Therefore, the new PIA would be

PIA_new = 90% × AIME₁ + 32% × AIME₂ + 15% × AIME₃ − y × AIME
PIA_new = (90% − y) × AIME₁ + (32% − y) × AIME₂ + (15% − y) × AIME₃.

Thus, the reduction from a hypothetical account can be translated into a change in the PIA formula factors. Because this offset can be described as subtracting a constant amount from each PIA formula factor, it is reasonably transparent, especially in comparison to describing it as a hypothetical account offset. Limited Proportional Offset: Other analyses have addressed the concern about the effect of the proportional offset on low earners by modifying that offset to apply only to the 32 and 15 percent formula factors. The MTR policy in the 1994-1996 Advisory Council Report used this approach, which in turn was based on the Individual Account (IA) proposal in that report (Advisory Council on Social Security, Report of the 1994-1996 Advisory Council on Social Security, Vols. 1 and 2, Washington, D.C.: Jan. 1997). However, the MTR policy also reflected other changes in addition to PIA formula changes. Our recent report on disability and Social Security reform also used this “limited proportional” approach but used PIA formula changes alone to achieve solvency (Social Security Reform: Potential Effects on SSA’s Disability Programs and Beneficiaries, GAO-01-35, Jan. 24, 2001). Using the generalized form above, this can be expressed as weight_x = 1 and weight_y = 0, with

F(i,t) = F(i,0) × (1 − x(t) × weight_x) − y(t) × weight_y for i = {32, 15},

where x differs for the first 10 and second 20 years of the phase-in period and is 1 percentage point higher in the second part than in the first. Table 3 summarizes the features of our four benchmarks. For our analysis of future Social Security benefits, we used two alternative policy microsimulation models and illustrative worker analysis. We used the MINT (Modeling Income in the Near Term) model, developed and used by the Social Security Administration’s Office of Policy, and the GEMINI model, developed by the Policy Simulation Group. 
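The generalized factor-reduction form and the three offsets described above can be sketched as follows. The x and y values used below are illustrative placeholders, not the solvency-calibrated values from the report, and the small `pia` helper simply restates the 2001 step formula from the text.

```python
# Sketch of the generalized factor-reduction form described above:
#   F(i,t) = F(i,0) * (1 - x_t * weight_x) - y_t * weight_y
# The x_t and y_t values below are illustrative placeholders, not the
# solvency-calibrated values from the report.

BASE_FACTORS = (0.90, 0.32, 0.15)  # current-law PIA formula factors
BENDS = (561, 3381)                # 2001 bend points from the text

def offset_factors(x_t=0.0, y_t=0.0, weight_x=0.0, weight_y=0.0,
                   limited=False):
    """Reduced formula factors for one year of the phase-in.  With
    limited=True, the reduction skips the 90 percent factor (the
    limited proportional offset)."""
    reduced = []
    for i, f0 in enumerate(BASE_FACTORS):
        if limited and i == 0:
            reduced.append(f0)  # 90 percent factor left untouched
        else:
            reduced.append(f0 * (1 - x_t * weight_x) - y_t * weight_y)
    return tuple(reduced)

def pia(aime, factors=BASE_FACTORS, bends=BENDS):
    """Step formula: each factor applies to its segment of the AIME."""
    b1, b2 = bends
    segments = (min(aime, b1),
                min(max(aime - b1, 0), b2 - b1),
                max(aime - b2, 0))
    return sum(f * s for f, s in zip(factors, segments))

# Proportional offset (weight_x = 1), cumulative 10 percent reduction:
proportional = offset_factors(x_t=0.10, weight_x=1)  # 9.0-, 3.2-, 1.5-point cuts
# Progressive offset (weight_y = 1), subtracting 2 percentage points:
progressive = offset_factors(y_t=0.02, weight_y=1)   # each factor cut 2 points
```

Because the three AIME segments always sum to the full AIME, the progressive offset's factor form produces exactly the same benefit as subtracting y × AIME from the PIA, which is the hypothetical-account equivalence derived above.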
For both models, the developers produced multiple output data sets based on the PIA formula changes specified by the policy benchmarks. See appendix III for more information on the policy benchmark results. MINT does not model behavioral responses to policy changes that could affect other income sources. For example, assuming no change in consumption during working years, our tax-increase benchmark may overestimate total retirement income because no provision is made to decrease income from saved assets, which might diminish as higher payroll taxes reduce disposable income before retirement. The MINT model has not been well validated against other micro- or macroeconomic projection models. However, SSA analysts note that there are not many models against which to validate MINT. Moreover, they note that a panel of demographers, economists, and outside experts oversaw the development of MINT. Additionally, the 1990 to 1993 SIPP data are the most recent available SIPP data for most income sources other than earnings. In short, more recent nonearnings income data would be ideal. Nevertheless, the intention of this report is to present comparisons of distributions between policy benchmarks. Thus, income-related MINT point estimates should not be taken as literally as the differences between the policy benchmarks. Methodologically, we chose MINT for this report for its capability to project total income and therefore permit analysis of the adequacy of total income; its ability to prospectively assess and model various Social Security programmatic alternatives; its ability to examine a large portion (those age 60 to 89) of the Social Security population at a point in time; its ability to examine various subgroups, notably by race and ethnicity; and its use as a policy tool already employed by SSA. GEMINI is a policy microsimulation model developed by the Policy Simulation Group (PSG). 
For our report, PSG produced simulated samples, sometimes called synthetic samples, of lifetime histories, including earnings, marriage, disability, death, and Social Security benefits, for the cohorts born in 1935, 1955, 1970, and 1985. Key descriptive statistics for each of the four birth cohorts are identified through a variety of sources. These statistics describe life expectancy, educational attainment, employment patterns, and marital status at age 60. Where possible, these targets are set to be consistent with the 2001 Trustees’ Report or generally available methodologies from SSA’s Office of the Chief Actuary. After the calibration targets are determined, complete life histories for each birth cohort are produced that match the targets. These life histories are produced by the Pension Policy Simulation Model (PENSIM), a complementary PSG model integrated with GEMINI. Once the cohort samples have been generated, each sample is input into GEMINI, a microsimulation model that has the same Social Security benefit calculation capabilities as the SSASIM microsimulation model, which past GAO reports have used to analyze Social Security reforms. Each sample is run twice through each of the four benchmark policies and produces output files that contain detailed information on each member of the sample, including Social Security benefits for sample individuals and their spouses. Because GEMINI cannot yet stochastically determine the age at which a member of the sample applies for benefits, one output file assumes that all workers retire at age 62 and the other assumes that they retire at age 65. Table 4 shows results for GEMINI compared to the 1998 Annual Statistical Supplement to the Social Security Bulletin. Average benefits are high by only 0.9 percent for men and high by only 1.6 percent for women. 
However, this comparison may suffer from a selectivity problem caused by the fact that, in the actual data, not everyone eligible to apply for retired worker benefits does so at age 62. If the propensity to retire early at age 62 varies by lifetime earnings level, then the fact that only about 60 percent actually apply at 62 will complicate the comparison with statistics from the GEMINI simulation, which assumes everyone applies at age 62. After adjusting for the selectivity problem, we find that benefits are low by 0.4 percent for men and by 4.6 percent for women. Methodologically, we chose GEMINI for this report for its ability to examine the effect of Social Security programmatic changes on a cohort population and its ability to project cohorts and examine policy effects well out into the 75-year actuarial period (to the year 2050). MINT takes real people and projects their behavior into the future, while GEMINI develops a synthetic sample and validates it against recent data. Although these models were developed separately and take somewhat different modeling approaches, their results compare somewhat favorably. Table 5 compares median annual Social Security benefit income for the 1955 cohort by marital status for both models. For married and divorced individuals, the results compare very favorably, as the MINT results fall within the same range as the GEMINI results. The results for never married and widowed individuals do not align as closely, though they are within 8.9 percent and 9.4 percent, respectively, of the lower bound of GEMINI benefits. However, the intent of the report is not to focus on the actual values produced by the models but on how values change across benchmark scenarios. For analysis of future replacement rates, we use four illustrative workers. These illustrative workers are constructed according to the methodology employed for steady workers by SSA’s Office of the Chief Actuary. 
Additionally, our analysis of future steady workers assumes that the average wage increases according to the Alternative II assumptions of the 2001 Trustees Report. As defined by SSA’s Office of the Chief Actuary, the steady earnings pattern assumes that the worker is a steady full-time employee with no interruption in employment. The steady worker begins working in covered employment at age 22, and the worker’s earnings increase each year at the same rate as Social Security’s Average Wage Index. For our analysis, workers are continuously employed between the ages of 22 and 62 (i.e., they do not experience a period of disability or die). For the steady earnings pattern, the following four levels of earnings are used: low (annual earnings equal to 45 percent of the average wage), average (annual earnings equal to the average wage), high (annual earnings equal to 160 percent of the average wage), and maximum (annual earnings equal to the OASDI Contribution and Benefit Base). To calculate the worker’s monthly Social Security benefit, we used SSA’s Office of the Chief Actuary’s ANYPIA program. Finally, to calculate replacement rates, we annualized the monthly benefit and divided the result by the worker’s age-64 earnings. In actuality, the year-to-year earnings of most workers do not follow steady earnings patterns. However, illustrative steady workers offer the advantage of showing programmatic variation by using a consistent worker profile. More realistic lifetime earnings profiles would matter more under a policy in which the timing of payroll contributions is important, such as one that contributes a portion of payroll taxes to individual accounts. The most important adequacy metric for a lifetime earner is the worker’s PIA, which can be arrived at from any number of different earnings patterns. Comparing actual workers’ PIAs with the illustrative steady worker types shows that women and men are “best represented” by different worker types. 
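The steady-worker profiles and replacement-rate calculation described above can be sketched as follows. The base average wage and wage-growth rate are hypothetical placeholders (the report uses ANYPIA and the 2001 Trustees Report Alternative II wage series), and the maximum-earner profile is omitted because the Contribution and Benefit Base varies by year.

```python
# Sketch of the steady-worker profiles and replacement-rate calculation
# described above.  The base average wage and wage-growth rate are
# hypothetical placeholders; the report uses SSA's ANYPIA program and
# the 2001 Trustees Report Alternative II average-wage assumptions.

EARNINGS_LEVELS = {      # annual earnings as a fraction of the average wage
    "low": 0.45,
    "average": 1.00,
    "high": 1.60,
}

def steady_earnings(level, base_average_wage, wage_growth):
    """Earnings for ages 22 through 61: continuous covered employment,
    with earnings growing each year at the average-wage growth rate."""
    frac = EARNINGS_LEVELS[level]
    return [frac * base_average_wage * (1 + wage_growth) ** (age - 22)
            for age in range(22, 62)]

def replacement_rate(monthly_benefit, age_64_earnings):
    """Annualized benefit divided by the worker's age-64 earnings."""
    return 12 * monthly_benefit / age_64_earnings
```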
Table 6 shows that in 1999 the low earner “best represents” female workers, as 71.7 percent fall closest to that category, and the high earner best represents male workers, as 41.1 percent fall closest to that category. Percentages indicated above reflect the status of workers retiring in 1999. These percentages would likely be different for workers retiring in earlier or later years. For instance, the increasing employment rates for women over the last several decades are expected to result in relatively greater increases in career-average earnings for women than for men in the future. Thus, the difference in the distributions of male and female retired workers by benefit levels is expected to diminish in the future. In addition to those named above, Ken Stockbridge, Kimberly Granger, Charles Ford, Brendan Cushing-Daniels, Nila Garces-Osorio, Kim Reniero, Daniel Schwimer, and Kathleen Scholl made key contributions to this report.
Before Social Security, being old often meant being poor. Today, dependency on public assistance has dropped to a fraction of its Depression-era levels, and poverty rates among the elderly are now lower than for the population as a whole. At the same time, Social Security has become the single largest source of retirement income for more than 90 percent of persons aged 65 and older. Automatic adjustments were introduced in 1972 to reflect increases in the cost of living. Other program changes gradually increased Social Security coverage to larger portions of the workforce and extended eligibility to family members and disabled workers. Other benefit programs, such as Supplemental Security Income (SSI), Medicare, and Medicaid, have also been added over the years. With regard to measuring income adequacy, various measures help examine different aspects of this concept, but no single measure can provide a complete picture. For various subgroups of beneficiaries that have lower lifetime earnings, poverty rates have also declined. Although the Social Security benefit formula favors lower lifetime earners, their lower earnings and work histories can leave them with incomes below the poverty level when they retire or become disabled. The outlook for future Social Security benefit levels and income adequacy depends on how the program's long-term financing imbalance is addressed, as well as on the measures used. GAO concludes that reductions in promised benefits and increases in program revenues will be needed to restore the program's long-term solvency and sustainability. Possible benefit changes might include adjustments to the benefit formula or reductions in cost-of-living increases. Possible revenue sources might include higher payroll taxes or transfers from the Treasury's general fund.
The sole purpose of Newark AFB is to house and support the large industrial complex comprising the AGMC. Supporting two Air Force missions—depot maintenance and metrology and calibration—AGMC provides depot level maintenance of inertial guidance and navigation systems and components and displacement gyroscopes for the Minuteman and Peacekeeper intercontinental ballistic missiles and most of the Air Force’s aircraft. In fiscal year 1994, AGMC’s depot maintenance workload consisted of about 900,000 hours; almost 10,500 items were produced to support repair requirements for 66 Air Force, Navy, and Army systems and components. This work was accomplished by about 500 maintenance and engineering personnel and 325 management and support personnel. AGMC is different from the Air Force air logistics centers (ALC) in that it does not have weapon system and item management responsibility collocated at the same base. For Air Force systems repaired at AGMC, weapon system and item management functions are performed primarily at the Ogden or Oklahoma City ALCs. However, some of the engineering support normally provided by the system program management offices at ALCs is performed at AGMC for systems it repairs. In its second Air Force mission, metrology and calibration, AGMC performs overall technical direction and management of the Air Force Metrology and Calibration Program and operates the Air Force Measurement Standards Laboratory. About 200 personnel are involved in the metrology and calibration mission—109 in generating technical orders, certification of calibration equipment, and management operations and 89 in the standards laboratory. As the single manager for the Air Force Metrology and Calibration Program, AGMC provides all metrology engineering services for the Air Force. 
The standards laboratory complex, consisting of 47 laboratories, serves as the primary laboratory for calibrating and certifying measurement standards used worldwide in all Air Force precision measurement equipment laboratories. In fiscal year 1994, the standards laboratory produced about 11,500 calibrated items. The Department of Defense (DOD) considered AGMC’s work conducive to conversion to the private sector and recommended closing Newark AFB/AGMC through privatization and/or transferring the workload to other depots. DOD justified the closure by (1) identifying at least 8.7 million hours of excess Air Force depot maintenance capacity, with the closure of AGMC expected to reduce this excess by 1.7 million hours, and (2) applying the eight base closure criteria to Air Force bases having depots and ranking Newark AFB low relative to the others (see app. II for base closure criteria). DOD assigned a low military value to Newark AFB primarily because it was a single mission base with no airfield. DOD estimated that implementing its recommendation on Newark AFB/AGMC would cost $31.3 million, result in an annual savings of $3.8 million, and have an 8-year payback period for closure and relocation expenses. In our report on the base closure and realignment recommendations and selection process, we estimated that the Newark AFB/AGMC closure costs would be $38.29 million, with a 13-year payback period. BRAC determined that the AGMC workload could either be contracted out or privatized-in-place at the same location, although the BRAC noted that industry interest in privatization-in-place was limited. The BRAC recommended closing Newark AFB/AGMC—noting that some workload will move to other depot maintenance activities, including the private sector. The President agreed with the overall BRAC recommendations dealing with maintenance depots, including the closure of AGMC. The Congress did not challenge the overall BRAC recommendations. 
The Air Force has begun the implementation of the closure and privatization of Newark AFB/ AGMC. Implementation of the Newark AFB/AGMC closure through privatization is still in the early phases, with many details yet to be worked out. In general, the Air Force has developed a three-pronged approach to implementing BRAC’s decision. First, four systems, representing about 3 percent of AGMC’s existing depot maintenance workload, will be transferred to other Air Force depots. Second, ownership of the Newark AFB/AGMC property and facilities will be transferred to a local reuse commission. The commission is to lease space to one prime guidance system repair contractor that will provide depot maintenance work, one prime metrology contractor that will perform calibrations and author calibration manuals, and the remaining organic metrology program management contingent. While privatization-in-place is the goal, based on a strategy option announced in the Commerce Business Daily, contractors may elect to move workload to other facilities. Hypothetically, this option could result in all workload moving to other contractor locations—should the winning contractor(s) demonstrate that moving workload to other locations would provide the best value to the government. Third, the metrology and calibration mission will be continued at AGMC, with some functions privatized and another continued as an Air Force activity reporting to AFMC Headquarters or one of the ALCs. 
The Air Force originally planned to privatize all activities related to the metrology and calibration mission, but it later determined that the Air Force Metrology and Calibration Program’s materiel group manager function could not be privatized because it is a function considered to be “inherently governmental.” In performing this function, AGMC civilian and military employees provide policy and direction for all precision measurement equipment laboratories Air Force wide, inspect these laboratories for compliance with required policies and procedures, and procure calibration standards used in calibration laboratories. Current plans for the metrology and calibration program provide for (1) retaining about 130 government employees to provide the metrology and calibration management function—with the Air Force leasing space at AGMC from the local reuse commission and (2) contracting out the primary standards laboratory and technical order preparation, which will also remain at AGMC, with the contractor leasing space from the reuse commission. The Air Force plans to retain ownership of mission-related maintenance and metrology and calibration equipment, which will be provided to the winning contractor(s) as government-furnished equipment. AGMC accountable records indicate the value of the depot maintenance equipment is $297.5 million and the value of the metrology and calibration equipment $28.5 million. Details such as the cost of the lease arrangement, allocation of utility and support costs between the Air Force and contractor(s), and the determination of whether the government or the contractor will be responsible for maintaining the equipment are not yet known. To manage the AGMC privatization, the Air Force established a program management office at Hill AFB. This office is responsible for developing the statement of work, request for proposal, acquisition plan, source selection plan, and related documents. The award is scheduled for September 29, 1995. 
Several key milestones leading up to contract award have slipped, compressing the schedule for the remaining tasks in the pre-contract-award period. Air Force officials describe this schedule as optimistic. After contract award, the Air Force plans to initiate a phased process for transitioning individual maintenance workloads to the contractor. Air Force officials stated that this 12-month transition period reduces the risk of interrupting ongoing operations and allows the contractor(s) an opportunity to build up an infrastructure and trained workforce. However, according to the program management office, a “turn-key” transition where the contractor becomes fully responsible for the AGMC workload at one point in time is the preferred strategy of the ALC system managers and may be adopted. Our work has identified several concerns regarding the cost, savings, and payback period for the Air Force’s implementation of the AGMC BRAC decision. These include concerns that (1) the projected cost of closing AGMC has doubled and may increase further; (2) the $3.8 million annual savings projected to result from AGMC’s closure is not likely to be realized because of potentially higher costs for contract administration, contractor profit, possible recurring proprietary data costs, and other factors that have not been considered in the cost computation; and (3) the payback period could be extended to over 100 years or never, depending upon the Air Force’s ability to contain one-time closure costs and recurring costs of performing the AGMC mission after privatization. Recognizing that projected closure costs have increased, in August 1994, the Air Force base closure group validated a Newark AFB/AGMC closure budget of $62.2 million. This amount is $30.9 million more than the original projection of $31.3 million. Almost all of the increase is attributable to the estimated $30.5 million transition cost to convert from Air Force to contractor operation. 
According to Air Force officials, the original cost estimate included only costs associated with transferring and separating personnel under the base closure process and with transferring a limited amount of workload to other Air Force depots. They noted that DOD has no prior experience with privatizing a large, complex depot maintenance facility. Additionally, since the development of the closure and privatization option for AGMC was done quickly, the time available to identify all the factors and costs associated with this option at the time of the 1993 BRAC was limited. We recomputed the payback period using DOD’s 1993 Cost of Base Realignment Actions (COBRA) model. We used the estimated nonrecurring costs validated by the Air Force in August 1994 (adjusted for inflation) and assumed that post-closure operations would result in $3.8 million annual savings, as DOD originally projected in 1993. The model indicated that, with these costs and assumptions, the payback period would be over 100 years rather than 8 years as originally projected by DOD. However, the DOD-approved discount rate used in the COBRA model has been reduced from 7 percent in the 1993 BRAC process to 2.75 percent in 1995. Consequently, we adjusted the COBRA model to the revised discount factor—holding all other variables constant—and found the revised payback period to be 17 years. Achieving a 17-year payback is dependent on no further increase in one-time closure costs and on achieving the $3.8 million annual post-closure operational cost savings originally projected by DOD. Our work has determined that neither of these assumptions is likely because of significant cost uncertainties. While the Air Force has recognized that an estimated $62.2 million will be required as BRAC-funded costs of closure, it also recognizes there will be additional one-time closure costs not funded by BRAC. 
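A simplified discounted-payback calculation illustrates how sensitive the payback period is to the discount rate. This sketch is not COBRA's methodology (COBRA phases costs and savings differently, which is how it produced the figures cited above), so it shows only the direction of the effect, using the $62.2 million one-time cost and $3.8 million annual savings from the text.

```python
# Simplified discounted-payback sketch, NOT the COBRA methodology.
# The $62.2M cost, $3.8M annual savings, and 7 / 2.75 percent discount
# rates are from the text; the resulting years differ from COBRA's.

def payback_years(one_time_cost, annual_savings, discount_rate, max_years=100):
    """Return the first year in which cumulative discounted savings
    cover the one-time cost, or None if never within max_years."""
    cumulative = 0.0
    for year in range(1, max_years + 1):
        cumulative += annual_savings / (1 + discount_rate) ** year
        if cumulative >= one_time_cost:
            return year
    return None

# At 7 percent, discounted savings never catch up ($3.8M / 0.07 < $62.2M);
# at 2.75 percent, the cost is eventually recovered.
at_7_percent = payback_years(62.2, 3.8, 0.07)
at_2_75_percent = payback_years(62.2, 3.8, 0.0275)
```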
For example, an estimated $4.86 million will be needed to cover costs such as interim health benefits for personnel separating from government employment. Also, there will be environmental cleanup costs of some undetermined amount. Thus far, $3.62 million has been identified for environmental cleanup. As already indicated, we have also identified other potential closure costs that the Air Force has not included. One is the cost of acquiring, for contractors expecting to bid on the AGMC maintenance workload, the right to data that some equipment manufacturers consider proprietary. Proprietary rights involve the claim of ownership by equipment manufacturers of some unique information, such as technical data, drawings, and repair processes, to protect the manufacturer’s market position by prohibiting disclosure outside the government. An Air Force official said cost estimates were submitted by four equipment manufacturers claiming proprietary rights, and these estimates were “absurdly high.” While we cannot identify what these additional one-time costs will be, any unidentified costs push the payback period out even further. At the time AGMC was identified for closure and privatization, DOD estimated an annual cost of $68.09 million for contractor operations and annual savings of $71.84 million in personnel and overhead costs—resulting in an estimated net annual savings of $3.8 million. Recurring costs after AGMC closure and privatization probably cannot be determined with any degree of assurance until after contract negotiation and award. However, some Air Force officials have estimated that rather than achieving savings, annual recurring costs could actually exceed current costs of operations. 
For example, an Air Force Materiel Command (AFMC) memorandum noted that prevailing labor rates and private sector charges for similar items suggest that it will be difficult to keep the annual contract value the same as the current annual civilian salary—a key assumption in achieving the originally projected $3.8 million annual savings. An AFMC analysis determined that, assuming these costs are comparable, additional costs for profit and contract administration could result in post-closure operation costs exceeding the current operation costs by at least $1.8 million. Additional costs for proprietary data and taxes could increase the post-closure operation costs by $3.8 million annually. A November 1994 AFMC memorandum informed system managers of increased funding requirements for AGMC workloads to cover anticipated increases in costs of operation under privatization-in-place. A December 1994 meeting of the Acquisition Strategy Panel confirmed the projected increases. For example, the projected fiscal year 1997 costs after privatization-in-place were about 107 percent higher than projected costs under government operation. Additionally, the projected costs of contractor operations for the 5-year period between fiscal years 1996 and 2000 were estimated to be over $456 million more than previously estimated costs of government operations over that period. Other privatization issues relate to (1) proprietary data claims, (2) the effect of the closure on excess depot maintenance capacity, (3) the impact of privatizing core workload, (4) the segmentation of the metrology and calibration mission, and (5) the transfer of AGMC property and facilities to the local reuse commission. The proprietary rights to technical data are unresolved for some workloads to be contracted out and could greatly increase the costs of privatization. 
In this case, when contractors have a legitimate claim of ownership, the government cannot make this information available to other private sector firms that compete for the AGMC maintenance workload. The amount of depot maintenance workload at AGMC that involves proprietary data, the extent to which owners of proprietary rights are willing to sell these rights to the government, and the potential cost of this acquisition have not been determined. Air Force officials noted they are investigating possible methods for the prospective bidders to gain the necessary data rights as part of their proposal. However, proprietary data problems have already contributed to the delay of several key program milestones, including preparation of the statement of work and acquisition and source selection plans, and are a potential barrier to the AGMC privatization. The privatization of AGMC will not reduce excess capacity by the 1.7 million hours previously estimated if privatization-in-place is completed as currently planned. Since many of the systems and components currently repaired at AGMC are not repaired elsewhere, the AGMC depot maintenance capability does not generally duplicate repair capability found elsewhere. Where duplicate capability exists, consolidating like repair workloads and eliminating redundancies would be expected to generate economies and efficiencies. Currently, it is planned that almost all the AGMC capability will be retained in place for use by private contractors. The Air Force will retain ownership of depot plant equipment and the standards laboratory equipment, which AGMC accountable records indicate are valued at about $326 million. With this arrangement, it is difficult to understand how DOD projects the elimination of 1.7 million hours of excess capacity. All of AGMC’s maintenance workload has been identified as core work to be retained in government facilities. 
Since 1993, when the Air Force recommended that AGMC be closed and privatized, each of the services has identified depot maintenance capability it considers essential to retain as organic DOD capability—referred to as core capability. According to Office of the Secretary of Defense guidance, core exists to minimize operational risks and to guarantee required readiness for critical weapon systems. The Air Force determined that 100 percent of the AGMC depot maintenance workload is core. AGMC is the only Air Force depot activity having all its repair workload defined as core—other depots’ core capability ranges from 59 percent at Sacramento ALC to 84 percent at Warner Robins ALC. An AFMC memorandum noted some inconsistency in planning to contract out workload defined as 100 percent core, while continuing to support the need for retaining core capability in DOD facilities. However, the memorandum noted that the inherent risk of contracting out can be minimized if the workload is retained at AGMC as a result of privatization-in-place. Air Force officials stated that retaining government ownership of the mission-related equipment at AGMC is essential to controlling the risk of privatizing this critical core workload. The current plan to retain part of the metrology and calibration mission to be performed by Air Force personnel while privatizing the standards laboratory function may be neither practicable nor cost-effective. We found that the standards laboratory function is generally the training ground where Air Force civilian personnel develop the skills they need to perform the other metrology and calibration functions that will be continued at AGMC as a government operation. We discussed this issue with personnel from both the Army and the Navy who maintain similar organic capabilities to support service metrology and calibration management functions. 
They noted that from their perspective, contracting part of this work while maintaining most of it as a government activity would not be desirable. Navy officials noted that 100 percent of their metrology and calibration program management personnel were formerly employed in the primary standards laboratory. Army and Navy officials stated that the experience and training gained from their prior work in laboratories was essential to performance of program management responsibilities. We questioned the viability of having the Air Force interservice its metrology and calibration activities to the Army and/or the Navy, which have similar activities. Army and Navy officials said they believe it would be possible to combine the Air Force metrology and calibration function with that of one or both of the other services. Air Force officials said they considered interservicing but determined that neither the Army nor the Navy facilities meet the tolerances required for calibrating some Air Force equipment or have the capacity to assume the Air Force workload. Army and Navy officials stated that an existing memorandum of agreement among the three military departments provides that if one of the primary standards laboratories loses its capability, the remaining laboratories would assist in meeting calibration requirements. These officials said they believe that interservicing or joint operations should be further considered by the Air Force. The AGMC privatization-in-place approach is based on transferring ownership of the Newark AFB/AGMC property and facilities, which the Air Force estimates to be worth about $331 million, to the local reuse commission. To make this approach work, the Air Force must transfer ownership of the property and facilities at no cost or less than fair market value. 
Whether this transfer will take place is unclear since (1) the fair market value has not been determined and (2) agreements have not been reached as to the cost of the property or means of payment and as to whether the reuse commission is willing to assume responsibility for operating the property and facilities. To effect a property transfer at below the estimated fair market value, the Secretary of the Air Force must explain the cost and approve the transfer. Air Force officials noted that, pending results of the environmental impact analysis, they expect to convey the property through an economic development conveyance with very favorable terms to the local reuse commission. A local reuse commission official told us that until recently the commission believed the Newark AFB/AGMC property would be transferred to the commission at no cost. The official noted that it is questionable whether the commission will be interested in acquiring the property under other conditions. DOD historically has encountered difficulties in trying to close military bases. This makes us reluctant—absent very compelling reasons—to recommend that DOD revisit prior BRAC decisions. However, we believe that the problems being faced in implementing this decision are of such an unusual nature as to warrant revisiting the planned closure and privatization of AGMC. Therefore, we recommend that the Secretaries of the Air Force and Defense reevaluate, as a part of the ongoing BRAC 1995 process, both DOD’s 1993 recommendation to close Newark AFB/AGMC and the Air Force’s approach to implementing the closure decision through privatization-in-place. Part of the work on this assignment resulted from our ongoing effort to review various depot maintenance issues, including an analysis of the status of DOD’s efforts to implement depot closures resulting from prior BRAC decisions. We completed work for this report in December 1994. 
Our work was performed in accordance with generally accepted government auditing standards. We discussed a draft of this report with agency officials and have included their comments where appropriate. Our scope and methodology are discussed in greater detail in appendix I. We are sending copies of this report to the Director, Office of Management and Budget; the Secretaries of Defense and the Air Force; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Julia Denman, Assistant Director and Project Director, and Frank Lawson, Deputy Project Director. You asked us to review how the Department of Defense (DOD) is managing various issues related to the closure of depot maintenance activities, including (1) the allocation of workload that is currently being performed at these activities, either to DOD activities or to the commercial sector; (2) policies and procedures for the disposition of equipment at these activities; (3) policies and procedures to provide the existing workforce opportunities for employment; (4) the potential for conversion of these activities into commercial repair activities; and (5) an update of DOD’s estimates for closure costs and savings as a result of implementing prior Defense Base Closure and Realignment Commission (BRAC) decisions for depot closures. We discussed the Newark Air Force Base closure and privatization of the Aerospace Guidance and Metrology Center (AGMC) with Air Force officials responsible for implementing the BRAC decision at AGMC, Air Force Materiel Command (AFMC), and Air Force headquarters. We also (1) discussed estimated closure costs and savings with Air Force officials at various locations and (2) toured the AGMC facility, conducting interviews with center personnel and reviewing historical and evolving documentation. 
In addition, we contacted Defense Contract Management Command, Defense Contract Audit Agency, and AFMC contracting personnel for contract-related information and Army and Navy metrology officials responsible for the primary standards laboratories to obtain information on their capability to maintain the AGMC metrology workload and their views on privatizing part of the metrology functions while continuing to keep the management function as a government operation. We analyzed laws, policies, and regulations governing core capability and Office of Management and Budget Circular A-76 and Policy Letter 92-1 for information on inherently governmental functions. To assess the impact of the increase in the estimated cost of closing Newark AFB/AGMC, we used the 1993 Cost of Base Realignment Actions model to calculate the closure and relocation cost payback period. In conducting this review, we used the same reports and statistics the Air Force uses to monitor the cost of closure and estimate the recurring costs associated with AGMC privatization. We did not independently determine their reliability. The BRAC selection criteria are as follows: The current and future mission requirements and the impact on the operational readiness of DOD’s total force. The availability and condition of land, facilities, and associated airspace at both the existing and potential receiving locations. The ability to accommodate contingency, mobilization, and future total force requirements at both the existing and potential receiving locations. The cost and manpower implications. The extent and timing of potential costs and savings, including the number of years, beginning with the date of completion of the closure or realignment. The economic impact on communities. The ability of both the existing and potential receiving communities’ infrastructure to support forces, missions, and personnel. The environmental impact. 
Pursuant to a congressional request, GAO reviewed the cost and savings issues related to the closure and privatization of the Newark Air Force Base Aerospace Guidance and Metrology Center (AGMC). GAO found that: (1) the justification for closing AGMC is not clear; (2) the Department of Defense (DOD) considers AGMC work conducive to conversion to the private sector and has recommended closing AGMC through privatization and transferring its workload to other depots; (3) DOD estimates that closing AGMC would cost $31.3 million and would result in annual savings of $3.8 million; (4) one-time closure costs have doubled in the past year and may still be underestimated; (5) the projected costs of conducting post-privatization operations could exceed the cost of current Air Force operations and reduce or eliminate projected savings; and (6) other closure and privatization issues create uncertainty about the validity of the Air Force's planned action, including the disposition of proprietary data claims, the effect of the closure on excess depot maintenance capacity, the segmentation of the metrology and calibration mission, and the transfer of AGMC property to a local reuse commission.
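The digest above cites a $31.3 million one-time closure cost against $3.8 million in projected annual savings. As a rough illustration only—the 1993 Cost of Base Realignment Actions (COBRA) model referenced in this report phases costs over several years and applies discounting, which this sketch omits—a simple payback calculation using those figures would look like:

```python
def simple_payback_years(one_time_cost_millions, annual_savings_millions):
    """Years of annual savings needed to recover a one-time closure cost.

    An undiscounted, single-rate illustration; the COBRA model used by DOD
    is considerably more detailed.
    """
    if annual_savings_millions <= 0:
        # If projected savings evaporate, the closure cost is never recovered.
        return float("inf")
    return one_time_cost_millions / annual_savings_millions

# Figures cited in the digest: $31.3 million closure cost,
# $3.8 million projected annual savings.
payback = simple_payback_years(31.3, 3.8)
print(f"{payback:.1f} years")  # roughly 8.2 years
```

The sketch also shows why the report's cost findings matter: if post-privatization operating costs rise enough to erase the $3.8 million in annual savings, the payback period becomes indefinite.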
Managed by DHS’s Customs and Border Protection (CBP), SBInet is intended to strengthen CBP’s ability to detect, identify, classify, track, and respond to illegal breaches at and between ports of entry. The SBI Program Executive Office, which is organizationally within CBP, is responsible for managing key acquisition functions associated with SBInet, such as requirements management and risk management. Within the Executive Office, the SBInet System Program Office (SPO) is responsible for managing the day-to-day development and deployment of SBInet. In September 2006, CBP awarded a 3-year contract to the Boeing Company for SBI, with three additional 1-year options. As the prime contractor, Boeing is responsible for designing, producing, testing, deploying, and sustaining the system. In September 2009, CBP extended its contract with Boeing for the first option year. CBP is acquiring SBInet incrementally in a series of discrete units of capabilities, referred to as “blocks.” Each block is to deliver one or more system capabilities from a subset of the total system requirements. The first block, known as Block 1, is to include a mix of surveillance technologies (e.g., cameras, radars, and sensors) and C3I technologies that are to produce a common operating picture—a uniform presentation of activities within specific areas along the border. Block 1 is to be initially deployed within the Tucson Sector to the Tucson Border Patrol Station (TUS-1) and to the Ajo Border Patrol Station (AJO-1). As of May 2010, the TUS-1 system is scheduled for government acceptance in September 2010, with AJO-1 acceptance in November 2010. In January 2010, the DHS Secretary ordered a departmentwide reassessment of the program to include a comprehensive assessment of alternatives to SBInet to ensure that the department utilizes the most efficient and effective technological and operational solutions to secure the border. 
Pending the results of the assessment, the Secretary also froze all Block 1 expenditures beyond those needed to complete the implementation of the initial SBInet deployments to TUS-1 and AJO-1. Further, in March 2010, the department announced its plans to redeploy $50 million from its American Recovery and Reinvestment Act of 2009 funding to purchase currently available, stand-alone technology, such as remote-controlled camera systems called Remote Video Surveillance Systems, and truck-mounted systems with cameras and radar, called Mobile Surveillance Systems, to meet near-term operational needs. In order to measure system acquisition progress and promote accountability for results, organizations need to establish clear commitments around what system capabilities will be delivered, and when and where they will be delivered. In September 2008, we reported that the scope of SBInet was becoming more limited without becoming more specific, thus making it unclear and uncertain what system capabilities would be delivered when and to what locations. Accordingly, we recommended that DHS establish and baseline the specific program commitments, including the specific system functional and performance capabilities that are to be deployed to the Tucson, Yuma, and El Paso Sectors, and establish when these capabilities are to be deployed and are to be operational. To its credit, the SPO subsequently defined the scope of the first incremental block of SBInet capabilities that it intended to deploy and make operational; however, these capabilities and the number of geographic locations to which they are to be deployed have continued to shrink. For example, the number of component-level requirements to be deployed to the TUS-1 and AJO-1 locations has decreased by about 32 percent since October 2008 (see fig. 1). 
In addition, the number of sectors that the system is to be deployed to was reduced from three border sectors spanning about 655 miles to two sectors spanning about 387 miles. Further, the stringency of the performance measures was relaxed, to the point that system performance is now deemed acceptable if it identifies less than 50 percent of items of interest that cross the border. According to program officials, the decreases are due to poorly defined requirements and limitations in the capabilities of commercially available system components. The result will be a deployed and operational system that does not live up to user expectations and provides less mission support than was envisioned. The success of a large-scale system acquisition program, like SBInet, depends in part on having a reliable schedule of when the program’s set of work activities and milestone events will occur, how long they will take, and how they are related to one another. Among other things, a reliable schedule provides a road map for systematic execution of a program and the means by which to gauge progress, identify and address potential problems, and promote accountability. In September 2008, we reported that the program did not have an approved master schedule that could be used to guide the development of SBInet. Accordingly, we recommended that the SPO finalize and approve an integrated master schedule that reflects the timing and sequencing of SBInet tasks. However, DHS has yet to develop a reliable integrated master schedule for delivering the first block of SBInet. Specifically, the August 2009 SBInet integrated master schedule, which was the most current version available at the time of our review, did not sufficiently comply with seven of nine schedule estimating practices that relevant guidance states are important to having a reliable schedule. 
For example, the schedule did not adequately capture all necessary activities to be performed, including those to be performed by the government, such as obtaining environmental permits in order to construct towers. Further, the schedule did not include a valid critical path, which represents the chain of dependent activities with the longest total duration in the schedule, and it does not reflect a schedule risk analysis, which would allow the program to better understand the schedule’s vulnerability to slippages in the completion of tasks. These limitations are due, in part, to the program’s use of the prime contractor to develop and maintain the integrated master schedule, whose processes and tools do not allow it to include in the schedule work that it does not have under contract to perform, as well as the constantly changing nature of the work to be performed. Without having a reliable schedule, it is unclear when the first block will be completed, and schedule delays are likely to continue. The decision to invest in any system, or major system increment, should be based on reliable estimates of costs and meaningful forecasts of quantifiable and qualitative benefits over the system’s useful life. However, DHS has not demonstrated the cost-effectiveness of Block 1. In particular, it has not reliably estimated the costs of this block over its entire life cycle. To do so requires DHS to ensure that the estimate meets key practices that relevant guidance states are important to having an estimate that is comprehensive, well-documented, accurate, and credible. However, DHS’s cost estimate for Block 1, which is about $1.3 billion, does not sufficiently possess any of these characteristics. Further, DHS has yet to identify expected quantifiable or qualitative benefits from this block and analyze them relative to costs. 
According to program officials, it is premature to project such benefits given the uncertainties surrounding the role that Block 1 will ultimately play in overall border control operations, and that operational experience with Block 1 is first needed in order to estimate such benefits. While we recognize the value of operationally evaluating an early, prototypical version of a system in order to better inform investment decisions, we question the basis for spending in excess of a billion dollars to gain this operational experience. Without a meaningful understanding of SBInet costs and benefits, DHS lacks an adequate basis for knowing whether the initial system solution is cost-effective. Successful management of large information technology programs, like SBInet, depends in large part on having clearly defined and consistently applied life cycle management processes. In September 2008, we reported that the SBInet life cycle management approach had not been clearly defined. Accordingly, we recommended that the SPO revise, approve, and implement its life cycle management approach, including implementing key requirements development and management practices, to reflect relevant federal guidance and leading practices. To the SPO’s credit, it has defined key life cycle management processes that are largely consistent with relevant guidance and associated best practices. However, it has not effectively implemented these processes. In particular: The SPO revised its Systems Engineering Plan, which documents its life cycle management approach for SBInet definition, development, testing, deployment, and sustainment, in November 2008, and this plan is largely consistent with DHS and other relevant guidance. For example, it defines a number of key life cycle milestone or “gate” reviews that are important in managing the program, such as initial planning reviews, requirements reviews, system design reviews, and test reviews. 
The plan also requires most key artifacts and program documents that DHS guidance identified as important to each gate review, such as a risk management plan and requirements documentation. However, the SPO has not consistently implemented these life cycle management activities for Block 1. For example, the SPO did not review or consider key artifacts, including plans for testing and evaluating the performance of the system, as well as assessing the robustness of the system’s security capabilities, during its Critical Design Review, which is the point when, according to the plan, verification and testing plans are to be in place. The SBInet Requirements Development and Management Plan states that (1) a baseline set of requirements should be established by the time of the Critical Design Review; (2) requirements should be achievable, verifiable, unambiguous, and complete; and (3) requirements should be bidirectionally traceable from high-level operational requirements through detailed low-level requirements to test plans. Further, the plan states that ensuring traceability of requirements from lower-level requirements to higher-level requirements is an integral part of ensuring that testing is properly planned and conducted. However, not all Block 1 component requirements were sufficiently defined at the time that they were baselined at the Critical Design Review. Further, operational requirements continue to be unclear and unverifiable, which has contributed to testing challenges, including the need to extemporaneously rewrite test cases during test execution. In addition, while requirements are now largely traceable backwards to operational requirements and forward to design requirements and verification methods, this traceability has not been used until recently to verify that higher-level requirements have been satisfied. In 2008, the SPO documented a risk management approach that largely complies with relevant guidance. 
However, it has not effectively implemented this approach for all risks. Moreover, available documentation does not demonstrate that significant risks were disclosed to DHS and congressional decision makers in a timely fashion as we previously recommended, and, while risk disclosure to DHS leadership has recently improved, not all risks have been formally captured and thus shared. For example, some of the risks that have not been formally captured include the lack of well-defined acquisition management processes, staff with the appropriate acquisition expertise, and agreement on key system performance parameters. However, the SPO recently established a risk management process for capturing SBI enterprisewide risks, including the lack of well-defined acquisition management processes and staff expertise. Reasons cited by program officials for not implementing these processes include their decision to rely on task order requirements that were developed prior to the Systems Engineering Plan and competing SPO priorities, including meeting an aggressive deployment schedule. Until the SPO consistently implements these processes, it will remain challenged in its ability to successfully deliver SBInet. To address the program’s risks, uncertainties, and acquisition management weaknesses, our report being released today provides 12 recommendations. In summary, we recommended that DHS limit future investment in SBInet to work that is either already under contract and supports the completion of Block 1 activities for deployment to TUS-1 and AJO-1 and/or provides a basis for a departmental decision on what, if any, expanded investment in SBInet is justifiable as a prudent use of DHS’s resources for carrying out its border security and immigration management mission. 
As part of this recommendation, we reiterated prior recommendations pertaining to program management challenges and recommended that DHS address weaknesses identified in our report by, for example, ensuring that the SBInet integrated master schedule, Block 1 requirements, and the Systems Engineering Plan, among other program elements, are consistent with best practices. We also recommended that the program undertake a detailed cost-benefit analysis of any incremental block of SBInet capabilities beyond Block 1 and report the results of such analyses to CBP and DHS leadership. Further, we recommended that DHS decide whether proceeding with expanded investment in SBInet represents a prudent use of the department’s resources, and report the decision, and the basis for it, to the department’s authorization and appropriations committees. To DHS’s credit, it has initiated actions to address our recommendations. In particular, and as previously mentioned, the department froze all funding beyond the initial TUS-1 and AJO-1 deployments until it completes a comprehensive reassessment of the program that includes an analysis of the cost and projected benefits of additional SBInet deployments, as well as the cost and mission effectiveness of alternative technologies. Further, in written comments on a draft of our report, DHS described steps it is taking to fully incorporate best practices into its management of the program. For example, DHS stated that, in response to our previous recommendations, it has instituted more rigorous oversight of SBInet, requiring the program to report to the department’s Acquisition Review Board at specified milestones and receive approval before proceeding with the next deployment increment. 
With respect to our new recommendations, DHS stated that it is, among other things, taking steps to bring the Block 1 schedule into alignment with best practices, verifying requirements and validating performance parameters, updating its Systems Engineering Plan, and improving its risk management process. In closing, let me emphasize our long held position that SBInet is a risky program. To minimize the program’s exposure to risk, it is imperative for DHS to follow through on its stated commitment to ensure that SBInet, as proposed, is the right course of action for meeting its stated border security and immigration management goals and outcomes, and once this is established, for it to ensure that the program is executed in accordance with proven acquisition management best practices. To do less will perpetuate a program that has for too long been oversold and under delivered. This concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittees may have. For questions about this statement, please contact Randolph C. Hite at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include Deborah Davis, Assistant Director; David Alexander; Rebecca Alvarez; Carl Barden; Sylvia Bascopé; Tisha Derricotte; Neil Doherty; Nancy Glover; Dan Gordon; Cheryl Dottermusch; Thomas J. Johnson; Kaelin P. Kuhn; Jason T. Lee; Jeremy Manion; Taylor Matheson; Lee McCracken; Jamelyn Payan; Karen Richey; Karl W.D. Seifert; Matt Snyder; Sushmita Srikanth; Jennifer Stavros-Turner; Stacey L. Steele; Karen Talley; and Juan Tapia-Videla. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Secure Border Initiative (SBI) is intended to help secure the 6,000 miles of international borders that the contiguous United States shares with Canada and Mexico. The program, which began in November 2005, seeks to enhance border security and reduce illegal immigration by improving surveillance technologies, raising staffing levels, increasing domestic enforcement of immigration laws, and improving physical infrastructure along the nation's borders. Within SBI, the Secure Border Initiative Network (SBInet) is a multibillion dollar program that includes the acquisition, development, integration, deployment, and operation of surveillance technologies--such as unattended ground sensors and radar and cameras mounted on fixed and mobile towers--to create a "virtual border fence." In addition, command, control, communications, and intelligence (C3I) software and hardware are to use the information gathered by the surveillance technologies to create a real-time picture of what is transpiring within specific areas along the border and transmit the information to command centers and vehicles. The testimony summarizes our most recent report on SBInet, which provided a timely and compelling case for DHS to rethink the plans it had in place at the beginning of this year for investing in SBInet. In this regard, we showed that the scope of the initial system's capabilities and areas of deployment have continued to shrink, thus making it unclear what capabilities are to be delivered when. Moreover, DHS had yet to demonstrate the cost-effectiveness of the proposed SBInet solution, and thus whether the considerable time and money being invested represented a prudent use of limited resources. Further, DHS had not employed the kind of acquisition management rigor and discipline needed to reasonably ensure that the proposed system capabilities would be delivered on time and within budget. 
Collectively, we concluded that these limitations increased the risk that the proposed solution would not meet the department's stated border security and immigration management goals. To minimize the program's exposure to risk, we recommended that DHS determine whether its proposed SBInet solution satisfied the department's border security needs in the most cost-effective manner and that the department improve several key life cycle management areas. DHS largely agreed with our recommendations. More importantly, since receiving these recommendations in a draft of our report in March 2010, the Secretary of Homeland Security has taken action to limit the department's near-term investment in SBInet pending its completion of an analysis of alternative investment options. This and other planned actions are consistent with the intent of our recommendations.
SSA administers three major benefit programs: (1) Old-Age and Survivors Insurance (OASI), which provides benefits to retired workers and their families and to families of deceased workers; (2) Disability Insurance (DI), which provides benefits to eligible workers with disabilities and their family members; and (3) Supplemental Security Income (SSI), which provides income for aged, blind, or disabled individuals with limited income and resources. In addition to paying benefits through these three programs, SSA also issues Social Security cards, maintains earnings records, and performs various other functions through a network of field, state, and headquarters offices. SSA’s field offices are the agency’s primary points for providing face-to-face service to the public. In addition to processing new disability and retirement claims, field offices manage other workloads related to program integrity, such as determining whether certain individuals with disabilities remain eligible to receive disability payments based on program criteria. Besides field offices, SSA operates Social Security Card Centers, which issue Social Security numbers; Teleservice Centers, which offer services nationally via a toll-free telephone number; and Program Service Centers, which maintain earnings records, in addition to other functions. In 2008, SSA’s administrative budget for managing its operations was $11.1 billion. The process for deciding who is eligible for SSA disability benefits is complex, consuming a large portion of SSA’s administrative budget. Several state and federal offices and several adjudication levels are involved in determining whether a claimant is eligible for benefits. The process begins when an individual files an application for disability benefits at an SSA field office, online, or over SSA’s toll-free number. 
In each case, an SSA representative determines whether a claimant meets the non-medical eligibility criteria of each program, such as ensuring that an SSI applicant meets income requirements, or determining if a DI applicant has a sufficient number of work credits. If applicants meet the non-medical eligibility criteria, field office personnel will help claimants complete their applications and obtain claimants’ detailed medical, education, and work histories. The completeness of the information gathered at this time can affect the accuracy and speed of the decision. After the field office determines that an applicant has met SSA’s non-medical eligibility requirements for disability benefits, up to four adjudicative levels may review the applicant’s claim for eligibility, generally based on medical criteria. The first adjudicative level is the state Disability Determination Services (DDS), where a disability examiner, working with medical staff, must make every reasonable effort to help the claimant get medical reports from physicians, hospitals, clinics, or other institutions where the claimant has received past medical treatment. After assembling all medical and vocational information for the claim, the DDS examiner, in consultation with appropriate medical staff, determines whether the claimant meets the requirements of the law for having a disability. In doing so, the DDS examiner uses a five-step, sequential evaluation process that includes a review of the claimant’s current work activity, severity of impairment, and vocational factors. See figure 1. Claimants who are dissatisfied with the initial DDS determination have up to three additional levels of adjudicative appeal. The claimant may request a “reconsideration” of the claim, which is conducted by DDS personnel who were not involved in the original decision.
If the reconsideration team concurs with the initial denial of benefits, the claimant then has 60 days from the time of this decision to appeal and request a hearing before an administrative law judge (ALJ). ALJs, who are based in 140 hearing offices located throughout the nation, can consider new evidence and request additional information, including medical evidence or medical and vocational expert testimony. A claimant who is dissatisfied with the hearings decision may request, within 60 days of the ALJ’s decision, that the Appeals Council review the claim. The Appeals Council is SSA’s fourth and final adjudicative appeals level and is composed of administrative appeals judges. The Appeals Council may uphold, modify, or reverse the ALJ’s action, or it may remand the claim to the ALJ for another hearing and issuance of a new decision. The decision of the Appeals Council is the Commissioner’s final decision. To appeal this decision, the claimant must file an action in federal court. SSA measures its performance in managing its workloads in various ways. For its disability claims process, SSA tracks, at each level, the number of claims pending a decision each year and the time it takes to issue a decision. The agency also uses a relative measure to determine the backlog by considering how many cases should optimally be pending at year-end. This relative measure is referred to as “target pending” and is set for each level of the disability process with the exception of the reconsideration level. From 1999 to 2006, SSA’s target pending was 400,000 for claims at the initial stage and 300,000 and 40,000 for the hearings and Appeals Council stages, respectively. The number of pending claims that exceeds these targets represents the backlog.
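SSA's "target pending" measure is a simple calculation: at each level, the backlog is the number of pending claims in excess of the target. The sketch below illustrates the arithmetic; the target figures are those cited above, but the year-end pending counts are hypothetical examples, not data from this testimony.

```python
# Illustrative sketch of SSA's "target pending" backlog measure.
# Targets are the fiscal year 1999-2006 figures cited in the report;
# the year-end pending counts below are hypothetical.
TARGET_PENDING = {
    "initial": 400_000,
    "hearings": 300_000,
    "appeals_council": 40_000,
}

def backlog(pending_by_level):
    """Claims pending in excess of the target at each level (floored at zero)."""
    return {
        level: max(0, pending - TARGET_PENDING[level])
        for level, pending in pending_by_level.items()
    }

# Hypothetical year-end pending counts.
pending = {"initial": 520_000, "hearings": 715_000, "appeals_council": 52_000}
print(backlog(pending))
# {'initial': 120000, 'hearings': 415000, 'appeals_council': 12000}
print(sum(backlog(pending).values()))  # 547000
```

Because the measure is relative, a level can hold hundreds of thousands of pending claims yet report no backlog, as long as pending stays at or below the target.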
With respect to service delivery, SSA uses various measures of performance, including work productivity (average work units performed per year, per employee), customer wait times at field offices, and overall customer satisfaction with service delivery. SSA has experienced increased backlogs and processing times associated with disability claims in recent years, as well as declines in measures of field office service. These trends are likely due to rising workloads and staffing shortfalls. The total number of backlogged disability claims in SSA more than doubled over the last decade, with the greatest accumulation of claims occurring at the hearing level. By the close of fiscal year 2006, the total number of backlogged disability claims, by SSA’s measure, reached 576,000, which represented an overall growth rate of more than 120 percent from fiscal year 1997. As shown in figure 2, backlogs of varying degree have occurred at all stages of the claims process where backlogs are calculated. However, since fiscal year 2001, these claims were concentrated most heavily at the hearings level and, to a lesser extent, at the initial processing level within the DDS offices. The hearings level accounted for the largest share of backlogged claims for 7 of the 10 years we reviewed. In fiscal years 2000 and 2001, the DDS level accounted for the largest share of the backlog. The Appeals Council had the largest backlog in fiscal year 1999, but dramatically reduced these numbers by 2006. In concert with changes in the total claims backlog, average processing times for disability claims at most adjudicative levels increased. As shown in figure 3, although processing times decreased dramatically at the Appeals Council level, they increased markedly at the hearings level, and somewhat at the initial and reconsideration levels. For example, from 1997 to 2006, processing times increased about 20 days at the DDS level and 95 days at the hearings level. 
Further, in fiscal year 2006, 39 percent of all hearing decisions took between 365 and 599 days to process; 28 percent took 600 to 999 days; and 2 percent took over 1,000 days. For two regions (region 5 in Chicago and region 10 in Seattle), nearly half of all hearing decisions made in fiscal year 2006 took longer than 600 days to complete. One contributor to increased disability claims backlogs has been spikes in new applications. For example, the number of initial applications for DI and SSI benefits increased by 21 percent overall from fiscal years 1997 to 2006, contributing to the claims backlog and adding to the pressure on field office personnel who initially review these claims. These increases can be attributed to a number of influences: periodic downturns in the economy, the aging of the baby boom population, increased referrals from other programs, previous changes in program eligibility requirements and regulations, and increased program outreach. Officials in one region recounted an initiative that targeted outreach to the homeless, which increased applications and also added to processing times. They also attributed some processing delays to the time required to track homeless candidates and help them document their disabilities. With respect to the economy, SSA officials, DDS senior managers, and our prior work all confirm that economic downturns, failing industries, and natural disasters can precipitate new disability applications. The growth in the disability claims backlogs has also coincided with losses in key personnel associated with the disability claims process. For example, although DDS staff increased about 4 percent from 1997 to 2006, DDSs have experienced high rates of staff turnover and attrition. Attrition rates for DDS disability examiners, who are state employees, were almost double those of SSA federal staff.
Many DDS senior managers we spoke with said that turnover of experienced disability examiners has affected productivity. For example, from September 1998 to January 2006, over 20 percent of disability examiners hired during that period left or were terminated within their first year. DDS officials said the loss of experienced staff affects the DDSs’ ability to process disability claims workloads because it generally takes newly hired examiners about 2 years to become proficient in their role. Further, at the hearings level, SSA generally experienced shortfalls in ALJs and support staff—decision writers, staff who prepare case files for review, attorneys, and claims technicians. The number of ALJs available to conduct hearings ranged from a high of 1,087 in 1998 to a low of 919 in 2001, ending at 1,018 in 2006. Although SSA has had fewer than 1,100 ALJs over the last 10 years, in May 2006, SSA’s Commissioner noted that the agency requires no fewer than 1,250 ALJs to properly manage its current pending workload. With respect to support staff, numbers ranged from a high of 5,500 in 1999 to a low of 4,700 in 2006. Although SSA managers and judges would like to see a ratio of 5.25 support staff per ALJ, the actual ratio has more often been lower, ranging from 4.59 in 1997 to 4.12 in 2006. Only in 2001, when the number of ALJs was at its lowest point, was the target ratio achieved. Finally, a number of initiatives undertaken by SSA to improve the disability process and potentially remedy backlogs have faltered for a variety of reasons, including poor planning and execution. In fact, some initiatives had the effect of slowing processing times by reducing staff capacity, increasing the number of appeals, or complicating the decision process. Several other initiatives improved the process but proved too costly and were subsequently abandoned.
This was the case for several facets of a major 1997 initiative, known as the “Disability Process Redesign,” which sought to streamline and expedite disability decisions for both initial claims and appeals. In the past, we reported that various initiatives within this effort became problematic and were largely discontinued due to their ineffectiveness and high cost. Further, implementation of an electronic system enhanced some aspects of the disability claims process, but also caused delays due to system instability and shutdowns at the DDS and hearings offices. Similarly, the “Hearings Process Improvement” initiative, implemented in 2000, involved reorganizing hearing office staff and responsibilities with the goal of reducing the number of appeals. However, many of the senior SSA officials we spoke with said that this initiative left key workloads unattended and was therefore responsible for dramatic increases in delays and processing times at the hearings level. In addition to disability claims backlogs and increased processing times, other aspects of SSA’s service delivery at field offices have declined in recent years. From fiscal year 2002 to 2006, the average time customers waited in a field office to speak with an SSA representative increased by 40 percent, from 15 to 21 minutes. In fiscal year 2008, more than 3 million customers waited for over 1 hour to be served. Further, SSA’s 2007 Field Office Caller Survey found that 51 percent of customers calling selected field offices had at least one earlier call that had gone unanswered. Because SSA based its results only on customers who were ultimately able to get through, the actual percentage of customers with unanswered calls was likely even higher. Overall, these factors may have contributed to a 3 percentage point drop in SSA’s overall customer satisfaction, from 84 percent in fiscal year 2005 to 81 percent in fiscal year 2008.
Declines in field office service delivery measures coincided with a period of staff turnover and losses agencywide. From fiscal year 2005 to 2008, SSA experienced a 2.9 percent reduction in total employees and a 4.4 percent reduction in field office employees. At the same time, employees and managers reported high levels of stress. We asked 153 employees at 21 offices to rate the stress they experienced in attempting to complete their work in a timely manner: 65 percent reported feeling stress to a great or very great extent on a daily basis, while 74 percent of office managers described high levels of stress. Declines in service delivery measures also coincided with increased workloads. For example, the number of annual field office visitors increased by about 2.5 million customers, from 41.9 million in fiscal year 2006 to 44.4 million in fiscal year 2008. In addition, SSA’s field offices experienced growth in other types of workloads. Between 2005 and 2008, SSA performed more work related to managing beneficiary rolls and assigning Social Security numbers. Finally, the work SSA performs on behalf of other federal agencies has grown. For example, new elements of the Medicare prescription drug program and new state laws requiring federal government verification of work authorization are resulting in additional work and field office visits. SSA projects an increase in disability claims and other workloads over the coming years while at the same time anticipating the retirement of many experienced workers. Specifically, SSA projects:
- an overall 13 percent increase in retirement and disability claims from fiscal years 2007 to 2017;
- 22 percent growth in the number of retirement and disability beneficiaries from 2007 to 2015; and
- that nearly 40 percent of its current workforce will be eligible to retire in 5 years and 44 percent will retire by 2016.
SSA continues to take steps to address disability claims backlogs and service delivery challenges, including efforts to improve its disability claims process, redistribute workloads across field offices, and develop a plan for addressing future growth in disability and retirement claims. Some of these efforts have been hampered by poor planning while others are too recent to evaluate. SSA has pursued a number of initiatives to improve the overall efficiency and effectiveness of its disability claims process. For example, the Disability Service Improvement (DSI) initiative, piloted in 2006, was designed to produce correct decisions on disability claims as early in the application process as possible, with the expectation that DSI would reduce both appeals of denied claims and future backlogs. The plan involved several envisioned changes to improve the disability determination process. However, results of the initiative by early 2007 were mixed. (See table 1 for examples of these initiatives and their results.) In general, we found that these and other DSI initiatives were hampered by rushed implementation, poor communication, and inadequate financial planning. Overall, the DSI initiatives cost more than the agency had originally estimated. The future of DSI currently remains uncertain. While the Quick Disability Determination will likely be implemented nationwide, SSA suspended national roll-out of most portions of the DSI initiative and issued a proposed rule to suspend the Federal Reviewing Official and Medical and Vocational Expertise initiatives in the Boston region. SSA has said that it will continue to conduct an evaluation of DSI initiatives to determine whether they should be reinstated. Because SSA’s assessment of DSI components to date has been limited, in 2007 we recommended that SSA conduct a thorough evaluation of DSI before deciding which elements should be implemented or discontinued.
SSA noted that it would continue to collect data and monitor outcomes to evaluate DSI, but that, due to constrained resources, it may not be able to collect sufficient data to ensure the reliability of the results. SSA suspended DSI, in part, to refocus on reducing its hearings backlog, which had reached critical levels. In May 2007, SSA outlined a new hearings backlog reduction plan that focuses on reducing the existing backlog and preventing its recurrence through a series of steps that employ both prior innovations and new initiatives. However, officials we spoke with at SSA emphasized that the hearings backlog reduction plan is not meant to replace the DSI initiative but to complement it until a final decision is made regarding the future of DSI. Steps in the plan include updating SSA’s medical eligibility criteria, expediting cases for which eligibility is more clear-cut, improving hearings office capacity and performance, and other actions. The plan also includes the Commissioner’s proposal to dedicate $25 million to improving SSA’s electronic processing system. SSA’s efforts to reduce the hearings backlog may be supported by additional funds through recent legislation. Specifically, the American Recovery and Reinvestment Act of 2009 (ARRA) allocated $500 million to SSA to assist with processing workloads and related technology acquisitions. SSA has not yet determined how it will use this money for its various workloads. In December 2007, we recommended that SSA take the necessary steps to increase the likelihood that new initiatives will succeed, such as performing comprehensive planning to anticipate implementation challenges, including the appropriate staff in the design and implementation stages, establishing feedback mechanisms to track progress and problems, and performing periodic evaluations. SSA agreed with the intent of this recommendation, noting that it would take the necessary steps to improve the likelihood of success of future initiatives.
Accordingly, we are currently evaluating the extent to which the hearings backlog reduction plan includes components of sound planning and the potential effects of the plan on the hearings backlog and other SSA operations. As part of this review, we will (1) examine the plan’s potential to eliminate the hearings-level backlog, (2) determine the extent to which the plan includes components of sound planning, and (3) identify potential unintended effects of the plan on hearings-level operations and other aspects of the disability process. We expect to complete our work later this year. To address overall workloads and maintain customer service, SSA is shifting workloads to less busy offices. For example, if a field office has work demands that it cannot immediately cover, that office can request that some work be transferred to another office. Offices with particular expertise in that type of work will make themselves available, as they can process it more quickly. These efforts likely contributed to increased productivity levels. Specifically, the average amount of work produced by field office employees increased by 2.9 percent between fiscal years 2005 and 2008. Managers also are addressing workloads by using claims processing personnel to perform the duties typically conducted by lower-graded employees, and in some cases, office managers take on duties of their employees. Such duties include answering the telephone, providing initial services to arriving customers, processing requests for new or replacement Social Security cards, and conducting some administrative duties. Although visiting customers need attention, this practice may reduce time spent on other workloads, such as claims processing or managing the office. Moreover, as we noted earlier, the stress of expanding workloads and staffing constraints can negatively affect morale.
With fewer staff available, SSA has deferred some workloads, although this practice may have significant drawbacks. Specifically, SSA has focused on field office work it considers essential to its “core workloads,” such as processing new claims for Social Security benefits and issuing Social Security cards, while deferring other types of work, including changes of address, changes to direct deposit information, and reviews to determine beneficiaries’ continuing eligibility for DI and SSI benefits. Reviews of continuing eligibility, however, are key activities in ensuring payment accuracy. Such reviews yield lifetime savings for both DI and SSI of $10 for every dollar invested, according to SSA. In recent years, SSA has reduced the number of reviews conducted, citing budget limitations and an increase in core work. When reviews of benefits are delayed, some beneficiaries continue receiving benefits when they no longer qualify. SSA has used a variety of strategies to maintain adequate staffing levels overall, although it faces challenges with hiring, training, and retaining staff. For example, SSA:
- offers recruitment, relocation, and retention bonuses to individuals with needed skills;
- offers workplace flexibilities;
- uses dual compensation waivers from the Office of Personnel Management for certain hard-to-fill positions; and
- has developed recruiting efforts to reach out to a broader pool of candidates, including retired military and veterans with disabilities.
SSA may also use ARRA money to hire additional staff to help manage some of its workloads. However, in the past, SSA has encountered obstacles that delay hiring. For example, SSA’s ability to hire sufficient ALJs has been hindered by the length of the Office of Personnel Management’s review process. In addition, field office managers and staff at many locations we visited stated that it typically takes 2 to 3 years for new employees to become proficient after being hired.
For disability examiners, this process can take about 2 years, according to SSA staff, while at the same time turnover is high. More recently, in response to our recommendation that SSA develop a detailed service delivery plan, SSA stated that it intends to consolidate its various planning efforts into a single planning document. SSA commented that its consolidated document will, at minimum, include comprehensive plans for expanding electronic services for customers; increasing the centralization of receiving phone calls and working claims from customers while maintaining the network of local field offices; enhancing phone and video services in field offices (where applicable) and piloting self-service personal computers in the reception areas of those offices; and continuing to assess the efficiency of field offices. While a consolidated planning document will better reflect the variety of planning efforts SSA has to improve its operations, it remains unclear how SSA will manage growing workloads with its current infrastructure of approximately 1,300 field offices, while minimizing the deferral of its workloads and declines in customer service. By all accounts, the operational challenges that SSA faces are projected to become more acute in the coming years as our society ages. SSA’s aging workforce and our faltering economy may exacerbate these challenges. Over the years and across many fronts, SSA has taken numerous and varied steps to address its backlog of disability claims and its service delivery challenges, but often with mixed results or at the expense of some other key services. Funds that SSA receives through the ARRA may relieve staffing shortages and potentially improve electronic case processing, but more concerted efforts will likely be needed to get in front of the challenges ahead. 
We have recommended that, to increase the probability of success for any new initiatives aimed at reducing the backlog of claims, SSA focus on comprehensive planning that anticipates implementation challenges by involving key staff in design and implementation, establishing feedback loops, and performing periodic evaluations to ensure that reforms are executed effectively. We have also recommended that SSA develop a service delivery plan that addresses in detail how it will successfully deliver quality customer service in the future while managing growing work demands with constrained resources. SSA agreed that it should take necessary steps to improve the likelihood of success of future initiatives and to develop a comprehensive service delivery plan, and noted that it is taking steps toward these ends. We look forward to SSA’s progress as it moves forward with these efforts. Mr. Chairman and Members of the Subcommittee, this concludes my remarks. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For further information, please contact Daniel Bertoni at (202) 512-7215 or [email protected]. Also contributing to this statement were Michele Grgich, Erin Godtland, and Jessica Orr. Advisors included Blake Ainsworth, Barbara Bovbjerg, Julianne Cutts, Shelia Drake, Cindy Fagnoni, Sal Sorbello, and Paul Wright. Roger Thomas provided legal advice.
High-Risk Series: An Update (GAO-09-271, January 2009).
Social Security Administration: Service Delivery Plan Needed to Address Baby Boom Retirement Challenges (GAO-09-24, January 9, 2009).
Social Security Disability: Better Planning, Management, and Evaluation Could Help Address Backlogs (GAO-08-40, December 7, 2007).
Social Security Disability: Management Controls Needed to Strengthen Demonstration Projects (GAO-08-1053, September 26, 2008).
This is a work of the U.S. government and is not subject to copyright protection in the United States.
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
For years, the Social Security Administration (SSA) has experienced challenges managing a large disability workload and making timely decisions. In fiscal year 2006, SSA made about 3.7 million disability claims decisions, while over a million claims were awaiting a decision. Further, SSA has faced staffing challenges and difficulties managing its workloads at its network of approximately 1,300 field offices, where millions of people go to apply for disability and retirement benefits, to obtain Social Security cards, and for a host of other services. The Subcommittees on Income Security and Family Support, and on Social Security, House Committee on Ways and Means, asked GAO to address (1) key service delivery challenges facing SSA, particularly with respect to the backlog of disability claims, and (2) steps SSA is taking to address these challenges. This testimony is based primarily on reports assessing trends in disability claims processing and backlogs, steps SSA is taking to reduce the backlog, and other challenges SSA faces in meeting future service delivery needs. Certain information was updated to reflect recent legislative changes. In recent years, SSA has experienced a growing backlog of disability claims and deteriorating customer service at field offices. SSA's total backlog of disability claims more than doubled from 1997, reaching 576,000 in 2006, which has resulted in claimants waiting longer for final decisions. The backlog was particularly acute at the hearings level. SSA also experienced declines in field office service delivery, with average customer wait times in field offices increasing by 40 percent from 2002 to 2006, and over 3 million customers waiting more than 1 hour to be served in 2008. Two key factors likely contributed to the backlog and service delivery challenges: (1) staffing reductions or turnover of field office staff and key personnel involved in the disability claims process, and (2) increased workloads.
In particular, initial applications for disability benefits grew by more than 20 percent over the past 10 years. SSA projects further increases in workloads as the baby boom generation reaches its disability-prone years and retires. SSA has taken steps to improve its disability claims process, reduce the claims backlog, and manage its field office workloads, but some efforts were hampered by poor planning and execution while others are too recent to evaluate. In 2006, SSA introduced a comprehensive set of reforms to improve the efficiency, accuracy, and timeliness of the disability claims process. However, this initiative produced mixed results, and many aspects were suspended to focus on the hearings backlog and other priorities. While final decisions regarding many aspects of this reform are pending, SSA outlined a new plan in 2007 that concentrates on clearing out backlogged cases at the hearings level. GAO is currently reviewing this plan as part of its ongoing work. To address overall workloads and maintain customer service, SSA has shifted workloads to less busy offices and deferred workloads it deemed lower priority. However, deferring certain workloads, such as continuing eligibility reviews, can result in beneficiaries who no longer qualify continuing to receive payments. In response to a recent GAO recommendation, SSA agreed to develop a single service delivery plan to help it better manage future service delivery challenges. However, it remains unclear how SSA will address current and future challenges given its current service delivery infrastructure and resource constraints.
According to Census data, in 2005 an estimated 21.9 million households, or 20 percent of the 111.1 million households nationwide, were “veteran households”—that is, they had at least one member who was a military veteran. Most veteran households—about 80 percent—owned their own homes, a significantly higher percentage than was the case for other (nonveteran) households (about 64 percent). About 4.3 million veteran households rented their homes. Census data also show that renter households were more likely to be low-income than were owner-occupied households; in 2005, about 66 percent of renter households were low-income, while 32 percent of homeowners were low-income. VA, through a variety of programs, provides federal assistance to veterans who are homeless, and also provides homeownership assistance, but does not provide rental assistance. One of the agency’s largest programs for homeless veterans is the Homeless Providers Grant and Per Diem program, which provides funding to nonprofit and public agencies to help temporarily shelter veterans. VA also administers eight other programs for outreach and treatment of homeless veterans. In addition to its homelessness programs, VA provides a variety of programs, services, and benefits to veterans and their families. HUD provides rental housing assistance through three major programs—housing choice voucher, public housing, and project-based. In fiscal year 2005, these programs provided rental assistance to about 4.8 million households and paid about $28 billion in rental subsidies. These three programs generally serve low-income households—that is, households with incomes less than or equal to 80 percent of their local area median incomes. Most of these programs have targets for households with extremely low incomes—30 percent or less of their area median incomes. HUD-assisted households generally pay 30 percent of their monthly income, after certain adjustments, toward their unit’s rent.
HUD pays the difference between the household’s contribution and the unit’s rent (under the voucher and project-based programs) and the difference between the public housing agencies’ operating costs and rental receipts for public housing. According to our analysis of ACS data, of the 4.3 million veteran households that rented their homes, an estimated 2.3 million, or about 53 percent, were low-income in 2005. As shown in table 1, the largest share of these 2.3 million households was concentrated in the highest low-income category—that is, 50.1 to 80 percent of the area median income—with somewhat smaller shares in the two lower categories. The table also shows that other renter households (that is, households without a veteran member) were even more likely to be low-income than veteran renter households. The estimated numbers of low-income veteran renter households in 2005 varied greatly by state, from some 236,000 in California—the most of any state—to fewer than 6,000 in each of three states—Delaware, Vermont, and Wyoming. The percentages of veteran renter households that were low-income in 2005 also varied considerably by state, from about 65 percent in Michigan to about 41 percent in Virginia. Further details on how these figures varied by state, including maps, can be found in appendix I. In addition, a significant proportion of low-income veteran renter households included a veteran who was elderly or had a disability. Specifically, an estimated 816,000 (36 percent of these veteran households) had at least one veteran who was elderly (that is, 62 years of age or older), and 887,000 (39 percent) had at least one veteran member with a disability. According to our analysis of ACS data, an estimated 1.3 million low-income veteran households, or about 56 percent of the approximate 2.3 million such households, had rents that exceeded 30 percent of their household income in 2005 (see table 2).
These veteran renter households had what HUD terms “moderate” or “severe” problems affording their rent. Specifically, about 31 percent of low-income veteran renter households had moderate affordability problems, and about 26 percent had severe affordability problems. The remainder either paid 30 percent or less of their household income in rent, reported zero income, or did not pay cash rent. In comparison, a higher proportion of other low-income renter households had moderate or severe housing affordability problems. The extent of housing affordability problems among low-income veteran renter households varied significantly by state in 2005 (see fig. 1). The median percentage of low-income veteran renters with affordability problems nationwide was 54 percent. California and Nevada had the highest proportions of affordability problems among low-income veteran renter households—about 68 and 70 percent, respectively. North Dakota and Nebraska had the smallest—about 37 and 41 percent, respectively. A relatively small percentage of veteran households lived in overcrowded or inadequate housing in 2005. Specifically, an estimated 73,000, or 3 percent, of low-income veteran renter households lived in overcrowded housing—housing with more than one person per room—and fewer than 18,000, or about 1 percent, lived in severely overcrowded housing—housing with more than one and a half persons per room. In contrast, an estimated 1.5 million, or 7 percent, of other low-income renter households lived in overcrowded housing, and about 423,000, or 2 percent, lived in severely overcrowded housing. Finally, ACS data indicate that a very small share of low-income veteran renters lived in inadequate housing. ACS provides very limited information about the quality of the housing unit; the survey classifies a unit as inadequate if it lacks complete plumbing or kitchen facilities, or both. 
In 2005, an estimated 53,000, or 2 percent, of low-income veteran renter households lived in inadequate housing. In comparison, an estimated 334,000, or 2 percent, of other low-income renter households lived in inadequate housing. HUD’s major rental assistance programs are not required to take a household’s veteran status into account when determining eligibility and calculating subsidy amounts. (Consequently, HUD does not collect any information that identifies the veteran status of assisted households.) As with other households, veterans can benefit from HUD rental assistance provided that they meet all of the programs’ income and other eligibility criteria. For example, assisted households must meet U.S. citizenship requirements and, for some of the rental assistance programs, HUD’s criteria for an elderly household or a household with a disability. When determining income eligibility and subsidy amounts, HUD generally does not distinguish between income sources that are specific to veterans, such as VA-provided benefits, and other types of income. HUD policies define household income as the anticipated gross annual income of the household, which includes income from all sources received by the family head, spouse, and each additional family member who is 18 years of age or older. Specifically, annual income includes, but is not limited to, wages and salaries, periodic amounts from pensions or death benefits, and unemployment and disability compensation. HUD policies identify 39 separate income sources and benefits that are excluded when determining eligibility and subsidy amounts. These exclusions relate to income that is nonrecurring or sporadic in nature, health care benefits, student financial aid, and assistance from certain employment training and economic self-sufficiency programs. 
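The eligibility and subsidy rules described above reduce to simple arithmetic: a household is low-income if its countable income is at or below 80 percent of the area median, it generally contributes 30 percent of adjusted monthly income toward rent, and HUD pays the difference. The sketch below is a simplified illustration with hypothetical dollar figures; HUD's 39 income exclusions and other statutory adjustments are collapsed into single placeholder values.

```python
# Simplified sketch of the HUD income-eligibility and subsidy arithmetic
# described above. All dollar figures are hypothetical; the 39 income
# exclusions and various adjustments are collapsed into single values.

def is_low_income(countable_annual_income: float, area_median_income: float) -> bool:
    """Low-income: at or below 80 percent of the local area median income."""
    return countable_annual_income <= 0.80 * area_median_income

def is_extremely_low_income(countable_annual_income: float, area_median_income: float) -> bool:
    """Extremely low income: 30 percent or less of the area median income."""
    return countable_annual_income <= 0.30 * area_median_income

def monthly_hud_subsidy(adjusted_monthly_income: float, unit_rent: float) -> float:
    """Household pays 30 percent of adjusted monthly income toward rent;
    HUD pays the difference (never less than zero)."""
    tenant_contribution = 0.30 * adjusted_monthly_income
    return max(unit_rent - tenant_contribution, 0.0)

# Hypothetical household: $24,000 gross annual income, of which $2,000 is
# excluded (e.g., a nonrecurring payment), in an area with a $50,000
# median income, renting a $900-per-month unit.
countable = 24_000 - 2_000
print(is_low_income(countable, 50_000))            # True  (22,000 <= 40,000)
print(is_extremely_low_income(countable, 50_000))  # False (22,000 > 15,000)
print(round(monthly_hud_subsidy(countable / 12, 900.0), 2))  # tenant pays ~$550, HUD ~$350
```

The second check matters for admissions: the statutory targeting requirements mentioned above direct a share of new participants to extremely low-income households, so a household can be income-eligible, as in this example, yet fall outside the targeted group.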
We found that, based on HUD’s policies on income exclusions, most types of income and benefits that veteran households receive from VA would be excluded when determining eligibility for HUD’s programs and subsidy amounts. Many of the excluded benefits relate to payments that veteran households receive under certain economic self-sufficiency programs or nonrecurring payments such as insurance claims. Of the benefits included, most are associated with recurring or regular sources of income, such as disability compensation, pensions, and survivor death benefits. Of the 39 exclusions, two specifically applied to certain veteran households, but, according to HUD, these exclusions are rarely used. These income exclusions are (1) payments made to Vietnam War-era veterans from the Agent Orange Settlement Fund and (2) payments to children of Vietnam War-era veterans who suffer from spina bifida. The two exclusions are identified in federal statutes that are separate from those authorizing the three major rental assistance programs. HUD does provide rental assistance vouchers specifically to veterans under a small program called the Housing and Urban Development-Veterans Affairs Supportive Housing program (HUD-VASH). Established in 1992, HUD-VASH is jointly funded by HUD and VA and offers homeless veterans an opportunity to obtain permanent housing, as well as ongoing case management and supportive services. HUD allocated these special vouchers to selected public housing agencies that had applied for funding, and VA was responsible for identifying participants based on specific eligibility criteria, including the veteran’s need for treatment of a mental illness or substance abuse disorder. Under the HUD-VASH initiative, HUD allocated 1,753 vouchers from fiscal years 1992 through 1994. 
HUD funded these vouchers for 5 years and, if a veteran left the program during this period, the housing agency had to reissue the voucher to another eligible veteran. According to VA officials, after the 5-year period ended, housing agencies had the option of continuing to use their allocation of vouchers for HUD-VASH, or could discontinue participation whenever a veteran left the program (that is, the housing agency would not provide the voucher to another eligible veteran upon turnover). VA stated that after the 5-year period ended, many housing agencies decided not to continue in HUD-VASH after assisted veterans left the program; instead, housing agencies exercised the option of providing these vouchers to other households under the housing choice voucher program. As a result, the number of veterans who receive HUD-VASH vouchers has declined. Based on information from VA, about 1,000 veterans were in the program as of the end of fiscal year 2006, and absent any policy changes, this number is likely to decline to about 400 because housing agencies responsible for more than 600 vouchers have decided not to continue providing these vouchers to other veterans as existing participants leave the program. Congress statutorily authorized HUD-VASH as part of the Homeless Veterans Comprehensive Assistance Act of 2001. Under the act, Congress also authorized HUD to allocate 500 vouchers each fiscal year from 2003 through 2006—a total of 2,000 additional vouchers. In December 2006, Congress extended this authorization through fiscal year 2011, allocating an additional 2,500 vouchers, or 500 each year. However, HUD has not requested, and Congress has not appropriated, funds for any of the vouchers authorized from fiscal years 2003 through 2007. Currently, HUD’s policies give public housing agencies and owners of project-based properties the discretion to establish preferences for certain groups when selecting households for housing assistance. 
Preferences affect only the order of applicants on a waiting list for assistance; they do not determine eligibility for housing assistance. Before 1998, federal law required housing agencies and property owners to offer a preference to eligible applicants to their subsidized housing programs who (1) had been involuntarily displaced, (2) were living in substandard housing, or (3) were paying more than half their income for rent. Public housing agencies were required by law to allocate at least 50 percent of their public housing units and 90 percent of their housing choice vouchers to applicants who met these criteria. Similarly, project-based owners had to allocate 70 percent of their units to newly admitted households that met these criteria. The Quality Housing and Work Responsibility Act of 1998 (QHWRA) gave more flexibility to housing agencies and project-based property owners to administer their programs, in part by eliminating the mandated housing preferences. Although it gave housing agencies and owners more flexibility, QHWRA required that public housing agencies and owners target assistance to extremely low-income households. Under QHWRA, housing agencies and owners of project-based properties may, but are not required to, establish preferences to better direct resources to those with the greatest housing needs in their areas. Public housing agencies can select applicants on the basis of local preferences provided that their process is consistent with their administrative plan. HUD policy requires housing agencies to specify their preferences in their administrative plans, and HUD reviews these preferences to ensure that they conform to nondiscrimination and equal employment opportunity requirements. Similarly, HUD policy allows owners of project-based properties to establish preferences as long as the preferences are specified in their written tenant selection plans. 
While HUD requires housing agencies and property owners to disclose their preferences in their administrative or tenant selection plans, HUD officials said the department does not compile or systematically track this information because public housing agencies and property owners are not required to have preferences. Most of the 41 public housing agencies we contacted used a preference system for admission to their public housing and housing choice voucher programs, but less than half offered a veterans’ preference. As shown in table 3, of the 34 largest housing agencies that administered the public housing program, 29 established preferences for admission to the program and 14 used a veterans’ preference. Similarly, of the 40 housing agencies that administered the housing choice voucher program, 34 used admission preferences, and 13 employed a preference for veterans. According to public housing agency officials, the most common preferences used for both programs were for working families, individuals who were unable to work because of age or disability, and individuals who had been involuntarily displaced or were homeless. Of course, veterans could benefit from these admission preferences if they met the criteria. Some of the public housing agencies we contacted offered veterans’ preferences because their states required them to do so. Other housing agency officials told us they offered a veterans’ preference because they believed it was important to serve the needs of low-income veterans since they had done so much for the well-being of others. Public housing agencies that we contacted that did not offer a veterans’ preference gave various reasons for their decisions. Some officials told us that the housing agency did not need a veterans’ preference because veteran applicants generally qualified under other preference categories, such as elderly or disabled. 
One housing agency official we contacted said a veterans’ preference was not needed because of the relatively small number of veterans in the community. According to all of the performance-based contract administrators we contacted, owners of project-based properties that they oversee generally did not employ a veterans’ preference when selecting tenants. Ten of the 13 largest contract administrators told us, based on their review of property owners’ tenant selection plans, that owners of project-based properties generally did not employ preferences for any specific population. Officials from the remaining three contract administrators said they were aware of some property owners offering preferences to individuals who had been involuntarily displaced, working families, or those unable to work because of age or disability. However, all the contract administrators we contacted either said that property owners did not use preferences or agreed that the use of preferences, including a veterans’ preference, among owners of properties with project-based assistance was limited. HUD officials to whom we spoke also stated, based on their experience with tenant selection plans, that the use of preferences at project-based properties likely was infrequent. Low-income veteran renter households were less likely to receive HUD rental assistance than other households. As shown in table 4, of the total 2.3 million veteran renter households with low incomes, about 250,000 (or 11 percent) received HUD assistance. In comparison, of the 22 million other renter households with low incomes, 4.1 million (about 19 percent) received HUD assistance. (As noted previously, although HUD is the largest provider of federal rental housing assistance to low-income households, it is not the sole source of such assistance. Thus, these percentages likely understate the actual share of all eligible veteran renter households that receive federal rental assistance.) 
The reasons why other households were nearly twice as likely as veteran households to receive HUD assistance are unclear. However, based on our analyses and discussions with agency officials, we identified some potential explanations. For example: As previously noted, although a significant proportion of low-income veteran households face affordability problems, an even larger proportion of other (nonveteran) households face more severe affordability problems. Thus, the level of veteran demand for rental assistance may be lower than that of nonveteran households. Also as previously noted, HUD rental assistance programs do not take veteran status into account when determining eligibility, and most public housing agencies and property owners do not offer veterans’ preferences. As a result, these policy decisions likely focus resources on other types of low-income households with housing needs. Although low-income households generally are eligible to receive rental assistance from HUD’s three programs, statutory requirements mandate that a certain percentage of new program participants must be extremely low income. These targeting requirements may lead to a higher share of HUD rental assistance going to nonveteran households because veteran households generally are less likely to fall within the extremely low-income category. The estimated 250,000 veteran households that received HUD rental assistance in 2005 constituted about 6 percent of all HUD-assisted households. The housing choice voucher program served the largest number of veteran households, followed by the project-based program and public housing (see fig. 3). However, a slightly higher proportion of veteran households participated in the public housing program (6.9 percent) than in the voucher (5.7 percent) and project-based (5.2 percent) programs. We found some similarities in the demographic characteristics of veteran and other assisted households we analyzed. 
For example: Compared with other assisted households, HUD-assisted veteran households were about as likely to be elderly. Specifically, in fiscal year 2005, about 75,000, or 30 percent, of assisted veteran households were elderly, and about 1.3 million, or 31 percent, of other assisted households were elderly. HUD-assisted veteran households were more likely to have a disability. In fiscal year 2005, HUD provided assistance to about 88,000 veteran households with a disability, or about 34 percent of assisted veteran households. In comparison, 1.2 million, or 28 percent, of other assisted households had a disability. Our August 2007 report contains additional information on the demographic and income characteristics of veteran and nonveteran households, as well as the extent to which HUD programs take veteran status into account when determining eligibility and subsidy amounts. Madam Chairwoman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact David G. Wood at (202) 512-8678 or [email protected]. Contact points from our Office of Congressional Relations may be found on the last page of this statement. Individuals making key contributions to this testimony included Marianne Anderson, Michelle Bowsky, Daniel Garcia-Diaz, John T. McGrail, Josephine Perez, and Rose Schuville. The estimated numbers of low-income veteran renter households in 2005 varied greatly by state, as shown in figure 4. The estimated median number of low-income veteran renters in any state was about 34,000. California had significantly more low-income veteran renter households than any other state—more than 236,000, or about 10 percent of all such households nationwide—followed by Texas with about 142,000, and New York with about 135,000. The states with the smallest numbers of low-income veteran households were Vermont, Delaware, and Wyoming, with fewer than 6,000 each. 
As shown in figure 5, the percentages of veteran renter households that were low-income in 2005 also varied considerably by state. Michigan had the highest percentage—about 65 percent of its veteran renter households were low income, while Virginia had the lowest—about 41 percent.
Veterans returning from service in Iraq and Afghanistan could increase demand for affordable rental housing. Households with low incomes (80 percent or less of the area median income) generally are eligible to receive rental assistance from the Department of Housing and Urban Development's (HUD) housing choice voucher, public housing, and project-based programs. However, because rental assistance is not an entitlement, not all who are eligible receive assistance. This testimony, based on a 2007 report, discusses (1) the income status and demographic and housing characteristics of veteran renter households, (2) how HUD's rental assistance programs treat veteran status (whether a person is a veteran or not) and whether they use a veterans' preference, and (3) the extent to which HUD's rental assistance programs served veterans in fiscal year 2005. The 2007 report discussed in this testimony made no recommendations. In 2005, an estimated 2.3 million veteran renter households had low incomes. The proportion of veteran renter households that were low income varied by state but did not fall below 41 percent. Further, an estimated 1.3 million, or about 56 percent of these low-income veteran households nationwide, had housing affordability problems--that is, rental costs exceeding 30 percent of household income (see map for state percentages). Compared with other (nonveteran) renter households, however, veterans were somewhat less likely to be low income or have housing affordability problems. HUD's major rental assistance programs are not required to take a household's veteran status into account when determining eligibility and calculating subsidy amounts, but eligible veterans can receive assistance. The majority of the 41 largest public housing agencies that administer the housing choice voucher or public housing programs had no veterans' preference for admission. 
The 13 largest performance-based contract administrators that oversaw most properties under project-based programs reported that owners generally did not adopt a veterans' preference. In fiscal year 2005, an estimated 11 percent of all eligible low-income veteran households (at least 250,000) received assistance, compared with 19 percent of nonveteran households. Although the reasons for the difference are unclear, factors such as differing levels of need for affordable housing among veteran and other households could influence the percentages.
The departments of the Army, Navy, and Air Force have a variety of statutory authorities that allow them to accept payments in the form of in-kind renovation or construction of facilities. For example, in fiscal year 2013, the military departments used authority provided by sections 2667 and 2668 of Title 10 of the United States Code to lease or issue easements relating to domestic real property under their control or jurisdiction in exchange for payment in the form of in-kind construction or renovation projects. A description of these authorities is provided in table 1. For leases, the services are required under section 2667 of Title 10 to receive payments in an amount that is not less than the fair-market value of the property interest, as determined by the military department Secretary. Concerning cash payments received in exchange for leases and easements, section 2667 provides that, generally, money rentals must be deposited into a special account in the U.S. Treasury and must be appropriated before they can be used. Once the funds are appropriated, section 2667 provides that at least 50 percent shall be available for use only at the installation where the leased property is located. Overseas in-kind payment projects are subject to specific bilateral agreements and statutory authorities. Bilateral agreements include efforts to relocate U.S. forces and consolidate infrastructure being used by DOD. There are also other agreements between the United States and host nations to defray some of the costs of stationing U.S. forces overseas, and to support installations that will continue to be used by the United States—known as enduring installations—through the use of host-nation resources. According to DOD real-property management officials, installation personnel are generally responsible for selecting domestic in-kind payment projects based on the needs of the installation and its chain of command. 
Each installation has a list of unfunded construction and renovation projects that were not included in the military services’ budget submissions to the Office of the Secretary of Defense. The military services generally allow each installation to decide which projects from these lists should be considered for in-kind payment projects. The responsibility for managing in-kind payment projects is generally shared between installation personnel and the services’ respective real-property management offices or agencies. Within the Navy, the Secretary of the Navy has delegated certain real-property management responsibilities to Naval Facilities Engineering Command. Specifically, Naval Facilities Engineering Command, subject to certain requirements, is authorized to grant, execute, amend, administer, and terminate all instruments granting the use of Navy-controlled real property, to include real-estate transactions using in-kind payment projects. Within the Air Force, the Secretary of the Air Force has delegated certain real-property management responsibilities to the Air Force Civil Engineer Center. The Air Force Civil Engineer Center is responsible for acquiring, disposing of, and managing all Air Force-controlled real property. Within the Army, the Chief of Engineers is the principal advisor to the Secretary of the Army for policy formulation related to real property. According to Army officials, the Secretary of the Army generally has delegated responsibilities to the Army Corps of Engineers for execution of a variety of real-estate transactions, including transactions that include in-kind payments. DOD reported that the military services initiated 137 in-kind payment projects in Asia, Germany, and the United States during fiscal year 2013 with an estimated value of at least $1.8 billion. In Asia, DOD reported initiating 105 in-kind construction and renovation projects with a total value that DOD estimated to be at least $1.6 billion. 
Of the 105 projects in Asia, 31 are in Korea with an estimated value of $1.57 billion and 74 are in Japan—32 of which have an estimated value of $264 million. In Germany, DOD reported initiating 3 in-kind payment projects with a total value that DOD estimated to be almost $20.7 million. For domestic locations, DOD reported initiating 29 in-kind construction and renovation projects with a total value that DOD estimated to be $18.6 million. Of the 29 domestic projects, the Navy initiated 22 and the Air Force initiated 7. The Army and Marine Corps did not initiate any domestic in-kind payment projects in fiscal year 2013. Table 2 summarizes the number and value of in-kind payment projects by location and also highlights the most frequently reported purpose for which the projects were used. Appendixes II through V provide more detailed information for each in- kind payment project initiated by DOD in fiscal year 2013. The military services’ real-property management officials reported as many as four advantages to accepting in-kind payment projects rather than cash payments in domestic real-estate transactions, and officials from the three services reported one disadvantage. Installations can generally obtain facilities more quickly. According to officials of the Army Corps of Engineers, Air Force Civil Engineer Center, and Naval Facilities Engineering Command, in-kind payment projects can be advantageous because the value received does not need to be re- appropriated and can be immediately available to the installation. While congressional notification is required if the estimated annual fair-market value of a lease or easement exceeds $750,000, the law does not require appropriation of the funding for in-kind payment projects, so the benefits may be realized sooner. 
For example, in 2008 at Nellis Air Force Base, Nevada, the Air Force entered into an agreement with the city of North Las Vegas to allow construction of a wastewater treatment plant on 41 acres of land leased from Nellis. As part of its payment to the base, the city agreed to fund the construction of a new $27 million fitness center. According to Air Force Civil Engineer Center officials, the base had a longstanding requirement for an updated fitness center but was unable to secure funding for it. Once the agreement was signed, the city immediately began construction, and the base was able to open the new fitness center in 2012. By contrast, if an installation receives cash payments, section 2667 requires the immediate deposit of all cash into a special account in the U.S. Treasury, where it is subject to the appropriation process before the installation can use it. Officials stated that it can take up to a year between the deposit of the cash payments and the funds being provided to the installation. The originating installation is more likely to receive 100 percent of the negotiated payments for the real property interest. According to Army Corps of Engineers, Air Force Civil Engineer Center, and Naval Facilities Engineering Command officials, although the receipt of in-kind payment projects is at the discretion of the respective military department Secretary, the military department secretaries generally allow the installations where the real property is located to receive 100 percent of any negotiated in-kind payments resulting from real property agreements. Conversely, section 2667 states that, subject to appropriations, installations are guaranteed to receive only 50 percent of any deposited cash payments, and can receive the other 50 percent only at the discretion of the respective military department secretary. 
In-kind payments are not governed by similar restrictions and, as such, are more likely to result in the installation receiving 100 percent of the value generated by its real-estate transactions. Installations may have the opportunity to receive infrastructure improvements worth more than the fair-market value of the property interest. Air Force Civil Engineer Center and Naval Facilities Engineering Command officials stated that the nature of a potential developer’s business or expertise and economies of scale may allow the developer to provide in-kind payment projects worth more than the fair-market value of the property interest by obtaining construction materials or labor at a below-market cost. For example, in 2004 the Navy entered into an agreement with a real-estate development firm to redevelop the Moanalua Shopping Center at the Pearl Harbor Naval Complex, Pearl Harbor, Hawaii. As part of the agreement, the Navy conveyed the existing shopping center to the developer as well as development rights for up to another 15,000 square feet of new commercial market space. As payment to the Navy, the developer was required to demolish over 40,000 square feet of existing space and to develop 40,000 square feet of new administrative space to be used as a Navy community support services center. A 2004 Navy analysis valued these demolition and construction projects at about $21 million—about $8 million more than the appraised market value of the property conveyed by the Navy. Installation officials generally have more flexibility in real-estate transactions. According to officials of the Air Force Civil Engineer Center, in-kind payment projects offer installation officials greater flexibility in executing real-estate transactions, especially in cases where a developer does not have cash readily available at the time the real-estate transaction is executed. 
Also, allowing in-kind payment projects can increase the potential pool of developers, the officials stated. For example, in-kind payment projects may be the only way to secure adequate value from non-profit or charitable organizations that may not have adequate cash resources but have the ability to provide goods and services. These same military service officials reported that one common disadvantage to accepting in-kind payment projects rather than cash payments is the amount of additional administrative work and oversight needed to execute the in-kind agreements. Army Corps of Engineers, Air Force Civil Engineer Center, and Naval Facilities Engineering Command officials identified three scenarios in which in-kind payment projects likely will require additional work and oversight. Additional work and oversight to ensure that the transaction complies with statutory requirements. Several statutes govern the receipt of in- kind payment projects, so installation personnel conduct reviews to ensure that the transactions follow appropriate financial and administrative procedures as required by law. Additional work and oversight to ensure that the developer is complying with the terms of the transaction. For example, installation personnel work to properly monitor construction and renovation progress to ensure that the developer is adequately providing any agreed-upon services over the terms of the agreement. Additional work and oversight to ensure that in-kind payment projects are properly valued over time. Some transactions involving in-kind payment projects, such as enhanced-use leases, can last as long as 50 years and the in-kind payment projects may continue over these 50 years. As a result, installation personnel will be responsible for the extra work to value in-kind payment projects during the 50-year lease. By contrast, cash value is known and requires no extra work to value. 
To address the disadvantage of accepting in-kind versus cash payments, some of the military services have implemented policy changes for accepting cash. For example, the Army has issued guidance stating that cash payments are preferred over in-kind payment projects. According to the Army memorandum, leases typically involve only cash payments because procedures to properly value and assure receipt of in-kind payment projects over a lease’s terms can be administratively burdensome. This memorandum was issued partly in response to our 2011 work that found problems with the adequacy of those procedures. Army officials stated that in-kind payment projects may still be accepted, however, if installation officials determine that accepting them is in the best interest of the installation and they document the basis for that determination. Approaching the disadvantage from a different angle, the Air Force issued guidance that requires the return of 100 percent of the net proceeds from cash payments—up to $1 million—to the originating installation. This policy offsets one advantage of in-kind payment projects (the greater assurance that the originating installation receives 100 percent of the negotiated value of the real-property interest), because the originating installation now receives the full amount of any cash payment up to $1 million. According to Air Force officials, the policy was implemented not only to eliminate some of the administrative burden of accepting in-kind payment projects but also to reward installations for finding ways to reduce infrastructure costs by identifying alternative funding sources to military construction appropriations. 
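The cash-allocation rules described in this section reduce to simple arithmetic. The sketch below, using hypothetical dollar amounts, models only what the report describes: the 50-percent statutory guarantee for deposited cash under section 2667, the Air Force policy of returning net cash proceeds up to $1 million to the originating installation, and the general practice of crediting 100 percent of negotiated in-kind value to the installation.

```python
# Sketch of the cash-allocation rules discussed above; dollar amounts
# are hypothetical.

def section_2667_guaranteed_share(deposited_cash: float) -> float:
    """Under 10 U.S.C. 2667, subject to appropriation, an installation is
    guaranteed only 50 percent of deposited cash payments; the remainder
    is at the military department secretary's discretion."""
    return 0.50 * deposited_cash

def air_force_cash_return(net_proceeds: float) -> float:
    """Air Force guidance: 100 percent of net cash proceeds, up to
    $1 million, is returned to the originating installation."""
    return min(net_proceeds, 1_000_000.0)

def in_kind_value_to_installation(negotiated_value: float) -> float:
    """In-kind payment projects generally deliver 100 percent of the
    negotiated value to the originating installation."""
    return negotiated_value

payment = 600_000.0
print(section_2667_guaranteed_share(payment))  # 300000.0 guaranteed; rest discretionary
print(air_force_cash_return(payment))          # 600000.0 (below the $1 million cap)
print(in_kind_value_to_installation(payment))  # 600000.0
print(air_force_cash_return(1_500_000.0))      # 1000000.0 (capped)
```

The comparison makes the policy's effect visible: for transactions at or below $1 million, the Air Force cash-return rule yields the installation the same amount as an in-kind payment, which is why it offsets that particular in-kind advantage.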
Our review of various service policies governing real-estate transactions identified that all three services have issued guidance requiring that the cumulative value of in-kind payment projects reflect the fair-market value of the real-property interest in real-estate transactions and follow similar procedures to value domestic in-kind payment projects to ensure the receipt of fair-market value. Based, in part, on prior GAO recommendations, each of the services has issued guidance to require the determination of fair-market value for real-property assets included in real-estate transactions and has provided instructions on how to determine the fair-market value. For example, to determine the fair-market value of the real-property interest, the Army, Air Force, and Navy use certified real-estate appraisers to determine the full market value of their real-property interests and require that the appraisals be conducted in accordance with the Uniform Standards of Professional Appraisal Practice prior to the finalization of any real-estate transaction. However, our review of service-level guidance governing real-estate transactions did not identify any specific steps to follow for valuing in-kind payment projects. Officials from the offices of the Army Corps of Engineers, Air Force Civil Engineer Center, and Naval Facilities Engineering Command confirmed that no specific guidance is available, but described similar broad procedures and documentation requirements that they used for determining the value of their in-kind payment projects. Officials indicated that installation public-works personnel, with the aid as needed of the respective service's regional real-property management office, generally were responsible for determining the value of the in-kind payment projects.
In some cases, installation personnel prepare formal requirements documents, such as a DOD form 1391 or some other form of a statement of work, or an independent government cost estimate to document the value of the in-kind work to be performed. In other cases, installation personnel review and validate a contractor's cost estimates for valuing certain projects instead of obtaining an independent cost estimate. Once the cost estimate is developed, installation public-works personnel, with the aid as needed of the respective service's regional real-property management office, use the cost estimate to negotiate with the lessee for the completion of the work. If the developer and installation officials agree on the value of the work and that value is within the range of fair-market value of the real-estate interest, the parties execute the agreement to document the services to be performed as an in-kind payment project in lieu of cash. Table 3 describes the 29 domestic projects and lists which type of procedures the service used to document the value of the in-kind payment projects. To confirm that the installations used the cost-estimating methods that service officials described for valuing in-kind payment projects, we collected and reviewed the documentation available that the Air Force and Navy used to value the domestic in-kind payment projects initiated during fiscal year 2013. All 29 projects had documentation available showing how the cost estimates were established and that the services had processes in place to value in-kind payment projects.

We are not making any recommendations in this report. We provided DOD with a draft of this report for review. DOD provided technical comments on our findings, which we have incorporated where appropriate.
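The check described above, confirming that the cumulative value of proposed in-kind work falls within the appraised fair-market-value range of the real-property interest, can be sketched as follows. This is a minimal illustration only; the function name, parameters, and dollar figures are hypothetical and are not drawn from DOD or service guidance.

```python
# Illustrative sketch: all names and figures are hypothetical, not DOD's.
def within_fair_market_range(in_kind_estimates, appraised_low, appraised_high):
    """Sum the cost estimates for proposed in-kind projects and report
    whether the cumulative value falls within the appraised
    fair-market-value range for the real-property interest."""
    total = sum(in_kind_estimates)
    return total, appraised_low <= total <= appraised_high

# Three hypothetical in-kind projects offered against a $1.0M-$1.2M appraisal.
total, acceptable = within_fair_market_range(
    [400_000, 450_000, 250_000],
    appraised_low=1_000_000,
    appraised_high=1_200_000,
)
print(total, acceptable)  # 1100000 True
```

In practice, as the report notes, the estimates feeding such a comparison come from requirements documents or validated contractor estimates, and the negotiation proceeds only if the parties agree on the underlying values.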
We are sending copies of this report to appropriate congressional committees and to the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Director of the Office of Management and Budget. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

To identify the in-kind payment projects that DOD reported it initiated during fiscal year 2013, we requested information and compiled data on in-kind construction and renovation projects for fiscal year 2013, their estimated value in U.S. dollars, the source of the in-kind payment project, the agreement or statutory authority, and their purpose and need from DOD components—U.S. Pacific Command; U.S. European Command; the Departments of the Army, Navy (including the Marine Corps), and Air Force. To determine whether a project was an in-kind payment project, we defined “in-kind construction and renovation projects” in this report as those resulting from certain host-nation support programs or from transactions (whether domestic or overseas) in which DOD provides goods, services, real property, or an interest in real property (including, but not limited to, a leasehold or easement) in exchange for compensation, and in which any part of that compensation is provided in the form of construction or renovation services.
This definition of “in-kind construction and renovation projects,” which was also used in our 2014 report, is broader than the concept of in-kind “payments” received for residual value or in lieu of cash compensation as part of a domestic agreement with a third party because the definition includes host-nation support for installation facilities overseas. Voluntary contributions made by a host nation for the purpose of defraying costs to station, maintain, and train U.S. military forces in its country do not constitute a payment or obligate a host nation to make payments to the United States. We excluded from our definition of “in-kind construction or renovation projects” gifts, sustainment projects (e.g., regularly scheduled maintenance and inspections), and cash sales or rent used to finance construction or renovation. To determine that a project was initiated in 2013, we also reused our 2014 report’s definition of “initiated” as that point at which the party responsible for completing the construction or renovation project received an official notice that allowed it to proceed with the project. To summarize the purposes of the projects, we used the real-property system classification code reported by DOD for each project as part of its data submission to us. The first digit of the code represents the facility class, and we used the facility class to represent the purpose of the projects. We corroborated the project data that each DOD component submitted by requesting that the component provide the supporting documentation from which the data were obtained. We attempted to obtain supporting documentation for all the data elements for all of the domestic and overseas projects. We were able to obtain corroborating documentation for all of the domestic projects and the projects in Germany.
The Office of the Assistant Secretary of Defense for Energy, Installations and Environment also reviewed the information the DOD components provided to GAO on in-kind construction and renovation projects for fiscal year 2013 for consistency and to ensure that the data were for projects that we considered “in-kind payment projects.” Lastly, after we compiled the project data, we provided each component with an additional opportunity to review and confirm the data. By taking these steps, we determined that the reported project data for the domestic projects and the projects in Germany were generally sufficiently reliable for the purposes of this report. However, numerous projects in Korea and Japan were executed under bilateral agreements or as part of voluntary host-nation programs in which the host nation manages the programming and execution processes; supporting documentation for these projects was not readily available in English, or the specific project costs incurred by the Governments of Korea and Japan were, according to DOD officials, not required to be disclosed to the United States. In the absence of supporting documentation, we asked DOD to describe, to the extent possible, the source or calculation method for the estimated value of the project cost, the initiation date, and the real-property system classification code. However, DOD officials had a limited explanation of the source of some of this basic project information because of international agreements and host-nation-requested relocation projects that, according to DOD officials, do not require disclosure of specific costs by the host nation. For instance, DOD officials reported that for the 74 in-kind payment projects in Japan, they did not know the specific basis used by Japanese officials for calculating the value for 32 of the projects and did not have any information on the value for another 42 projects.
The report notes instances where we were unable to determine the reliability of reported data for various projects in Japan and Korea. Appendixes II through V provide a listing and detailed information for each in-kind payment project initiated by DOD in fiscal year 2013.

To describe the potential advantages and disadvantages of accepting in-kind payments instead of cash for domestic real-estate transactions, we reviewed the services’ policies and procedures regarding the types of payments (cash or in-kind payment project) to be included in real-estate transactions and Army and Air Force guidance that informs when cash or in-kind payment projects are advantageous or not advisable. We also interviewed officials from the Army Corps of Engineers, Air Force Civil Engineer Center, and Naval Facilities Engineering Command to discuss the factors they consider in determining whether to accept cash or in-kind payment projects, the rationale for preferring one type of payment (cash or in-kind payment project) over another, and the advantages and disadvantages in terms of administration and costs of executing cash versus in-kind payment projects. To identify the extent to which the military services have developed and implemented guidance and procedures to value in-kind payment projects to ensure the receipt of fair-market value for domestic projects, we compared the military services’ guidance governing real-estate transactions to DOD’s guidance for leasing and obtaining fair-market value of its real-estate interest. We then reviewed the military services’ established policies and procedures where available to determine if their procedures would ensure receipt of fair-market value.
Furthermore, we obtained and reviewed the in-kind payment project agreements and supporting documentation, such as the lease exhibits, site and task work orders, and the real-estate appraisal for the 29 domestic in-kind payment projects initiated during fiscal year 2013 to review whether the military services were following the established procedures they described for valuing these specific in-kind payment projects. In addition to the previously mentioned DOD offices, we also interviewed officials from the Office of the Assistant Secretary of Defense for Energy, Installations and Environment; the Army Office of the Assistant Chief of Staff for Installation Management; the Office of the Assistant Secretary of the Air Force (Installations, Environment and Logistics); and the Office of the Assistant Secretary of the Navy (Energy, Installations and Environment) to determine the procedures and documentation requirements used by the services to value in-kind payment projects.

We conducted this performance audit from August 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

DOD reported that the military services initiated 31 in-kind payment projects involving construction or renovation in Korea in fiscal year 2013 with a value of about $1.6 billion. The projects support various DOD purposes throughout the Republic of Korea, but mostly for construction of new facilities at United States Army Garrison Humphreys. Most of the projects (23 of 31) initiated in Korea were either for improvements to or the construction of operations and training facilities (14 projects) or for housing and community facilities (9 projects).
Table 4 provides a summary of the intended purposes and value of in-kind payment projects initiated in Korea in fiscal year 2013. We found that the reported values for 26 of the 31 projects were of undetermined reliability because of a lack of documentation. Table 5 provides specific information on the military service responsible for the project, location, purpose, type of agreement, source of in-kind payment project, and estimated value for the 31 in-kind payment projects involving construction or renovation initiated in Korea during fiscal year 2013. DOD reported that the military services initiated 74 in-kind payment projects involving construction or renovation in Japan in fiscal year 2013, 32 of which were valued at $264 million. The projects supported various purposes for the U.S. military’s presence in the country. A majority of the projects initiated in Japan (39 of 74) were for improvements to or construction of utility and ground improvement infrastructure (23 projects) or for operations and training facilities (16 projects). Table 6 provides a summary of the intended purposes, value, and number of in-kind payment projects initiated in Japan in fiscal year 2013. We found that the reported estimated values for the in-kind payment projects in Japan were of undetermined reliability because of a lack of documentation. Table 7 provides specific information on the military service responsible for the project, location, purpose, type of agreement, source of in-kind payment project, and estimated value for the 74 in-kind payment projects involving construction or renovation initiated in Japan during fiscal year 2013. DOD reported that the Army initiated 3 in-kind construction and renovation projects in Germany with a value of over $20 million. 
The in-kind payment projects in Germany were initiated as compensation from the government of Germany to the United States for improvements the United States made to facilities that were being returned to the government of Germany. Table 8 provides specific information on the military service responsible for the project, location, purpose, type of agreement, source of in-kind payment project, and estimated value for the three in-kind payment projects involving construction or renovation initiated by the Army in Germany during fiscal year 2013.

For the projects underway at U.S. locations, DOD reported that the military services initiated 29 in-kind construction and renovation projects with a total value estimated by DOD of $18.6 million. The Navy had the most in-kind payment projects (22), and all resulted from the lease of DOD property at Navy bases in Virginia, California, Florida, and West Virginia. A majority of the Navy projects (12) were for either improvements to or construction of research, development, test, and evaluation facilities (6 projects) or maintenance and production facilities (6 projects). The Air Force initiated seven in-kind payment projects, all resulting from either granting an easement or leasing DOD property at Eglin Air Force Base. All of the Air Force projects were for either improvements to or construction of administrative facilities or utility and ground infrastructure. The Army and Marine Corps did not initiate any domestic in-kind payment projects in fiscal year 2013. Table 9 provides a summary of the intended purposes and value of in-kind payment projects initiated in the United States in fiscal year 2013. Table 10 provides specific information on the military service responsible for the project, location, purpose, agreement, type of agreement, and estimated value for the 29 in-kind payment projects involving construction or renovation initiated in the United States during fiscal year 2013.
GAO staff members who made key contributions to this report were Laura Durland, Assistant Director; Bonita Anderson; Shawn Arbogast; Pat Donahue; Dave Keefer; Richard Powelson; and Michael Willems.
DOD uses in-kind payments domestically and overseas in its real-estate transactions as an alternative to appropriated funds to help manage a global real-property portfolio that includes more than 555,000 facilities worldwide. In-kind payments refer to DOD receiving construction and renovation services rather than cash as payment for DOD providing goods, services, real property, or an interest in real property. The National Defense Authorization Act for Fiscal Year 2013 includes a provision for GAO to review the use of in-kind projects. This report identifies (1) the in-kind payment projects that DOD reported it initiated during fiscal year 2013, and discusses the potential advantages and disadvantages of accepting in-kind payments instead of cash for domestic real-estate transactions; and (2) the extent to which the military services have developed and implemented guidance and procedures to value domestic in-kind payments to ensure the receipt of fair-market value. The Act also provided for a listing of facilities constructed or renovated with the use of in-kind payments, and additional information, which GAO provides in appendixes to this report. To conduct this work, GAO collected in-kind project data from the military services, reviewed DOD and military service policy and project documentation, and interviewed military officials. GAO is not making recommendations in this report. DOD provided technical comments on the findings, which GAO has incorporated where appropriate. The Department of Defense (DOD) reported to GAO that 137 in-kind projects involving construction or renovation, valued at about $1.8 billion, were initiated in Korea, Japan, Germany and the United States during fiscal year 2013. In-kind payments involve non-cash options, such as renovating or constructing a facility. 
The table below summarizes the number and value (estimated costs to be incurred) of the projects by country and highlights the most frequently reported purpose for which the projects were used.

Number, Value, and Purpose of In-Kind Construction and Renovation Projects Initiated by DOD during Fiscal Year 2013
Source: GAO summary of Department of Defense (DOD) data. | GAO-15-649
(a) Subtotals for Korea and Japan and totals for DOD are of undetermined reliability because supporting documentation was not readily available in English, or, according to DOD officials, the costs incurred by the Governments of Korea and Japan were not required to be disclosed to the United States.

The military services' real-estate and real-property management officials discussed four potential advantages to accepting in-kind payments rather than cash payments in domestic real-estate transactions, and identified one potential disadvantage. The reported advantages included generally obtaining facilities more quickly than through the appropriations process and installations receiving 100 percent of the value of negotiated payments received from a real-property interest, as opposed to cash payments, where only 50 percent of the value is guaranteed to be provided back to the installation. However, some services have implemented policy changes, such as returning 100 percent of the value of the real-property interest back to the installation (up to $1 million) when accepting cash payments. One reported disadvantage was that in-kind payments may require additional administrative work and oversight compared to cash payments. GAO's review of service policies governing real-estate transactions identified that all services have issued guidance requiring that the cumulative value of in-kind projects reflect the fair-market value of the real-property interest in real-estate transactions.
Each of the services reported similar broad procedures for valuing in-kind payment projects, and GAO's review of the documentation for the projects initiated in fiscal year 2013 confirmed that the installations were following the described procedures.
Each insurance company is chartered under the laws of a single state, known as its state of domicile. Although an insurance company can conduct business in multiple states, the regulator in the insurer’s state of domicile is its primary regulator. States in which an insurer is licensed to operate, but in which it is not chartered, typically rely on the company’s primary regulator in its state of domicile to oversee the insurer. Regarding Year 2000 issues, NAIC has emphasized this approach by encouraging each state to focus its Year 2000 oversight efforts on its domiciliary companies. In total, state-regulated insurance entities wrote an estimated $895.2 billion in direct premiums sold nationally during 1998. Life/health and property/casualty insurance companies represent the key industry segments, accounting for 85 percent of the total direct premiums written in that year. HMOs; HMDIs; and other entities, such as fraternal organizations and title companies, accounted for the remaining 15 percent. To update our previous assessment of the regulatory oversight of the insurance industry’s Year 2000 readiness, we interviewed NAIC officials and reviewed documentation related to NAIC’s efforts to facilitate state oversight of the industry’s Year 2000 readiness. We also reviewed available state examination reports and executive summaries covering companies’ Year 2000 preparations, which were available at NAIC. While we did not verify the accuracy of the reports, this review included well over 200 reports and summaries prepared by or on behalf of 22 states. To the extent available through NAIC, we present updates pertaining to regulatory oversight through November 1999. In addition, we conducted follow-up work of Year 2000 validation efforts at the same 17 state insurance departments on which we reported in April. 
Our follow-up work for the 17 states included (1) a second survey administered in July 1999 that covered their Year 2000 oversight activities, including examination efforts in the area; (2) site visits to 6 of the 17 states to interview regulatory officials and review guidelines for conducting Year 2000-related examinations as well as available reports, summaries, and workpapers covering companies’ Year 2000 preparations; and (3) additional contacts in October 1999 with regulatory officials from each of the 17 states for a final update of their Year 2000-related examination efforts. The domiciliary companies of these 17 state insurance departments collectively accounted for 76 percent of the insurance sold nationally during 1998. See appendix I for a list of the 17 states and their respective domiciled insurers’ market shares. Our review of examination-related documents was limited by restrictions at two of the states we visited and at NAIC, which had examination report summaries for the same two states. For one state, regulatory officials cited an existing law that restricted access to its examination reports and related workpapers by external parties. Regulatory officials for another state explained that, under special agreements reached with insurers prior to conducting Year 2000 examinations, their department was precluded from sharing examination-related documents with other states or external entities without the consent of the companies involved. Although one of the two states provided some limited access to their examination documents, we were unable to independently verify the adequacy of Year 2000 examination efforts for either state. To determine the status of the insurance industry’s Year 2000 readiness, we surveyed all 50 state insurance departments on the state of readiness of their domiciled companies as of September 30 and the extent of the departments’ on-site verification efforts. 
For each state, NAIC provided a Year 2000 contact to assist us in this survey effort. Appendix II contains a copy of the Year 2000 survey we administered to the states. With NAIC’s assistance, we obtained a 100-percent response rate from the 50 states. To obtain updated insights regarding the industry’s Year 2000 outlook pertaining to readiness and liability exposure issues, we contacted representatives of key rating companies, including A.M. Best Company, Standard and Poor’s, Moody’s Investors Service, and Weiss Ratings, Inc. We also obtained and reviewed information from (1) the Gartner Group, which is a business and technology advisory company that conducts research on the global state of Year 2000 readiness; (2) the American Academy of Actuaries, which is a public policy organization that presents actuarial analyses, comments on proposed federal regulation, and works with state officials on insurance-related issues; and (3) the Casualty Actuarial Society, which is a professional organization to advance knowledge of actuarial science applied to property, casualty, and similar risk exposures. In addition, we spoke to representatives of Milliman and Robertson, Inc., an actuarial and consulting firm, and the American Bar Association. We performed our work between June 1999 and December 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from NAIC. Its written comments, which are included in appendix III, are discussed near the end of this letter. Since March 1999, NAIC has stepped up its Year 2000 efforts by (1) issuing expanded guidance to state regulators on how to examine companies’ preparedness and (2) encouraging state regulators to do on-site validation reviews of companies with the greatest potential public impact. NAIC reports that many of the nation’s state regulators have also made substantial progress in conducting Year 2000 validation reviews. 
They were projected to complete, by the end of November, on-site Year 2000 reviews for 91 percent of the nationally significant companies that accounted for about 84 percent of the direct premiums written by life/health and property/casualty insurers in 1998, according to NAIC. Despite this progress, uncertainties remain regarding the extent to which on-site validation reviews have been conducted for some states’ companies, including some major health insurers and other segments of the insurance industry, such as HMOs and managed care organizations. In November 1999, NAIC was still in the process of quantifying the extent to which on-site verification was conducted at some of the major health insurers that did not fall into the category that NAIC had designated as nationally significant and at the larger managed care organizations, which had not been specifically covered by NAIC’s earlier efforts.

After we reported on the insurance industry’s Year 2000 readiness in March and April 1999, NAIC stepped up its efforts to facilitate state actions to verify insurers’ reported information on their Year 2000 preparations. For example, one undertaking involved NAIC’s provision of expanded examination guidance for assessing companies’ Year 2000 preparations and related training. Another important part of NAIC’s stepped-up efforts has been its initiative aimed at prioritizing companies for review and encouraging states to perform on-site validation reviews. With only 9 months remaining before 2000 and 5,247 state-regulated insurance companies to account for, NAIC developed a pragmatic approach of focusing on the companies with the greatest potential impact on the public if they were to experience major computer problems. NAIC also worked with the states and encouraged them to implement this focused approach.
In April 1999, NAIC provided the states with an enhanced version of the Financial Examiners Handbook, which provided additional guidance for performing Year 2000 readiness reviews. According to NAIC, the guidance was borrowed from audit programs developed by the Federal Financial Institutions Examination Council for federal examiners’ reviews of the Year 2000 readiness of U.S. financial institutions and a few of the state insurance departments that had been especially active in their Year 2000 oversight. NAIC also contracted for the services of a national consulting firm to develop and provide training to help state examiners better understand the review procedures and assist them in incorporating the procedures into their examinations. According to an NAIC official, this 2-day training, which was provided during the latter part of April in Atlanta, Chicago, and Denver, was attended by examiners representing almost 20 states. Compared to the timing of guidance provided by the banking and securities regulators, such Year 2000-related guidance and training would be considered late. However, we were told that this training and guidance were timely enough to be useful for some state insurance departments because they did not start their targeted Year 2000 examination process until mid-1999. In March, NAIC’s Year 2000 Industry Preparedness Task Force launched an initiative that was intended to (1) encourage states to perform on-site validation reviews and (2) determine the extent to which the states had verified the insurance industry’s Year 2000 readiness. The initiative focused on life/health and property/casualty insurance companies that NAIC had designated as nationally significant. This designation included 1,161 companies, located in 44 states and the District of Columbia, that were responsible for almost $650 billion in total premiums written during 1998. 
According to NAIC information, the insurance industry is relatively concentrated, with nationally significant companies representing approximately 86 percent of the premiums written for the life/health and property/casualty segments in 1998 and 27 percent of the 4,325 companies in the two insurer segments. It is also noteworthy, however, that many insurers that far exceeded NAIC’s criterion for the level of direct premiums written were not considered nationally significant because they did not meet the second criterion of being licensed in 17 states or more. These companies tended to conduct a significant amount of business on a more localized rather than national basis. We noted, for example, that 36 life/health insurers and 195 property/casualty insurers that each wrote more than $100 million in direct premiums during 1998 were not covered by NAIC’s nationally significant designation.

Over the past several months, NAIC’s Year 2000 initiative focusing on nationally significant companies has involved an ongoing, interactive process with the individual state insurance departments. In April, NAIC administered a survey to all 50 states and the District of Columbia to develop preliminary baseline information on state efforts to conduct on-site examinations of companies’ Year 2000 compliance status, particularly the compliance of nationally significant companies. From June through August 1999, NAIC also facilitated a series of conference calls that included members of the Year 2000 Industry Preparedness Task Force and, successively, representatives from each of the 50 states. According to NAIC officials, these conference call discussions focused on each state’s general approach to overseeing the industry’s Year 2000 preparations as well as its efforts to conduct on-site examinations to verify the Year 2000 compliance status of its domiciled insurance companies, particularly its nationally significant companies.
These conference calls were a key mechanism that NAIC used to encourage states to conduct more on-site verification reviews and facilitate critical Year 2000 information-sharing among the participating states regarding, for example, licensed companies that wrote a large amount of insurance in a state but were domiciled elsewhere. Finally, these conference calls with each of the states enabled NAIC to quantify on a national basis the extent of the states’ on-site verification reviews of their nationally significant companies. To document its Year 2000 initiative, NAIC has maintained a summary schedule of all nationally significant companies with information on, among other things, whether each company had been subject to an on-site Year 2000 verification review. A company was considered to have been subject to an on-site verification review if the state of domicile, or another state where the company was licensed and doing business, indicated to the task force that a review had been completed or was scheduled to be completed by the end of September. A company was also considered to have been subject to an on-site verification review if it had been indirectly covered or would have been covered by the end of September through an on-site review of an affiliated company with which its computer system was fully integrated. On the basis of information obtained from the conference calls that were completed in August 1999, NAIC reported that 1,037 nationally significant companies were to have been subject to an on-site Year 2000 review, 106 were not to have been subject to an on-site review, and the remaining 18 discontinued operations during 1999. The task force directed additional attention to certain companies that were viewed to be of particular concern. In a few cases, for example, NAIC officials noted that a state reconsidered its original position that an on-site verification was not needed. 
In one situation, NAIC provided financial assistance to facilitate the Year 2000 examination of a few key nationally significant companies in a particular state. In another case, a state agreed to conduct a targeted Year 2000 examination for a company domiciled in another state that had no plans to conduct on-site verification of the company. By November 1999, NAIC reported that the number of companies that would be subject to an on-site Year 2000 review by the end of November had increased to 1,059 companies, and it reported that the remaining 84 companies would not be subject to an on-site review. NAIC officials explained that, for the most part, the task force was satisfied with the level of information available on the remaining nationally significant companies that were not to be subject to on-site verification. NAIC has also taken the position that some comprehensive surveys were thorough enough to be equivalent to an examination. As we stated in our April 1999 report and continue to believe, the use of Year 2000 examinations is a principal mechanism for verifying self-reported information and providing assurances pertaining to the Year 2000 progress and readiness of regulated institutions. The ability to provide such assurances is particularly important for the industry’s nationally significant companies and others that do a substantial amount of business. In total, the number of companies that were to be subject to an on-site Year 2000 review by the end of November represented 98 percent of the direct premiums written by nationally significant companies. Regarding the total life/health and property/casualty insurer segments, the identified coverage through NAIC’s initiative suggests that companies that accounted for at least 84 percent of the direct premiums written during 1998 had been or were to have been subject to an on-site Year 2000 review by the end of November. 
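The company counts in NAIC’s August and November snapshots can be reconciled with simple arithmetic; the sketch below uses only the figures cited above (the variable names and the consistency check are ours, not NAIC’s):

```python
# NAIC snapshot from the conference calls completed in August 1999
aug_reviewed = 1037        # subject to an on-site Year 2000 review
aug_not_reviewed = 106     # not subject to an on-site review
discontinued = 18          # discontinued operations during 1999

# NAIC snapshot reported in November 1999
nov_reviewed = 1059
nov_not_reviewed = 84

# Both snapshots describe the same population of nationally
# significant companies, so the totals should agree
aug_total = aug_reviewed + aug_not_reviewed + discontinued
nov_total = nov_reviewed + nov_not_reviewed + discontinued
assert aug_total == nov_total == 1161

# Between the two snapshots, 22 companies moved from the
# "not subject to review" category into the "reviewed" category
shift = nov_reviewed - aug_reviewed
print(aug_total, shift)  # 1161 22
```

The equal movement out of one category and into the other (22 companies each way) is consistent with the report’s account of states reconsidering their original positions on on-site verification.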
The extent of on-site validation for the rest of the industry (including large HMO and HMDI companies), which accounted for an additional $137 billion in direct premiums written during 1998, was still unknown as of November 1999. In October 1999, NAIC reported that the Year 2000 Industry Preparedness Task Force had recently expanded the scope of its review process to include the nation’s largest managed care organizations together with all of the Blue Cross and Blue Shield Plans, which represented some of the major health care insurers not designated as nationally significant. NAIC estimated that the companies that fall into this category represent about 80 percent of the direct premiums written for all HMOs and HMDIs in 1998. The first task force conference to collect and summarize information on the status of on-site Year 2000 reviews for these managed care organizations was held in November 1999. NAIC officials acknowledged that with the Year 2000 deadline close at hand, the task force’s main objective was to quantify the extent of on-site verification that had been completed and identify any companies that may be of regulatory concern. During 1999, most of the 17 state insurance regulators we reviewed increased their efforts to conduct targeted examinations that were aimed at verifying companies’ Year 2000 readiness. In the beginning of the year, 10 of the 17 states were in the process of conducting targeted examinations or were planning to conduct such examinations, and the remaining 7 states were either not planning to conduct such examinations or were uncertain whether they were going to conduct them, as shown in table 1. By June 1999, the 17 states were either in the process of conducting targeted examinations or indicated that they planned to conduct them. 
Some states that started targeted Year 2000 examinations in the middle of 1999 used expedited approaches, such as suspending their regular financial examination process to devote their examiners solely to targeted Year 2000 examinations or hiring one or more private consultants to conduct such examinations in a short period of time. By the end of September, all of the 17 states we reviewed had either completed or were in the process of completing their targeted Year 2000 examinations. Specifically, eight states had finished the fieldwork for over one-half of the companies that were targeted to be examined, but corresponding reports for many of these examinations were still pending. We were told that one state was waiting for the completion of all its examination fieldwork before issuing a single summary report for all of its domiciled companies, rather than issuing separate reports for each company. A few states projected that they would not complete their Year 2000 examination process until the end of November, leaving little time for correcting identified deficiencies before the date change. In conducting targeted Year 2000 examinations, the states generally said they used the enhanced guidance that NAIC provided in April, guidelines developed by contractors, or, in some cases, both to improve the quality and consistency of their validation efforts. Our review of guidelines provided by the six states we visited indicated that they covered all key areas of Year 2000 conversion cited in our Assessment Guide as well as those areas cited in the federal banking regulator examination guidelines. In turn, our review of reports available for 22 states’ targeted Year 2000 examinations indicated that the reports systematically addressed all major guideline components and gave particular emphasis to companies’ contingency planning efforts. 
Like the banking industry, insurers depend on date-sensitive calculations involving, for example, annuities, policy renewals, and claims processing. Recognizing their industry’s high level of date sensitivity, the nation’s banking regulators have completed multiple rounds of on-site examinations for all financial institutions under their jurisdiction. Although 6 of the 17 states we reviewed indicated that their overall goal was to conduct 1 round of targeted examinations for all of their domiciled insurance companies, the remaining 11 states had established varying goals regarding which and how many companies would be subject to targeted Year 2000 examinations. These states’ goals were to cover from 6 to 76 percent of the domiciled companies within their jurisdictions. For the most part, these goals attempted to cover the states’ nationally significant companies. Two exceptions were states that had not planned to conduct on-site examinations for more than one-half of their nationally significant companies. One state official explained that a decision was made at the commissioner’s level that the limited time and staff resources available dictated that the state focus its on-site verification efforts on a select number of key companies. Some state officials believed that insurance companies had a clear incentive to become Year 2000 ready to maintain their business in a highly competitive industry and, therefore, did not require a great deal of regulatory prodding in the area. 
We found that the extent of such information available to state regulators ranged from one department that had, among other things, access to required quarterly reports of its companies’ Year 2000 progress since 1998 to one that relied primarily on company responses to a few Year 2000 surveys. The latter is of particular concern since the absence of corroborating evidence obtained through on-site verification or multiple contacts with companies to track their progress diminishes the extent of regulatory assurances about Year 2000 readiness. A few state officials also explained that some small companies were not sufficiently computer dependent (e.g., a company may use a single personal computer to conduct business) to experience major problems with the Year 2000 date change and warrant the need for an on-site verification. As we reported in April 1999, 2 of the 17 states we reviewed were comparatively more active in their efforts to ensure that insurance companies become Year 2000 ready. These states opted to forgo on-site examinations for some of their domiciled companies because of a comfort level that officials explained was derived from their close tracking of or continuous interaction with certain companies over time. In some cases, they chose instead to conduct targeted Year 2000 examinations for certain insurance companies that were licensed to write business but were not domiciled in the state. As of September 30, 1 of the 2 states had examined as many as 378 such licensed companies, and the other state had examined 29. Some of the states of domicile for these companies, as well as the Year 2000 Industry Preparedness Task Force, ultimately ended up relying on many of the targeted Year 2000 examinations conducted by the two licensing states to verify their readiness. Information gathered by NAIC’s Year 2000 Industry Preparedness Task Force and responses to our survey of 50 states on U.S. 
insurers’ Year 2000 readiness indicate that regulators have considerable confidence in the insurance industry’s readiness for Year 2000. In October, NAIC estimated that 3 percent of the nation’s nationally significant insurers had not made their systems Year 2000 ready. State regulators’ responses to our survey indicated that 78 percent of all domiciled insurance companies were considered to be Year 2000 ready and making satisfactory progress in their contingency planning activities as of September 30. Of the remaining 22 percent of these companies, 17 percent, although they had not completed their preparations as of September 30, were expected to become ready by December 31. Uncertainties about the status of the remaining 5 percent were largely unresolved at the time of our survey. With one exception, rating companies and consultants we contacted were generally optimistic about the insurance industry’s Year 2000 outlook. However, some industry observers have raised questions about liability exposure issues, such as the coverage of Year 2000 remediation costs. They have also expressed concerns about insurance companies’ inability to accurately report their potential Year 2000-related liability exposures on their financial statements. In an October 1999 press release, NAIC’s Year 2000 task force reported that information obtained from its initiative focusing on nationally significant insurers indicates that the insurance industry is expected to experience little disruption when 2000 begins. The task force pointed out that state assessments of insurers’ readiness have identified a relatively small number of insurers for follow-up and continued monitoring. It also noted that regulators were expecting few problems in the new year, estimating that only 3 percent of the industry’s nationally significant companies had not made their systems Year 2000 ready. 
The task force chairman further stated that efforts by individual states indicated that most of the companies that were not designated as nationally significant were also on schedule, but data on the extent of validation efforts conducted for these companies had not been compiled at the time of our fieldwork in November. Our review of the 17 states previously discussed was intended to provide information on the status of regulatory oversight efforts. Separate from this effort, we conducted a survey of all 50 state insurance departments to obtain information on the Year 2000 readiness of insurance companies domiciled in each state as of September 30. Appendix IV provides the number of insurance companies by type of company identified by the 50 state respondents. For purposes of the survey, a company was to be considered Year 2000 ready if the regulator was satisfied that the company had made adequate efforts to complete Year 2000 remediation, testing, and implementation activities for all mission-critical systems in preparation for 2000. Although our survey collected data on companies’ Year 2000 contingency planning activities, we did not specify that companies should have completed such activities to be considered Year 2000 ready. This definition of Year 2000 readiness was consistent with NAIC’s industry expectation that the last 6 months of 1999 should be used by companies to focus on less critical applications and systems and develop contingency plans in the event of a failure. Individual states were to base their responses to our questions about companies’ readiness and contingency planning activities on information obtained through their Year 2000 oversight efforts. 
State oversight efforts pertaining to Year 2000 could include (1) surveys administered to obtain information on companies’ Year 2000 preparations, (2) required Year 2000 disclosures with financial report filings, and (3) on-site verification reviews conducted as part of the state’s regular financial examination cycle or its targeted Year 2000 examination program. State responses to survey questions on the number of Year 2000 examinations conducted in 1998 and 1999 indicated that states were engaged in varying levels of on-site verification. Table 2 shows the proportion of states’ domiciled companies that, as of September 30, had been subject to an on-site verification review of their Year 2000 readiness. Our survey indicated that state regulators had considerable confidence about the adequacy of the insurance industry’s preparation for the Year 2000 date change. Seventy-eight percent of the states’ domiciled insurance companies were considered Year 2000 ready as of September 30 (see fig. 1). State regulators also generally viewed these insurers as making satisfactory progress in their contingency planning efforts. The remaining 22 percent of the states’ domiciled insurance companies represented companies that were (1) not Year 2000 ready by September 30, but that were projected to be ready by December 31; (2) not subject to categorization due to the lack of adequate information to determine their readiness status; or (3) considered at risk of not being ready. Some uncertainties exist, specific to the companies included in the last two categories, about their ability to become fully ready by the end of the year. These categories are discussed in the following sections. State responses indicated that 17 percent of their domiciled insurance companies were not Year 2000 ready by September 30 but were projected to be ready by December 31. These companies missed NAIC’s milestone calling for all mission-critical systems to be Year 2000 ready by June 30, 1999. 
States estimated that on average, 84 percent of the companies progressing toward becoming Year 2000 ready by December 31 were considered to be making satisfactory progress in their contingency planning efforts, which suggests that the remaining companies were not making adequate progress. This lack of adequate progress is of particular concern for companies that may be fully preoccupied with remediating their mission-critical systems during the last quarter of the year, leaving them little time to attend to their contingency plans. These companies would also have an increased likelihood of a system failure if any of their compliant mission-critical systems happen to be integrated with less critical systems that have not been fully remediated. Viable contingency plans are especially important for larger companies that may have complex systems that were not projected to be Year 2000 ready until the end of the year. Survey responses indicated that while 887, or 83 percent, of the companies not ready by September 30, but projected to be Year 2000 ready by December 31, were small, the remaining 188 companies each wrote $100 million or more in net premiums nationwide, as shown in table 3. The states indicated that as of September 30 they lacked sufficient information to determine the Year 2000 readiness of 4 percent of their domiciled insurance companies. Specifically, 14 states placed 235 companies in this category. Many of these states indicated that they had some self-reported information from these companies, such as responses to the state’s Year 2000 surveys or the company’s Management Discussion and Analysis Year 2000 disclosures. However, these states believed that they could not determine their readiness from information that had not been corroborated by an on-site verification review. 
One state official, for example, explained that although survey responses did not indicate any reason to question the prospective readiness of these companies, on-site examinations had not been performed to verify survey information or determine the companies’ readiness. On the basis of similar reasoning, an official from another state noted that he was waiting for the results of ongoing examinations at some companies before reaching any conclusions about their readiness. One situation involved a state where the insurance department was responsible for monitoring the annual statements submitted by HMOs but depended on another state department for survey and examination information. The insurance department was told that a Year 2000 survey had been administered to the HMOs, but their responses had not yet been received. Therefore, the insurance department identified these HMOs as companies for which it did not have adequate information to determine their Year 2000 readiness. In response to our survey, the states reported that about 1 percent of their domiciled insurance companies were viewed as at risk of not being Year 2000 ready by the end of the year. Specifically, 13 states identified 49 companies, consisting mostly of small property/casualty insurers, that they considered at risk of not being ready by the end of the year, as shown in table 4. Thirty-eight of the companies considered at risk were small, 9 were medium, and the remaining 2 were large HMOs. On average, states estimated that 57 percent of these at-risk companies had not made satisfactory progress in their contingency planning efforts as of September 30, 1999. Virtually all of the states that identified companies as at risk of not being Year 2000 ready by the end of the year noted that they would continue to focus on the adequacy of these companies’ contingency plans during the last quarter of 1999. 
Other actions the states planned to take to deal with at-risk companies included conducting management conferences and requiring monthly Year 2000 progress reports. In a few isolated cases, a state had resorted to or was planning to resort to enforcement actions. In reviewing state survey responses regarding the Year 2000 readiness of companies by type, we found that states identified a slightly lower proportion of HMOs considered to be Year 2000 ready when compared to insurers in other categories. As of September 30, 1999, 69 percent of the states’ companies classified as HMOs, including managed care organizations, were considered Year 2000 ready. Of the remaining 31 percent, 23 percent had mission-critical systems that were not Year 2000 ready as of September 30 but that were projected to be ready by December 31, 6 percent were not subject to categorization due to the lack of adequate information, and 2 percent were considered at risk of not being ready by December 31. In contrast, 72 to 80 percent of insurers in the property/casualty, life/health, and other insurer categories were considered Year 2000 ready. Appendix V provides a graphic that compares the readiness status of companies by type of insurer. According to a task force official, health insurers represent one area that remains vulnerable because such insurers depend on hospitals and doctors’ offices becoming Year 2000 ready. A recent report issued by the President’s Council on Year 2000 Conversion states that many health care providers and managed care organizations continue to exhibit troubling levels of readiness. The report refers to July/August survey data indicating that (1) only 40 percent of the health care providers and organizations reported that they were Year 2000 ready and (2) roughly 25 percent of the organizations did not have documented Year 2000 plans. 
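The survey breakdowns described above partition their populations cleanly; a small illustrative tally (the structure and variable names are ours) using the percentages and counts cited in this report:

```python
# Readiness of all domiciled companies as of September 30, 1999,
# per state responses to our survey (percent of companies)
all_companies = {
    "ready": 78,               # considered Year 2000 ready
    "projected_ready": 17,     # not ready, but projected ready by Dec. 31
    "insufficient_info": 4,    # readiness could not be determined
    "at_risk": 1,              # at risk of not being ready
}

# Readiness of companies classified as HMOs, including managed
# care organizations (percent of companies)
hmos = {
    "ready": 69,
    "projected_ready": 23,
    "insufficient_info": 6,
    "at_risk": 2,
}

# Each breakdown should account for the full population
for breakdown in (all_companies, hmos):
    assert sum(breakdown.values()) == 100

# Size breakdown of the 49 companies that 13 states considered
# at risk of not being ready by the end of the year
at_risk_by_size = {"small": 38, "medium": 9, "large_hmo": 2}
assert sum(at_risk_by_size.values()) == 49
print("breakdowns are internally consistent")
```

The check confirms that the categories reported for the industry as a whole, for HMOs, and for the at-risk companies each sum to their stated totals.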
In July 1999, we reported that many surveys had been completed in 1999 on the Year 2000 readiness of health care providers, but none provided sufficient information with which to assess the Year 2000 status of the health-care-provider community. We later testified in September 1999 that the Health Care Financing Administration, with assistance from a contractor, performed a Year 2000 risk assessment of 425 managed care organizations. This June 1999 risk assessment identified 22 percent of the organizations as being high risk, 74 percent as medium risk, and 17 percent as low risk. During our fieldwork in November, we were told that NAIC was in contact with the Health Care Financing Administration to determine whether any of the managed care organizations assessed by the agency overlapped with those that were regulated by the state insurance departments. The industry observers we contacted generally maintained a favorable view of the insurance industry’s Year 2000 preparedness efforts, but they continued to express uncertainty over potential costs associated with Year 2000-related liability exposures. With one exception, rating companies and consultants with whom we spoke have remained confident about the industry’s efforts to prepare and become ready for 2000. For instance, the Gartner Group continues to place the insurance industry among the industry leaders in becoming Year 2000 ready, on the basis of its August report. Likewise, several rating companies we contacted, including Standard and Poor’s; A.M. Best; and Moody’s, indicated that, as of October, they had not downgraded any insurer’s rating due to Year 2000 readiness issues. One rating firm, Weiss Ratings, Inc., tempered the generally optimistic view of readiness in the industry by reporting that 13 percent of the companies responding to its survey had made inadequate progress in their Year 2000 preparations. The response rate to this June 1999 survey was about 19 percent. 
While Weiss Ratings, Inc., considered a company’s progress to be inadequate if its mission-critical systems were not renovated and tested by August 1999, it also indicated that those companies viewed to be making inadequate progress still had a good chance of achieving Year 2000 compliance in the time remaining. Considerable uncertainty remains concerning the potential magnitude of insurers’ Year 2000-related liability exposures. In our April report, we noted that insurers’ liability exposures could not then be reasonably estimated because, among other factors, a claims history for the event did not exist and questions about key legal issues that could affect insurance policy coverage were still unresolved. Since then, the industry observers we contacted have continued to express uncertainties. The rating companies we contacted in October indicated that it was still too early to tell how liability exposures might affect insurance companies. For this reason, the rating companies had not factored liability exposures into their ratings. One actuarial consulting firm estimated that costs from Year 2000-related claims and legal expenses among U.S. property/casualty insurers could range between $15 billion and $35 billion. The firm acknowledged that its estimates were based on several assumptions associated with claims and legal outcomes that have not yet been realized. The industry observers we contacted generally said that insurance companies will not likely be in a position to report their potential Year 2000-related liability exposures on their 1999 financial statements, because their liability exposures are not yet reasonably estimable due to uncertainties over claims and legal outcomes. Legal debates over insurance coverage for Year 2000-related mishaps, as well as for costs to avoid such mishaps, have yet to be fully resolved. 
Our previous report described some of the legal debates associated with coverage for Year 2000-related problems and damages, including the coverage-related issues of “fortuity” and “triggers.” For example, it is not clear whether a Year 2000-related loss would be considered a fortuitous event covered by insurance, rather than an expected event that may not be covered. For some types of policies, coverage also depends on what event triggers coverage. Specifically, questions arise as to when coverage is activated for a particular policy. Other debates focus on whether an insured party that takes remedial action to reduce its vulnerability to Year 2000-related problems may recover remediation costs under a particular policy. The Y2K Act, enacted in July 1999, does not directly address many of the unresolved legal issues that could affect insurers’ potential liability exposures. The primary purposes of the act include facilitating alternative modes of dispute resolution, limiting certain liabilities for Year 2000-related claims, and providing pre-Year 2000 remedial measures aimed at reducing an insured’s vulnerability to Year 2000-related mishaps. The act also sets forth procedural requirements for class action suits and affirmative defenses for temporary noncompliance with certain federal standards caused by a Year 2000-related problem. The act, however, does not contain substantive standards to guide courts in deciding whether Year 2000-related mishaps or remediation costs should be covered. Such issues will principally be a matter of state law. Some current cases involving insurance coverage disputes and additional provisions of the Y2K Act are described in appendix VI. 
Since we first expressed concerns about the states’ regulatory oversight of the insurance industry in March 1999, NAIC has actively emphasized the value of state regulators’ validating insurance companies’ Year 2000 preparations through on-site examinations, particularly examinations of nationally significant insurers. We also found that, partly in response to this emphasis, some states have increased their efforts to conduct on-site examinations of their domiciled insurance companies’ Year 2000 preparations. Even with this increased emphasis, however, not all of the nation’s insurance companies will be subject to an on-site verification review. Gaps in Year 2000 verification coverage of the insurance industry can be attributed to several factors. One factor has been the late start of some states in conducting on-site verification reviews and the resulting need for them to focus on the potentially higher-impact companies rather than conducting examinations of all domiciled insurance companies. Another factor has been regulators’ belief that, in some cases, comprehensive surveys were acceptable substitutes for examinations. Differing regulatory perspectives have constituted yet another factor contributing to state decisions to forgo on-site examinations for some companies. Such perspectives ranged from satisfaction with the adequacy of off-site monitoring in a few states that had closely tracked the progress of their companies over the last few years to the view in a few other states that Year 2000 readiness would likely be adequately covered by insurers motivated to remain competitive without regulatory prodding. Finally, insurance regulators have indicated that some small insurance companies are not sufficiently dependent on computers to experience major problems with the Year 2000 date change. 
The strategy promulgated by NAIC to focus on nationally significant companies appears reasonable given the large number of state-regulated insurance companies subject to oversight and the limited time remaining before 2000. However, states’ generally late start in assuming a more proactive role regarding Year 2000 has affected their ability to complete their regulatory oversight of all the insurers they supervise. As of mid-November, for example, some states had not finished the planned on-site validation process for their companies, and NAIC was still in the process of collecting information about states’ regulatory assessments of the nation’s largest managed care organizations and major health insurers not designated as nationally significant. In addition, uncertainties exist about the readiness outcome for the 4 percent of the companies for which regulators did not have sufficient information and the 1 percent of the companies that regulators viewed as at risk of not being ready by December 31 but were closely monitoring. Although they were projected to be ready by the end of the year, questions also remain unresolved regarding whether all of the 17 percent of the insurance companies that were not ready as of September 30, 1999, can complete all conversion activities to become fully compliant within the remaining time. Lastly, and arguably outside of the regulators’ control, uncertainties regarding insurers’ Year 2000 liability exposures continue to represent an area of concern that is being monitored by rating companies and other industry observers and that can affect the overall Year 2000 outlook of the insurance industry. In summary, the intensive regulatory activity of the past several months provides additional support for the level of confidence that regulators place on the insurance industry’s Year 2000 preparations and their belief that most policyholders should not be concerned about their coverage. 
However, as previously indicated, remaining gaps in the on-site verification of insurance companies’ Year 2000 readiness and unfinished regulatory efforts in the area leave uncertainties about the self-reported status of some companies’ readiness. As a point of comparison, banking regulators, who have conducted multiple examinations of all their financial institutions, can provide stronger assurances to support their assertions that, with relatively few exceptions, all banks were Year 2000 ready as of September 30, 1999. Insurance regulators, on the other hand, can say with some conviction that most insurance is sold by companies that are Year 2000 ready or appear to be on course to become ready by the end of the year. It remains true that a portion of the industry had not completed its Year 2000 preparations by September 30, 1999, and that some of these companies had not made satisfactory progress in contingency planning. However, the welcome news is that most consumers, especially those insured by nationally significant companies, can have greater confidence that their insurers will likely provide uninterrupted services into the new year. NAIC provided written comments on a draft of this report. A reprint of NAIC’s letter can be found in appendix III. NAIC disagrees with what it perceives as an assertion that, because states have not performed on-site verification of every insurance company’s Year 2000 preparations, the states were unable to complete their regulatory oversight of the industry. As noted on page 22, we attribute the inability of states to complete their regulatory oversight of the industry to states’ generally late start in assuming a more proactive role regarding Year 2000. This late start, among other factors, has caused some states to forgo conducting Year 2000 on-site verifications for some of their domiciled companies, which has, in turn, limited the level of assurances to support regulatory assertions of their companies’ readiness. 
We acknowledge on page 10 that, according to NAIC information, 98 percent of the direct premiums written by nationally significant companies had been or were to be subject to an on-site Year 2000 review by the end of November. However, the extent of on-site validation for the life/health and property/casualty companies that were not nationally significant and the other insurer segments, which together represent 27 percent of the total direct premiums written by the industry as a whole, was unknown at the time of our review. We continue to believe that most consumers, especially those insured by nationally significant companies, can have greater confidence that their insurer will likely provide uninterrupted services into the new year. The same level of assurances, however, cannot be provided for the portion of the industry that may not have been subject to an on-site verification or for which the extent of on-site verification is unknown. NAIC believes that the draft report erroneously overemphasized statistics based on the number of insurance companies verified or the number of states performing on-site examinations and suggests that these statistics should focus on information supplied by the NAIC and, more emphatically, on premium-based statistics from our survey of the 50 states. Premium-based information supplied by the NAIC can be found throughout this report, but specifically on pages 2, 6, and 10 as it relates to the on-site verification of nationally significant companies. We did not provide premium-based statistics from our survey primarily because we found inconsistencies when we compared responses from many of the states to similar information provided by NAIC. For example, the total net premium volume written nationwide reported by 15 states to have been subject to an on-site verification was, on average, 126 percent more than the total net premium volume written nationwide identified by NAIC for each of these states. 
Such inconsistencies may have been the result of the states reporting gross premiums rather than net premiums as our survey requested or the result of duplicative counting for companies that may have been subject to Year 2000 on-site verifications conducted during both a regular and a targeted examination. Regardless of the reasons for such inconsistencies, they rendered the survey responses on net premiums subject to on-site verification unusable for reporting purposes. NAIC also indicated that statistics based on the number of companies impart an unnecessary negative bias because a significant number of companies either write a very small amount of premiums, are so small as to have no risk of Year 2000 failure, are dormant companies, or are companies that have been acquired by or merged into other insurers since 1998. To help minimize this type of bias, we have made an adjustment to the table on page 15 that shows the percentage of states’ domiciled companies that were subject to on-site verification examinations. Specifically, companies whose Year 2000 readiness status was not viewed by the states as relevant were excluded; this covered, for example, companies in liquidation, companies operating without computer systems, and shell companies with no business. The effect of this adjustment on the number of states falling into each category of on-site verifications conducted was minor. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its date. At that time, we will provide copies to Representative Thomas Bliley, Chairman, House Committee on Commerce, and Senator Robert Bennett, Chairman, and Senator Christopher Dodd, Vice Chairman, Senate Special Committee on the Year 2000 Technology Problem. We will also provide copies of this report to other interested parties and will make copies available to others on request. 
Key contributors to this assignment are acknowledged in appendix VII. Please call me or Lawrence Cluff on (202) 512-8678 if you or your staff have any questions. We focused part of our review on the same 17 state insurance departments visited during our previous review. These state departments’ domiciled insurance companies collectively accounted for almost 76 percent of insurance sold nationally during 1998. The departments represented the top 12 states, whose domiciled companies had combined market shares ranging from 3.4 percent to 14 percent, and 5 states with relatively smaller market shares ranging from 0.3 to 2.1 percent. As previously noted, our April report described some of the prevalent issues involved in coverage disputes. Some recent court cases raised a new issue involving businesses seeking to recover from their insurers remediation costs they have incurred in their efforts to help prevent or reduce the costs of Year 2000-related mishaps. On the basis of an interpretation of a provision commonly referred to as the “sue and labor” clause, in some of the cases, the insured entities claimed that their insurance policies cover remediation costs. Some insurance policies, generally property/casualty insurance policies, contain a sue and labor provision accompanied by language specifically obligating the insurer to contribute to the expenses incurred by the insured in acting under the provision. The sue and labor clause originated centuries ago in ocean marine policies. Its purpose was to encourage or require policyholders to prevent or minimize imminent potential loss or damage covered by the policy without forfeiting recovery under the policy, thereby reducing the insured loss. According to one commentator, the classic example is the captain who orders the crew to jettison cargo to prevent the ship from foundering in stormy seas. The value of the jettisoned cargo is recoverable under the sue and labor clause. 
At least three cases were brought in 1999 seeking coverage of remediation costs under sue and labor provisions. The insured plaintiffs in those cases reportedly sought to recover remediation costs of at least $400 million and $183 million. Cases of this type contribute to the uncertainties associated with insurers’ potential liability exposure. Insurers opposing claims to recover remediation costs have raised several arguments. For example, they contend that remediation costs are covered only if they were incurred to protect against an insured loss, and that Year 2000 remediation costs are not insured losses. Among other things, costs recoverable pursuant to a sue and labor provision typically involve remedial measures to prevent or recover damage arising from covered events, such as lightning, fire, or theft. Insurers contend that Year 2000 remediation costs arise from a defect or inherent limitation in a product and not in connection with an insured event. In addition, insurers assert that the majority of losses an insured business seeks to protect against through remediation measures are not losses attributable to the physical loss or damage of insured property, but instead are uninsured economic losses, such as a decrease in market share, a loss of investor or consumer confidence, or regulatory sanctions. Another argument is that the loss to be minimized or avoided must be actual or imminent. According to this argument, the Year 2000 event should not be considered imminent, because insured entities have been aware of it for several years and in many cases began remedial measures as early as the mid-1990s. Insurers argue that such remediation costs should be considered ordinary costs of doing business. The Y2K Act does not create any new causes of action. 
Instead, for “Year 2000 actions,” it modifies existing state or federal procedures and remedies concerning nonpersonal injury liability arising from Year 2000 failures. Among other things, the act (1) requires that notice of a Year 2000 claim be given to potential defendants before a “Year 2000 action” is filed; (2) establishes heightened pleading requirements; (3) sets caps and limitations on punitive damage awards; and (4) provides for the apportionment of damages, rather than joint and several liability, except in cases where it is found that the defendants acted with specific intent to injure the plaintiff or knowingly committed fraud. The act applies to any “Year 2000 action” brought after January 1, 1999, for an actual or potential “Year 2000 failure” occurring before January 1, 2003. The following is an overview of some provisions of the act that could have a direct or indirect impact on the amounts for which insurance companies may be liable. Most of the Y2K Act focuses on the litigation process. Under the notice provisions, prospective plaintiffs in a Year 2000 action (except for claims for injunctive relief) must send a written notice to each prospective defendant containing information about the pertinent event and including the remedy sought. Within 30 days after receiving the notice, the prospective defendant must provide each plaintiff with a written statement describing what, if any, remediation measures or alternative dispute resolution processes would be acceptable. If the defendant proposes a plan to remediate the problem, the prospective plaintiff must allow the prospective defendant an additional 60 days from the end of the 30-day notice period to complete the proposed remedial action before bringing suit. The purpose of this 90-day notice requirement is to create a procedure that might facilitate the parties’ resolution of the problem through voluntary efforts or through alternative dispute resolution. 
The Y2K Act also contains rules for pleading affirmative defenses, damages, warranty and liability disclaimers, proportionate liability, and class actions. One purpose of the pleading requirements is to reduce the potential for frivolous claims by requiring the plaintiff in a Year 2000 action to articulate certain bases for the claim and remedy. Provisions of the act for preserving warranties and contracts also are intended to discourage frivolous lawsuits. As previously discussed, the act generally requires that all written contract terms, including exclusions of liability and disclaimers of warranty, are to be strictly enforced unless those terms are contrary to any applicable state statute in effect as of January 1, 1999. The Y2K Act limits liability exposure and damages. Some limitations depend upon whether the lawsuit is a contract action or a tort action. For example, in tort actions, the act provides for proportionate liability, except with respect to certain suits brought by consumers. Generally, defendants will be liable only for that portion of a judgment that corresponds to their proportionate share of the total fault for the plaintiff’s loss, unless the defendants are found to have committed fraud in connection with the Year 2000 problem or to have specifically intended to injure the plaintiff. Such a finding would render the defendant jointly and severally liable. Other limitations include the following: (1) elimination of strict liability, (2) heightened proof requirements as a condition for recovering punitive damages and a cap on the amount of such damages for individuals with net worth of less than $500,000 and small employers, and (3) limitations on the recovery of certain “economic losses” of the plaintiff alleged in connection with a tort claim. 
These losses, which include lost profits, business interruption losses, and consequential and indirect damages, may be recovered only if a contract provides for their recovery or the losses result directly from damage to tangible real or personal property caused by the Year 2000 failure. The limitation does not apply to claims of intentional torts. For contract actions, no category of damages may be awarded unless such damages are allowed by the contract expressly or, if the contract is silent on the matter, under applicable state or federal law. The act contains a special provision that excludes from damages the amount the plaintiff reasonably could have avoided by utilizing any available or reasonably ascertainable information “concerning means of remedying or avoiding the Year 2000 failure involved in the action.” Prospective plaintiffs are provided with an incentive to take reasonable steps to limit their damages. This duty to mitigate is in addition to any such duty imposed under state law. The duty is not absolute, however. Where a defendant intentionally misrepresents facts concerning the potential for a Year 2000 failure in the “device or system used or sold by the defendant” that caused plaintiff’s harm, the plaintiff will be relieved from this statutory mitigation duty. Depending upon the effects of the settlement incentives and litigation-related limitations contained in the Y2K Act, insurance liability exposure could be less than would be the case if liabilities for Year 2000 actions were determined under less limiting state laws. Whether an insurance policy covers a particular Year 2000 event, however, could depend upon a number of factors, including the extent to which state laws and cases apply to insurance coverage litigation. In addition to the persons named above, Evelyn E. Aquino, Gerhard Brostrom, Barry A. Kirby, May M. Lee, Alexandra Martin-Arseneau, and Paul G. Thompson made key contributions to this report. 
Pursuant to a congressional request, GAO provided information on the readiness of the insurance industry to meet the year 2000 date change, focusing on: (1) an updated assessment, as of September 30, 1999, of state regulatory oversight of the insurance industry's year 2000 preparations; and (2) the status of the industry's year 2000 readiness. GAO noted that: (1) since GAO's last report, the National Association of Insurance Commissioners (NAIC) stepped up its efforts to assess the insurance industry's year 2000 readiness by: (a) issuing expanded guidance to state insurance regulators on how to examine companies' preparedness; and (b) encouraging state regulators to conduct on-site examinations of insurers with the greatest potential public impact; (2) some of the nation's state regulators increased their use of examinations aimed at verifying the year 2000 readiness of their insurers, particularly for their nationally significant life/health and property/casualty insurers; (3) six of the 17 states reviewed indicated that their goal was to conduct year 2000 readiness examinations for all of the insurance companies domiciled in their states; (4) the remaining 11 states had set varying goals regarding which companies were to be subject to year 2000 examinations, but most of these states attempted to cover their nationally significant insurers; (5) in October 1999, NAIC's Year 2000 Industry Preparedness Task Force reported the insurance industry expected to experience little disruption when 2000 begins; (6) state responses to a nationwide survey GAO conducted indicated considerable confidence in the insurance industry's preparation for the year 2000 date change; (7) uncertainties about the ability of the remaining 5 percent of the companies to be year 2000 ready were largely unresolved at time of survey; (8) regulators indicated they did not have adequate information to determine the readiness status for 4 percent of the companies and considered 1 percent to be at 
risk of not being ready by December; (9) states appeared to have a slightly lower level of confidence in the readiness of health maintenance organizations and managed care organizations than those in other insurance segments; (10) according to a task force official, health insurers represent one part of the industry that remains vulnerable because they depend on hospitals and doctors' offices becoming year 2000 ready; (11) industry observers continued to express uncertainty over potential costs associated with year 2000-related liability exposures; (12) legal debates had yet to be resolved over insurance coverage for year 2000-related mishaps as well as liability for costs that policyholders incur to avoid such mishaps; and (13) rating companies indicated that it was still too early to tell how liability exposures might affect insurance companies, and for this reason, the rating companies had not factored these exposures into their ratings.
Our assessment of IRS’s 2004 filing season performance was based on analyses of IRS data and information obtained from sources outside IRS, interviews with IRS officials and private sector tax practitioners, observations of IRS operations, and, for the comparison to previous years, on our past filing season reports. Specifically, we reviewed and analyzed IRS reports, testimonies, budget submissions, and other documents and data, including workload data and data from IRS’s current suite of balanced performance measures, which we used to assess performance this year; interviewed IRS officials about current operations, performance relative to 2004 goals and prior filing seasons, and significant factors and initiatives that affected performance; interviewed representatives of large private and non-profit organizations that prepare tax returns and trade organizations that represent both individual practitioners and tax preparation companies; reviewed related TIGTA reports and interviewed TIGTA officials; followed up on GAO recommendations made in prior filing season reports; tested for statistical differences between yearly changes for various IRS measures; analyzed information posted to IRS’s Internet Web site based on GAO’s knowledge of the type of information taxpayers look for, and assessed the ease of finding information, as well as the accuracy and currency of data on the site; reviewed information from companies that evaluated Internet performance and assessed various aspects of IRS's Web site; and reviewed staffing data for paper and electronic processing, telephone assistance, and walk-in assistance. This report discusses filing season performance measures and data covering the quality, accessibility, and timeliness of IRS’s services. We have previously reported that some of the performance measures IRS uses to assess aspects of its filing season performance had attributes of successful measures, including objectivity and reliability, although in some cases, the measures could be further refined. 
Since that report, IRS has made refinements in some measures. We also reviewed IRS documentation, interviewed IRS officials about computer systems and data limitations, and compared those results to GAO standards of data reliability. As a result, we determined that the IRS data we are reporting are sufficiently reliable for assessing IRS’s filing season performance. Data limitations are discussed where appropriate. We conducted our work at IRS headquarters in Washington, D.C.; the Small Business/Self-Employed Division headquarters in New Carrollton, Maryland; the Wage and Investment Division headquarters, the Joint Operations Center (which manages telephone service), and a telephone call site in Atlanta, Georgia; and walk-in and volunteer locations in Georgia, Maryland, and Virginia. We selected these offices for a variety of reasons, including the location of key IRS managers, such as those responsible for telephone, walk-in, and volunteer services. We performed our work from January through October 2004 in accordance with generally accepted government auditing standards. IRS’s filing season is an enormous and critical undertaking that includes two key activities—returns processing and taxpayer assistance—and consumes thousands of staff years annually. Processing of paper returns is labor-intensive and error-prone. IRS employees manually transcribe paper tax return information into IRS’s computer systems, which can introduce errors. Electronic filing allows taxpayers to receive refunds faster, and processing is less labor-intensive and error-prone than for paper returns. IRS does not have to transcribe electronic tax return information, and built-in checks eliminate many errors that IRS has to deal with when processing paper tax returns, such as computational mistakes and incorrect social security numbers. The rate for this type of error on electronic tax returns was almost 4 percent, compared to almost 25 percent on paper tax returns, as of July 9, 2004. 
To help taxpayers comply with their tax obligations, IRS provides various services at its call sites, walk-in sites, and on its Web site. Figure 1 shows how toll-free telephone calls from taxpayers typically are routed through IRS’s telephone system and answered by customer service representatives (CSRs) or by automated services. At IRS’s approximately 400 walk-in sites taxpayers ask tax law questions, get account information, receive assistance with their accounts, and have returns prepared (if annual gross income is $35,000 or less). In addition, low-income and elderly taxpayers get tax returns prepared at over 13,500 volunteer sites run by community-based coalitions that partner with IRS. IRS awards grants, trains and certifies volunteers, and provides reference materials, computer software and, in some cases, computers to these volunteer organizations. IRS’s Web site is important because it allows taxpayers to instantly download hundreds of tax forms and publications, access current information on tax issues and electronic filing, and ask IRS tax law and procedural questions. Since passage of the IRS Restructuring and Reform Act of 1998 (RRA 98), IRS has been focused on improving filing season services. In 2001, IRS established a suite of balanced performance measures. The system emphasizes accountability for achieving specific results and reflects IRS’s priorities, including providing quality service to each taxpayer in every interaction. As part of its strategic planning and budgetary processes, IRS establishes performance goals each fiscal year and uses them to hold managers and frontline staff more accountable for improving filing season performance. IRS processed individual income tax returns and issued refunds smoothly in 2004. IRS nearly met or exceeded many of its 2004 performance goals, with performance generally improving since 2001. 
However, despite continued growth this year and despite various initiatives to encourage electronic filing, IRS is not on track to achieve its long-term goal of having 80 percent of all individual tax returns filed electronically by 2007. As of September 17, 2004, IRS had processed about 128 million individual tax returns, including 67 million returns filed on paper, with no significant disruptions, and issued nearly 100 million refunds within specified tolerances. According to IRS data, IRS nearly met or exceeded seven of its eight processing performance goals in 2004. Similarly, 2004 performance nearly met or exceeded 2003 performance for six of the seven comparable measures. Appendix 1 provides details. Furthermore, as appendix 1 and the following examples show, IRS has generally improved its processing operations over a longer period. The percentage of notices with errors (notices sent to taxpayers about possible simple mistakes on their returns) has declined since 2002. In 2002, 18.7 percent of the notices were issued with errors, compared to 9.4 percent as of July 31, 2004 (the most current data available). The refund error rate, the percentage of refunds with IRS-caused errors (e.g., incorrect name or Social Security number), decreased from 9.8 percent in 2001 to 5.3 percent in 2003 and to 4.9 percent as of July 31, 2004 (the most current data available). Tax practitioners, who last year prepared approximately 62 percent of all individual income tax returns, agreed that the processing of returns in the 2004 filing season went smoothly. Representatives from the National Association of Enrolled Agents, the American Institute of Certified Public Accountants, and other tax-related organizations had positive comments on IRS’s 2004 filing season and processing. Similarly, TIGTA officials told us that IRS generally processed returns smoothly in 2004. 
IRS officials attributed this year’s performance in part to their planning for tax law changes, such as the advance child tax credit, and to the increase in electronic filing. The number of individual income tax returns that IRS received electronically continued to grow, and IRS exceeded its 2004 goals for the number and percentage of tax returns to be filed electronically. From January 16 through September 17, 2004, it had received an estimated 61.1 million individual tax returns electronically, or 47 percent of all returns filed to date. Also, the growth rate of 15.8 percent is greater than IRS’s projected growth rate of 13 percent for this year. Figure 2 shows that growth since 1996. According to IRS officials, the primary reason for the greater-than-expected growth rate is that five states mandated electronic filing of state tax returns prepared by qualified tax practitioners for 2004. According to these same officials, these mandates led to significantly more electronic filing of federal tax returns in these states because tax practitioners converted their entire practices to electronic filing. For example, in California and Michigan, the largest of the five states, the number of tax returns filed electronically increased from 4.7 million and 1.9 million in 2003 to 7.1 million and 2.6 million, respectively, as of May 2004. The current rate of growth of electronic filing, however, will not allow IRS to achieve the long-term goal, set by RRA 98, of having 80 percent of all returns filed electronically by 2007. Assuming a continuation of the current growth rates of 15.8 percent for individual returns filed electronically and 0.23 percent for the total number of individual tax returns filed, IRS would receive 73 percent of individual tax returns electronically by 2007. However, neither IRS nor the Electronic Tax Administration Advisory Committee (ETAAC) expects IRS to maintain this year’s growth rate. 
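The 73 percent projection can be checked with simple compound-growth arithmetic. The sketch below is our own illustration, not IRS's or GAO's model; it assumes the figures stated above (about 61.1 million electronic returns out of roughly 128 million total in 2004) and applies the 15.8 percent and 0.23 percent annual growth rates through 2007.

```python
# Back-of-the-envelope check of the 2007 electronic-filing projection.
# The 2004 base figures and growth rates come from the report; applying
# simple compound growth over 2005-2007 is an assumption on our part.

efile_2004 = 61.1e6    # individual returns filed electronically in 2004
total_2004 = 128e6     # all individual returns processed in 2004

efile_growth = 0.158   # assumed annual growth in electronic filing
total_growth = 0.0023  # assumed annual growth in total returns filed

efile, total = efile_2004, total_2004
for year in (2005, 2006, 2007):
    efile *= 1 + efile_growth
    total *= 1 + total_growth

share_2007 = efile / total
print(f"Projected electronic share in 2007: {share_2007:.1%}")  # roughly 73-74 percent
```

The result lands close to the report's 73 percent figure; small differences come from rounding in the published base numbers.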
In fact, IRS is predicting declining growth rates of about 11.5 percent, 9.9 percent, and 8.1 percent in 2005, 2006, and 2007, respectively. In its June 30, 2003, report to Congress, ETAAC concurred with IRS’s prediction of lower annual growth rates. IRS officials stated that achieving the 80 percent goal would require either that the taxpayers and tax practitioners who prepared 39 million individual income tax returns on a computer but filed them on paper instead file those returns electronically, or that legislation mandating electronic filing by tax practitioners be proposed and passed. Electronic filing is important because, according to IRS, it costs less to process electronic tax returns than paper tax returns. IRS estimates that it saves $2.15 on every individual tax return that is processed electronically. However, we cannot independently verify this estimate, and its basis is unclear because, as we have reported, IRS does not have a cost accounting system to support preparation of such cost estimates. Electronic filing has allowed IRS to close paper processing centers, devote fewer staff to the processing of tax returns, and control processing costs by shifting resources from labor-intensive paper return processing to other areas, such as compliance. For example, with the elimination of paper tax return processing at the Brookhaven Submission Processing Center, IRS used about 1,000 fewer staff years to process paper returns in 2003 than in 2002, and it plans additional staff-year savings when paper tax return processing at the Memphis Submission Processing Center is eliminated in 2005. (See app. 2 for more information on staff years.) Because increasing electronic filing is so important, IRS officials do not want to reduce the 2007 goal, even though IRS projects that it is not achievable. Retaining the goal serves as a symbol of their determination to take actions to convert taxpayers to electronic filing. 
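The per-return savings figure lends itself to a quick back-of-the-envelope calculation. The aggregate amount below is our own arithmetic based on the figures cited above, not a number reported by IRS or GAO, and it inherits the uncertainty noted about the $2.15 estimate itself.

```python
# Illustrative only: the $2.15 per-return savings estimate is IRS's, but
# the aggregate figure computed here is our back-of-the-envelope arithmetic,
# not an amount reported by IRS or GAO.

savings_per_return = 2.15        # IRS estimate, dollars per electronic return
efiled_returns_2004 = 61.1e6     # electronic returns received through Sept. 17, 2004

implied_savings = savings_per_return * efiled_returns_2004
print(f"Implied processing savings: ${implied_savings / 1e6:.0f} million")
```

At the 2004 volume, the estimate implies annual processing savings on the order of $130 million, which is why IRS is reluctant to abandon the 80 percent goal.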
Over the years, IRS has taken numerous actions to encourage electronic filing, including making electronic filing free to most taxpayers via the Free File Alliance Program, a program that began last year; surveying taxpayers and tax practitioners, in response to a recommendation in our 2001 filing season report, to determine why 40 million tax returns were prepared on a computer but filed on paper; making over 99 percent of all individual tax forms suitable for electronic filing; and making the process totally paperless if a person uses a personal identification number to sign the tax return. For the 2004 filing season, IRS took the following actions to encourage taxpayers and practitioners—primarily those who prepared returns on a computer, but filed them on paper—to file electronically. IRS improved the Free File Alliance Program (as of September 17, 2004, about 3.5 million individual tax returns had been filed electronically via the Free File Alliance, compared to 2.8 million for all of last year, a 26 percent increase); contacted about 12,200 tax practitioners who prepared business returns on computers but filed them on paper to advise them about the benefits of electronic filing; targeted the approximately 8 million individual taxpayers who filed their own paper returns prepared on a computer by mailing them a modified version of Publication 8160E that cites the advantages of electronic filing; spent $11.2 million marketing electronic filing; and made six more forms available for electronic filing. Other major electronic filing initiatives could lead to more individual electronically filed tax returns starting in 2005. However, IRS did not expect the following initiatives to dramatically increase electronic filing in 2004 because taxpayers and practitioners will need time to adjust their behavior. 
The modernized E-file program, which, for the first time, allows electronic filing of corporate tax returns, could lead to more individual tax returns being filed electronically. According to IRS officials, some tax practitioners reported they would not file electronically until they could do so for both individual and corporate tax returns, saying it did not make business sense to file tax returns two different ways. The E-services program, offered to tax practitioners who have filed at least 100 electronic tax returns, gives them the ability to conduct business with IRS electronically, such as electronic account resolution and transcript delivery, 24 hours a day, 7 days a week. Despite these initiatives, IRS does not expect to reach its 2007 goal for electronic filing. However, because of the potential for cost savings, we continue to see value in such initiatives. IRS exceeded its 2004 telephone service goal for access to customer service representatives (CSRs) and has improved in this area since 2001. However, the accuracy of CSR answers to tax law questions declined. IRS initiated two pilots in 2004 to help assess options for improving its telephone service. IRS received 84 million calls on its toll-free telephone lines in 2004 through mid-July. Figure 3 shows that almost half of those calls were from callers trying to obtain information on the status of their tax refunds; the rest were primarily account or tax law questions. Figure 4 shows how the calls were handled. IRS’s automated service handled 30 million calls, and CSRs handled 24 million. The rest of the calls came in after business hours, were transferred, were disconnected, or the caller hung up before receiving service. 
As table 1 shows, compared to last year, access to CSRs continued to improve, the average time taxpayers waited for CSRs remained stable, and the accuracy of CSR responses to account questions remained stable, while the accuracy of CSR responses to tax law questions declined. Unlike for calls answered by CSRs, IRS does not have a quality measure for its automated telephone services. In our report on IRS’s performance measures, we recommended that IRS develop a customer satisfaction survey to measure approval of the automated service. At the time, the IRS Commissioner responded that he agreed that measuring customer satisfaction for automated service was important. According to IRS officials, the needed computer programming changes were not made for the 2004 filing season but are in the queue to be done when programming resources permit. IRS officials continue to attribute the decline in the tax law accuracy rate primarily to changes made to the CSRs’ Probe and Response (P&R) Guide, a publication that CSRs use to help them answer taxpayers’ tax law questions. Last year, we reported that IRS attributed the decline in the tax law accuracy rate to a new format for the P&R guide, among other factors. At that time, an IRS official believed that after CSRs became familiar with the guide, the problem would be resolved, and IRS continued using the new format. IRS began to address the problems with the P&R guide during the 2004 filing season. For example, CSRs told us that they attended a meeting of managers and CSRs in March 2004 to identify the problems with the guide and develop an action plan to correct them. According to IRS officials, IRS tested changes to the guide at the St. Louis, Missouri, and Cleveland, Ohio, call sites in June 2004. In addition, IRS has a written plan with deadlines for testing the guide for the 2005 tax filing season. 
The new guide was to be made available in hard copy by October 1, 2004, and was to be used for training employees before the start of the 2005 filing season. IRS initiated two pilot programs in 2004 to assess options for improving its toll-free telephone services. IRS piloted having a contractor answer tax law questions instead of IRS employees to determine (1) whether a contractor could deliver an equal or superior level of service and (2) the public’s perception regarding being assisted by someone other than IRS employees. IRS routed 10 percent of the tax law calls received on its toll-free lines to a contractor for 60 days (February through April 2004). With respect to the accuracy of tax law responses provided to taxpayers, the contractor’s performance was about half that of IRS’s—44.6 percent versus 82.46 percent, respectively. IRS attributed the contractor’s performance to a longer-than-expected learning curve. Also, taxpayers raised concerns regarding privacy, although no confidential information is shared in answering tax law questions. In September 2004, IRS officials decided not to go forward with further testing. The second pilot was for contact recording, which involves recording all telephone contacts between taxpayers and CSRs and, for 10 percent of the calls, also capturing computer screen displays accessed by the CSR. It is intended to enable supervisors to provide CSRs with more complete feedback on their performance. Figure 5 illustrates the process for contact recording. IRS conducted the contact recording pilot from January through April 2004 at three call sites. According to IRS officials and our interviews with CSRs, CSRs liked being able to hear and see how they handled calls. In September 2004, IRS officials told us that they had decided to implement contact recording at all call sites by the end of the 2005 filing season. 
The total number of taxpayers visiting IRS walk-in sites continued to decrease while those having their returns prepared at volunteer sites increased. Available data raise questions, however, about the quality of services provided at both walk-in and volunteer sites. IRS has initiatives under way intended to improve the quality of data, though implementation may not be ready for the 2005 filing season. Based on the data obtained from IRS and shown in figure 6, the total number of taxpayers seeking assistance at IRS sites declined an average of over 8 percent per year between 2001 and 2004. IRS officials attributed the overall decrease to taxpayers’ use of more convenient means to obtain services, such as IRS’s toll-free telephone lines and Web site. Taxpayers seeking return preparation assistance at walk-in sites decreased an average of 26 percent per year between 2001 and 2004. In contrast, since 2001, the number of taxpayers seeking return preparation assistance at volunteer sites increased an average of 19 percent per year. During the 2004 filing season, taxpayers had over five times more returns prepared at volunteer sites than at IRS walk-in sites. These divergent trends reflect IRS’s strategy to shift return preparation to sites staffed by volunteer and community-based coalitions that are overseen by IRS. IRS has encouraged the shift by advertising the locations of these sites. The shift of taxpayers from walk-in to volunteer sites is important because it has transferred some time-consuming services, such as return preparation, from IRS to volunteer sites. It also enabled IRS to shift more taxpayers to its telephone and Web site services, allowing it to concentrate on services at walk-in sites that only IRS can provide, such as account assistance or compliance work (see app. 2). 
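The multiyear rates cited above (for example, a decline averaging over 8 percent per year between 2001 and 2004) can be reproduced with a compound average annual change calculation. The sketch below is illustrative only; the taxpayer counts are hypothetical, not figures from this report.

```python
# Illustrative sketch (hypothetical counts, not IRS data): how an
# "average percent change per year" figure over a multiyear span
# can be computed as a compound annual rate.
def avg_annual_change(start: float, end: float, years: int) -> float:
    """Compound average annual rate of change, as a percentage."""
    return ((end / start) ** (1 / years) - 1) * 100

# Hypothetical counts for 2001 and 2004 (3 elapsed years):
rate = avg_annual_change(10_000_000, 7_700_000, 3)
print(f"{rate:.1f} percent per year")  # → -8.3 percent per year
```

A decline appears as a negative rate; the compound form avoids overstating the change that a simple arithmetic average of year-over-year drops can produce.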
While shifting taxpayers from IRS walk-in sites to volunteer sites has the advantage of freeing up some IRS resources, IRS has limited performance measures and data on the quality of tax law assistance, account assistance, timeliness, and return preparation provided at either type of site. We have noted in prior filing season reports that this raises concerns about quality and oversight. Tax Law Assistance: Both TIGTA and IRS reported that IRS had not reached its goal of 80 percent accuracy at the walk-in sites they visited. Since the sites were selected judgmentally, the results cannot be projected to all sites. Account Assistance: During the 2004 filing season, IRS did not measure account assistance accuracy at its walk-in sites. According to IRS officials, the focus on tax law accuracy diverted staffing resources from gathering data on account assistance accuracy. Timeliness: IRS also did not measure timeliness, although GAO has recommended that IRS re-adopt a timeliness measure for its walk-in sites, stating that the presence of a quality measure should provide balance and a disincentive for employees to ignore quality in favor of timeliness. Return Preparation: The utility of the current measure for accuracy of return preparation assistance provided at walk-in and volunteer sites is limited, because it narrowly defines accuracy. It counts a return as accurate only if no calculation errors or other obvious factual errors, such as omitted or inconsistent data, are identified on the return. In addition, the results of recent TIGTA audits raise questions about the accuracy of return preparation assistance at both walk-in and volunteer sites. IRS has initiatives under way intended to better measure the quality of key services at walk-in and volunteer sites. 
Key parts of both the walk-in and volunteer site initiatives that were scheduled for implementation in 2005 have experienced delays, have important details still to be determined, and may not be implemented on schedule. The initiative for walk-in sites has the following four components. Embedded Quality provides a standardized checklist for managers to measure the accuracy, professionalism, and timeliness of employee responses to taxpayers. Initially under Embedded Quality, a manager will directly observe the employee/taxpayer interaction. However, this could yield biased data, because employees will know they are being observed, which could influence their behavior. Consequently, Embedded Quality data gathered by direct observation may not be representative of true performance. Contact Recording will replace direct observation as the method for gathering data on Embedded Quality. Under contact recording, the IRS employee/taxpayer interactions will be recorded and a random sample assessed using the Embedded Quality checklist. A pilot was scheduled to begin in August 2004; however, according to IRS, as of September 2004 the Department of the Treasury had not yet approved the pilot. It is unlikely the pilot will begin until January 2005. Since the pilot and evaluation are planned to take 3 months, it is also unlikely that contact recording will be fully implemented until well into the 2005 filing season. Q-Matic is an automated system that keeps track of such things as customer wait time, assistance provided, and staff time used. As of September 2004, Q-Matic implementation was proceeding on schedule. Performance-based individual learning (E-Learning) is a training component that will be used to identify and address deficiencies. Until contact recording replaces direct manager observation as the means of gathering data, IRS may have biased data on the quality of its walk-in services. 
Such biased data has significant limitations for drawing conclusions about performance and for making comparisons to other years. This in turn creates difficulties for managers trying to identify problems and make improvements to service. Furthermore, until Embedded Quality and contact recording are implemented together for a full filing season, assessments of quality at walk-in sites will be based on two different data collection methods, thus limiting comparisons. At volunteer sites, IRS plans to implement the following: Quality Assurance is an initiative under which IRS, working with volunteer and community-based coalitions, is developing quality standards and a means of monitoring the quality of return preparation services. This is consistent with an earlier recommendation we made for IRS to develop performance measures for volunteer sites to help ensure that taxpayers were receiving an adequate level of service. IRS planned to implement the initiative for the 2005 filing season. However, as of September 2004, IRS had missed its milestones for developing guidelines for monitoring volunteer sites, and was still developing and revising its implementation plans, revising its schedule, and determining important details, such as how and when volunteer sites will be monitored and how quality will be measured. We are concerned that the Quality Assurance initiative may not be implemented in time to measure performance for the 2005 filing season. As with IRS’s initiative for its walk-in sites, until Quality Assurance is fully implemented, IRS will continue to have limited and potentially unreliable information on the quality of return preparation at volunteer sites for the 2005 filing season. As a result, IRS officials will have less information available when making decisions about the role of volunteer sites and how IRS should support them. 
IRS’s Web site is important because it provides taxpayers and tax practitioners with customer service without their having to directly contact IRS employees. Although overall Web site usage increased over last year, we have some concerns about IRS’s performance answering tax law questions. IRS’s Web site usage increased in 2004, continuing a trend, as shown in table 2. Further, there was extensive use of the “Remember Your Advanced Child Tax Credit” feature that was new in 2004. Overall, we found that IRS’s Web site was user-friendly and generally easy to access and use. For example, based on our knowledge of the type of information taxpayers look for, we found (1) no broken links or outdated or inconsistent data; (2) facts and information were logically arranged and easy to obtain; (3) with few exceptions, search functions guided us to appropriate forms, publications, and information; and (4) response time was quick. However, we still have concerns about the feature for answering tax law questions. Two independent assessments, by Keynote and Brown University’s Center for Public Policy, confirmed our observations on IRS’s Web site. Keynote, an independent rater of Internet performance, reported that IRS’s site performed very well. Keynote reported that, in average time to download IRS’s Web site home page, IRS was in the top 10 of the 40 government agencies measured for 12 of 15 weeks during the filing season. It also reported that, during the filing season, IRS’s response time was consistent with that of the other organizations being measured. Finally, Keynote reported that IRS’s success rate (being able to access a desired location on the Web site) was always 99 or 100 percent for the filing season. Brown University’s Center for Public Policy rated IRS’s Web site 5th out of 60 federal government Web sites in providing service to citizens. 
The quality of IRS’s Electronic Tax Law Assistance (ETLA) feature, which enables taxpayers and practitioners to ask tax law questions via the Internet and receive an e-mail response from IRS, has declined. Usage has also declined, apparently by design. IRS did not meet its 2004 goal to respond to e-mail questions within 2 business days. On average, IRS responded to e-mail questions in over 3 business days and reached the 2-business-day goal in only 6 of the 27 tax law categories. Nor did IRS meet its accuracy goal. IRS’s performance data showed that IRS answered about 64 percent of the e-mail questions accurately, compared to its goal of 78 percent. IRS did not meet its goals because of a mismatch between the number of questions from taxpayers and the staff available to respond to them. At the beginning of the 2004 filing season, IRS increased the prominence of the ETLA feature. However, an IRS official told us that this resulted in the site receiving more questions than staff could handle. The official said that, in response, IRS moved the ETLA feature to a less prominent position on the Web site. In fact, IRS does not expect taxpayers to be aware of the ETLA feature in its current location unless they stumble upon it while looking for other information on IRS’s Web site. As a result, the ETLA feature was used significantly less this year than last. In 2004, taxpayers and practitioners submitted about 90,000 tax law and procedural questions via the Web site, down from 153,000 in 2003, a 41 percent decrease. As of October 2004, IRS officials had decided not to move the ETLA feature to a more prominent location. Although the number of questions received from taxpayers via the Web site is small compared to the number received over the telephone, providing accurate responses to these questions is particularly important. 
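The 41 percent decline in ETLA usage cited above is a simple percent change; a minimal check, using the rounded question counts from the text:

```python
# Percent change in ETLA questions from 2003 to 2004, using the
# rounded figures cited in the text (about 153,000 down to 90,000).
def pct_change(old: float, new: float) -> float:
    return (new - old) / old * 100

drop = pct_change(153_000, 90_000)
print(f"{drop:.0f} percent")  # → -41 percent
```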
Not only do inaccurate and untimely responses disappoint the taxpayers asking the questions, IRS also runs the risk of widespread dissemination of inaccurate e-mail responses to other taxpayers. Achieving timeliness and accuracy goals depends, in part, on recognizing that decisions about the prominence of the feature, which affect taxpayer demand, and decisions about staffing are related. If IRS is unwilling to devote significant staff to answering tax law questions, then the feature cannot be prominent on the Web site. If IRS believes the feature is worth making prominent, then there are implications for staffing. For the 2004 filing season, IRS’s decisions failed to recognize this relationship. IRS continued to offer new services via its Web site. New services are popular with taxpayers, as shown by the increasing use of features such as “Where’s My Refund?” in table 2. In 2004, IRS added the following customer service features for taxpayers and tax practitioners: “Remember Your Advanced Child Tax Credit,” which allows taxpayers to check the amount of their child tax credit, and “E-Services,” a suite of Internet services for tax practitioners discussed previously in the processing section of this report. IRS improved its filing season performance this year compared to last. More importantly, in many areas the improved performance is part of a trend that IRS is sustaining over time. IRS’s filing season performance is important because it affects over 100 million taxpayers. Taxpayers want a quick turnaround on their refunds; they want easy access to IRS’s telephone assistors if they have questions; and they want to instantly download forms and publications when they need them. The improvements we found are the result of systematic, long-term efforts by IRS. 
Processing results have improved and resources have been saved because of IRS’s promotion of electronic filing, telephone access has improved because IRS implemented new call routing technology, and labor-intensive walk-in assistance is decreasing because of improvements to alternative services. IRS should be commended for its efforts to improve service. At the same time, however, we have identified several areas that present opportunities for further improvement. IRS has limited performance measures and data with which to assess the quality of services at walk-in and volunteer sites, primarily because its initiatives for doing so are only partially implemented. IRS deserves credit for planning initiatives to address these data limitations. However, we are concerned that until the walk-in initiative is fully implemented, IRS will have biased quality data on walk-in service. Furthermore, the history of delays and incomplete plans for the walk-in and volunteer site initiatives leaves us unsure about when IRS will fully implement them. Until IRS fully implements these initiatives and gathers data on quality, it may not be able to effectively monitor and improve performance at its walk-in sites or volunteer sites and, as a consequence, could be risking its credibility among taxpayers who use the sites and the community-based coalitions that prepare returns at volunteer sites. Finally, IRS failed to meet its goals for accuracy and timeliness of e-mail responses to tax law questions, at least in part, by failing to match taxpayer demand for the feature with staffing. The risk associated with providing inaccurate e-mail responses may be high because of the potential for widespread dissemination. 
To address problems with the data for assessing the quality of services at IRS walk-in and volunteer sites, we recommend that the Commissioner of Internal Revenue direct the appropriate officials to (1) recognize and disclose the limitations of the Embedded Quality performance data that will be obtained by direct management observation in 2005 when interpreting and reporting on service quality at walk-in sites; (2) ensure that the causes of delays in implementing improved quality measurement at walk-in sites are addressed; and (3) ensure that the delays in the development and implementation of the Quality Assurance initiative at volunteer sites are addressed. With respect to the Web site’s ETLA feature, we recommend that the Commissioner recognize that decisions about the feature’s prominence and its staffing are related. The Commissioner of Internal Revenue provided written comments in a November 8, 2004, letter (see app. III). The Commissioner noted that the 2004 filing season was one of the best ever, with improved telephone service, timely and effective return processing, substantial increases in electronic filing, and a successful shift in the number of tax returns prepared from IRS walk-in sites to volunteer sites. The Commissioner said he appreciated that our report recognized IRS’s achievements for this filing season and over the past few years as well. The Commissioner agreed with the importance of recognizing and disclosing the limitations of performance data obtained by managers directly observing employees in 2005. He also said that IRS has communicated the limitations in briefings with all levels of management and outside stakeholders, and will continue to do so. However, the Commissioner differed with our assessment of the extent of the limitations in the data. 
While he agreed that there may be some bias in the data because employees know that their managers are observing them, he disagreed with our subsequent finding that data obtained through this method may not be representative of true performance. However, as our report indicates, because of the potential for bias and absence of other data, we cannot determine if the observed performance is representative or not. Further, according to IRS officials, the accuracy rates compiled by the quality review staff and managerial reviews cited by the Commissioner are not based on statistically representative samples. The Commissioner agreed with the recommendation to ensure that causes of delays in implementing improved quality measurement at walk-in sites are addressed. He said that while the delay in the implementation of contact recording as a means of collecting embedded quality data is attributable to the lengthy approval process, IRS expects to begin the contact recording pilot by January 2005 and is on target for post-pilot initial implementation for the last quarter of fiscal year 2005. Regarding volunteer sites, the Commissioner also agreed with the recommendation to implement quality initiatives in a timely manner and address delays. The Commissioner agreed with the intent of our recommendation about the ETLA feature, but disagreed with our assessment of the cause. He agreed that it is important to recognize the relationship between the prominence of the feature and staffing, but disagreed that the decline in ETLA performance is attributable to inadequate recognition of this relationship. He stated that the ETLA feature was inadvertently placed in a more prominent location on IRS’s Web site, thus creating unexpected demand. However, inadvertent placement suggests that the IRS needs to be more deliberative in recognizing the impact that placement of this feature can have on demand and staffing. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means, and to the Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Joanna M. Stamatiades, Assistant Director. Other major contributors are acknowledged in appendix IV. If you have any questions about this report, contact me at (202) 512-9110. As table 3 shows, the Internal Revenue Service (IRS) nearly met or exceeded seven of the eight processing performance goals in 2004. For four measures (i.e., deposit error rate, deposit timeliness, refund interest paid, and productivity), IRS exceeded its goals. For two of the measures, IRS met its goals. For the remaining two, IRS missed its goals by a statistically significant amount. In the case of refund timeliness, however, the point estimate of 98.2 percent nearly met the goal of 98.4 percent. In the case of letter error rate, the point estimate was 7 percent compared to the goal of 6.2 percent, or 13 percent greater than the 2004 goal. Comparing actual 2004 performance to 2003 shows that IRS’s performance improved or remained about the same for six of the seven measures that could be compared. Table 3 also shows that IRS’s processing performance in 2004 generally improved in comparison to 2002 and 2001 for the measures that could be compared. IRS’s performance during the filing season has significant budgetary implications because IRS spends thousands of staff years on its key filing season activities. 
used for key activities related to processing individual tax returns between fiscal years 1999 and 2003. The number of staff years used for key processing activities, such as data transcription and correcting errors, decreased over 17 percent during this period. Beginning in fiscal year 2001, IRS separated the processing of individual and business tax returns and began consolidating paper processing centers. IRS officials expect additional savings when IRS further consolidates paper processing operations in Memphis, Tennessee, in 2005. The source of data for this appendix is IRS’s time and attendance systems, the output of which is used to calculate IRS’s payroll expenses. IRS reports these payroll expenses annually in its Statement of Net Cost, the reliability of which GAO tests as part of our audits of IRS’s financial statements. These audits concluded that, for the fiscal years ended September 30, 2000, through 2003, IRS’s Statement of Net Cost was reliable. See GAO, Financial Audit: IRS’s Fiscal Years 2004 and 2003 Financial Statements, GAO-05-103 (Washington, D.C.: Nov. 10, 2004). Therefore, the data are likely to be reliable for showing trends over time. Data for fiscal year 2004 were not available. Table 5 shows that IRS has consistently directed over 8,000 staff years to CSRs answering toll-free telephone calls that are not routed to automated services. The total staff years used for walk-in service increased slightly between 2001 and 2003, from 2,121 in fiscal year 2001, to 2,208 in fiscal year 2002, and 2,256 in fiscal year 2003. However, the time IRS spent directly on return preparation assistance at walk-in sites during the filing season has decreased significantly. As the demand for walk-in assistance has declined, IRS has assigned walk-in staff other responsibilities, such as working on compliance cases. 
Figure 7 shows that the direct Full-Time Equivalents spent (not counting overhead) on return preparation services decreased 62 percent between 2001 and 2004. At the same time, IRS reduced its reliance on compliance staff at walk-in sites. For years, IRS detailed staff from its compliance functions, such as Examination and Collection, to help provide walk-in assistance during the filing season. IRS has now limited the number of such details. IRS reduced the number of compliance Full-Time Equivalents detailed to assist at walk-in sites from 244 in the 2001 filing season to 9 in the 2004 filing season. In addition to those named above, Tiffany Brown, James Cook, Larry Dandridge, Evan Gilman, John Lesser, Karen O’Conor, Neil Pinney, and Amy Rosewarne made key contributions to this report. 
Most taxpayers have their only contact with IRS during the filing season, with tens of millions filing their returns, getting refunds, and seeking assistance by calling or visiting IRS's offices or Web site. GAO was asked to assess IRS's performance in 2004 relative to goals and prior years' performance as well as initiatives or other factors that significantly affected performance for the following areas: (1) the processing of paper and electronic returns, (2) telephone service, (3) walk-in service, and (4) Web site service. During the 2004 filing season, IRS met many of its performance goals and continued a trend of improvement in recent years. However, IRS did not improve in all dimensions of its filing season services and lacks sufficient data to evaluate quality in others. IRS processed returns and issued refunds smoothly. The proportion of returns filed electronically is up to 47 percent. Despite this achievement and numerous initiatives to increase electronic filing, IRS does not expect to reach its long-term goal of having 80 percent of all individual tax returns filed electronically by 2007. A higher percentage of taxpayers was able to reach IRS assistors by telephone than last year and the accuracy rate for providing taxpayers with information about their accounts remained stable. However, the accuracy rate for answering tax law questions declined to 2001 levels. Consistent with IRS's strategy, the number of taxpayers visiting IRS walk-in sites declined, while the number having tax returns prepared at volunteer sites increased. Finally, although IRS continued to expand its Web site services, the site's feature for answering tax law questions raises some concerns. Despite the 2004 improvements, IRS has opportunities for further service improvements. For example, IRS has limited data with which to assess the quality of key services at its walk-in sites and sites staffed by volunteers. 
Although IRS has initiatives under way to measure quality at both types of sites, the initiatives have been delayed and important details have not yet been determined, which may undermine IRS's efforts to improve services in this area. In the meantime, some of IRS's quality data is likely to be biased. Until IRS fully implements its initiatives and gathers data on quality, it will have difficulty monitoring and improving performance at its walk-in sites and volunteer sites.
The U.S. election system is highly decentralized and relies on a complex interaction of people, processes, and technology. Voters, local election jurisdictions (which number over 10,000), states and territories, and the federal government all play important roles in the election process. The process, however, is primarily the responsibility of the individual states and territories and their election jurisdictions. As we reported in our 2006 testimony, states and territories have considerable discretion in how they organize the elections process; this is reflected in the diversity of procedures and deadlines that states and jurisdictions establish for voter registration and absentee voting. Furthermore, these states and jurisdictions use a variety of voting techniques, from paper ballots to faxes and e-mails. We also reported that the voter is ultimately responsible for being aware of and understanding the absentee voting process and taking the actions necessary to participate in it. The UOCAVA established that members of the military and their dependents of voting age living away from their legal residences (in or outside the United States) and American citizens who no longer maintain a permanent residence in the United States are eligible to participate by absentee ballot in all federal elections. According to DOD, the act covers more than 6 million people. 
Executive Order and DOD guidance related to the act include the following: Executive Order 12642, dated June 8, 1988, made the Secretary of Defense, or his designee, responsible for carrying out the federal functions under UOCAVA, including (1) compiling and distributing information on state absentee voting procedures, (2) designing absentee registration and voting materials, (3) working with state and local election officials, and (4) reporting to Congress and the President after each presidential election on the effectiveness of the program’s activities (including a statistical analysis of UOCAVA voters’ participation). DOD Directive 1000.4, updated April 14, 2004, assigned the Office of the Under Secretary of Defense for Personnel and Readiness responsibility for administering and overseeing the program, and it established the FVAP to manage the program. In 2006, FVAP officials told us that they were authorized a full-time staff of 13 and had a fiscal year budget of approximately $3.8 million. FVAP facilitates the absentee voting process for UOCAVA voters; its mission is to (1) inform and educate U.S. citizens worldwide about their right to vote, (2) foster voter participation, and (3) enhance and protect the integrity of the electoral process at the federal, state, and local levels. FVAP also, among other things, provides training opportunities for Voting Assistance Officers (service, State Department, and overseas citizen organization officials who carry out the implementation of their respective voting assistance programs); prescribes, coordinates, and distributes voting materials, such as the Federal Post Card Application (the registration and absentee ballot request form for UOCAVA voters); and provides for alternatives to regular mail, including Express Mail and the use of electronic solutions. The Election Assistance Commission, which was established by the Help America Vote Act of 2002, also contributes to the absentee voting process. 
The act specifically established the Commission as a national clearinghouse for election information and procedures and assigned it responsibility for developing voting system guidelines for the entire election process. The act also specifies that the development of voluntary voting system guidelines should be informed by research and development in remote access voting, including voting through the Internet, and the security of computers, networks and data storage. In 2005, the Commission issued guidelines that, among other things, addressed gaps in the security measures of prior standards. However, these guidelines do not comprehensively address telecommunications and networking services or their related security weaknesses, such as those related to the Internet. The act also amended UOCAVA to require states to report to the Commission, after each regularly scheduled general election for federal office, on the aggregate number of (1) absentee ballots transmitted to absentee uniformed services voters and overseas voters for the election and (2) ballots returned by those voters and cast in the election. The Commission collects this information through its biennial state surveys of election data. DOD, the Commission, and organizations representing UOCAVA voters have noted that these voters may effectively become disenfranchised because the multistep process for voting by absentee ballot—which relies primarily on mail—can take too long, especially for mobile servicemembers and overseas citizens or those deployed to or living in remote areas. Congress and DOD have taken action to facilitate the use of alternatives to mail, including electronic means such as fax, e-mail, and the Internet. Figure 1 shows (1) the laws designed to facilitate the use of electronic capabilities for UOCAVA voters and (2) some of DOD’s efforts, either voluntary or in response to a statute, to provide electronic capabilities to these voters during fiscal years 2000 through 2007. 
FVAP stated that it implemented the Voting Over the Internet project in 2000 as a small-scale pilot to provide military personnel and their dependents and overseas citizens covered under UOCAVA the ability to securely register to vote, request and receive ballots from local election officials, and vote via the Internet. DOD voluntarily developed the project as a proof of concept for Internet voting. This project enabled 84 voters to vote over the Internet—the first time that binding votes were cast in this manner. While the project demonstrated that it was possible for a limited number of voters to cast ballots online, DOD’s report concluded that security concerns needed to be addressed before it could expand remote (i.e., Internet) voting to a larger population. In 2001, Congress noted that the Voting Over the Internet project had demonstrated that the Internet could be used to enhance absentee voting. To continue the examination of a secure, easy-to-use Internet voting system as an alternative to the regular mail process, Congress mandated, in the NDAA for Fiscal Year 2002, that DOD conduct a large-scale Internet-based absentee voting demonstration project to be used for the 2002 or 2004 federal election. DOD responded to this mandate by creating the Secure Electronic Registration and Voting Experiment (SERVE) for Internet-based absentee registration and voting; SERVE used a system architecture similar to the one used for the Voting Over the Internet project. However, as we previously reported, a minority report published by four members of the Security Peer Review Group—a group of 10 computer election security experts that FVAP assembled to evaluate SERVE—publicly raised concerns about the security of the system because of its use of the Internet. 
The four members suggested that SERVE be terminated because potential security problems left the information in the system vulnerable to cyber attacks that could disclose votes or personal voter information. Furthermore, they cautioned against the development of future electronic voting systems until the security of both the Internet and the world’s home computer infrastructure had been improved. Because DOD did not want to call into question the integrity of votes that would have been cast via SERVE, the Deputy Secretary of Defense terminated the project in early 2004, and DOD did not use it in the November 2004 election. The points raised in these security reviews are consistent with concerns we raised in our 2001 reports. We found that broad application of Internet voting presented formidable social and technological challenges. In particular, we noted that challenges to remote Internet voting involve securing voter identification information and ensuring that voters secure the computer on which they vote. We also reported that because voting requires more stringent controls than other electronic transactions, such as online banking, Internet voting systems face greater security challenges than other Internet systems. Furthermore, we found that remote Internet voting was recognized as the least protective of ballot secrecy and voter privacy and was most at risk from denial of service and malicious software, such as computer viruses. While opinions of groups considering the pros and cons of Internet voting were not unanimous, we found that they agreed in principle on major issues, including considering security to be the primary technical challenge for Internet voting. Because of serious concerns about protecting the security and privacy of the voted ballot, we concluded that Internet-based registration and voting would not likely be implemented on a large scale in the near future. In the Ronald W. 
Reagan NDAA for Fiscal Year 2005, Congress amended the requirement for the Internet-based absentee voting demonstration project by permitting DOD to delay its implementation until the first federal election after the Election Assistance Commission developed guidelines for the project. The conference report for the act stated that, although Congress recognized the technical challenges of Internet voting, SERVE was an important prototype that should not be abandoned. Since the 2000 federal election, DOD has established several initiatives as alternatives to the by-mail process to facilitate voter registration and ballot request, receipt of a ballot, and submission of a voted ballot by electronic means—such as fax and e-mail—for UOCAVA voters. These include the Electronic Transmission Service’s fax to e-mail and e-mail to fax conversion enhancement (hereafter referred to as the e-mail to fax conversion feature); the 2004 Interim Voting Assistance System (IVAS); the 2006 Integrated Voting Alternative Site (also called IVAS); DOD’s online voting assistance guidance; and online forms to register, request, receive, or submit ballots. While these efforts provide valuable guidance, services, and information to UOCAVA voters, some of them had limited participation rates or exhibited weaknesses in security, consistency, and accuracy that might hinder their use and effectiveness. DOD officials have acknowledged these weaknesses and began taking action to address them during the course of our review. The Electronic Transmission Service is a fax forwarding system, established by FVAP in 1990, that allows UOCAVA voters and state and local election officials, where permitted by law, to fax election materials to each other. These voters and election officials can use the service without paying long-distance fees for faxing out of state, because DOD provides it through a toll-free line. 
In 2003, after discussions with Mississippi state officials and a Mississippi National Guard unit, FVAP added the e-mail to fax conversion capability to its electronic transmission service. These officials asked FVAP for help in transmitting voting materials because, by state law, Mississippi allowed only faxing as an electronic means of transmission—a capability that the Guard unit would not have while it was deployed to Iraq. The e-mail to fax conversion feature allows UOCAVA voters who do not have access to a facsimile machine to send ballot requests, via e-mail, to DOD’s Electronic Transmission Service, which converts e-mail attachments to faxes and sends them to local election officials. In return, local election officials can send ballots to the Electronic Transmission Service conversion feature by fax; the conversion feature then converts the fax to an e-mail attachment and sends it to the voter. FVAP stated that it notifies states and territories whenever it converts an e-mail containing voting materials to a fax, or vice versa, so that the state or territory can decide whether or not to accept it. Table 1 shows Electronic Transmission Service activity for the conversion feature for 2004 and 2006. Although FVAP has made progress in assisting servicemembers to transmit voting materials with the e-mail to fax conversion enhancement, FVAP officials told us they have not fully complied with certain information security requirements in the Interim DOD Information Assurance Certification and Accreditation Process. This guidance requires DOD components, among other things, to implement controls and to certify and accredit such e-mail systems. FVAP officials initially stated that the information security guidance did not apply to the conversion feature; they saw it as an enhancement to the original Electronic Transmission Service’s fax system. 
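The two-way conversion workflow described above can be illustrated with a minimal sketch. The class and method names below are hypothetical and do not represent DOD's actual implementation; the sketch only models the routing logic the report describes: e-mailed materials converted to faxes for election officials, faxed ballots converted to e-mail attachments for voters, and the state notified of each conversion.

```python
# Hypothetical sketch of the e-mail-to-fax conversion routing described
# in the report; names and structures are illustrative only.
from dataclasses import dataclass, field


@dataclass
class FaxJob:
    to_fax: str          # local election office fax number
    pages: list          # rendered pages of the e-mail attachment
    notified_state: str  # state notified of the conversion


@dataclass
class ConversionService:
    notifications: list = field(default_factory=list)

    def email_to_fax(self, attachment_pages, office_fax, state):
        # A voter's e-mailed ballot request becomes a fax job for the
        # local election office; the state is notified so it can decide
        # whether to accept the converted material.
        self.notifications.append((state, "email->fax"))
        return FaxJob(to_fax=office_fax, pages=attachment_pages,
                      notified_state=state)

    def fax_to_email(self, fax_pages, voter_email, state):
        # Reverse path: a ballot faxed by election officials becomes an
        # e-mail attachment sent to the voter.
        self.notifications.append((state, "fax->email"))
        return {"to": voter_email, "attachment": fax_pages}


svc = ConversionService()
job = svc.email_to_fax(["ballot_request.pdf"], "+1-601-555-0100", "MS")
msg = svc.fax_to_email(["ballot.tif"], "voter@example.mil", "MS")
```

Note that this sketch deliberately omits the security and privacy controls (access restrictions, retention handling of personally identifiable information) whose absence from the actual system is the subject of the surrounding discussion.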
During the course of our review, however, FVAP officials said they consulted with officials responsible for DOD’s information assurance certification and accreditation and concluded that the requirements did, in fact, apply. These officials stated that, by the end of fiscal year 2007, they plan to award a contract to obtain services to meet the information security requirements. The FVAP officials further stated that, while they do not have the required documentation—such as risk assessments or certification tests and accreditations—they have taken some measures to ensure security. We note that the statement of work for FVAP’s April 29, 2005, contract for the Electronic Transmission Service recognizes the sensitivity of the data associated with election materials and includes provisions for certain security functions, such as ensuring that adequate steps are taken to prevent unauthorized access or manipulation of the data. Until FVAP performs and documents the security assessments and certifications, however, it has not taken all the necessary measures to secure its system and comply with DOD’s information security requirements. Federal law includes a number of separate statutes that provide privacy protections for certain information. The major requirements for the protection of personal privacy by federal agencies come from two laws: the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. Section 208 of the E-Government Act of 2002 requires agencies, among other things, to conduct privacy impact assessments before developing, upgrading, or procuring information technology that collects, maintains, or disseminates personally identifiable information. DOD developed departmentwide guidance—the DOD Privacy Impact Assessment Guidance—for implementing the privacy impact assessment requirements mandated in the E-Government Act of 2002. 
In this guidance, DOD directs the components to adhere to the requirements prescribed by the Office of Management and Budget (OMB)—Guidance for Implementing the Privacy Provisions of the E-Government Act of 2002. FVAP officials stated that they had not conducted a privacy impact assessment for the Electronic Transmission Service’s e-mail to fax conversion enhancement, but they told us that a privacy impact assessment will be done as part of the previously mentioned contract to meet information security requirements. A privacy impact assessment would identify specific privacy risks associated with the Electronic Transmission Service and help determine what controls are needed to mitigate them. Furthermore, building in controls to mitigate risks could help ensure that transmitted personal information is used only for a specified purpose. FVAP noted that when information is sent by e-mail, the conversion feature retains the following information: full name, fax number, city, state, zip code, and e-mail addresses. FVAP’s Electronic Transmission Service retains this personally identifiable information both to provide transmission verification or confirmation to users and to comply with election document retention requirements under the Civil Rights Act of 1960. In September 2004, just 2 months prior to the election, DOD voluntarily implemented what it reported as a secure electronic system for voters to request and receive ballots—the Interim Voting Assistance System (IVAS)—as an alternative to the traditional mail process. IVAS was open to active duty servicemembers, their voting age dependents, and DOD overseas personnel who were registered in a state or territory participating in the project and enrolled in the Defense Enrollment Eligibility Reporting System—a DOD-managed database that includes over 23 million records pertaining to active duty and reserve military and their family members, retired military, DOD civil service personnel, and DOD contractors. 
DOD had limited IVAS participation to UOCAVA voters who were affiliated with DOD because their identities could be verified in the Defense Enrollment Eligibility Reporting System. Voters obtained their state or territory ballots through IVAS by logging on to a special Web site and then requesting ballots from their participating local election jurisdictions. After the local election officials approved the requests and the ballots were finalized, IVAS notified voters via e-mail that the ballots were available to download and print. DOD reported that 108 counties in eight states and one territory agreed to participate in this 2004 IVAS; however, only 17 citizens downloaded their ballots from the site during the 2004 election. FVAP officials noted that participation was low, in part because this IVAS was implemented just 2 months before the election. FVAP further reported that many states did not participate—for a variety of reasons, including state legislative restrictions, workload surrounding regular election responsibilities, and lack of Internet access. FVAP officials noted that this system, which was maintained through the conclusion of the election, cost $576,000. In September 2006—again, just 2 months before the next general election—FVAP launched a follow-on Integrated Voting Alternative Site, also called IVAS, in response to a June 2006 legislative mandate to reestablish the 2004 IVAS. This 2006 IVAS expanded on the 2004 effort, by providing information on electronic ballot request and receipt options for all UOCAVA citizens in all 55 states and territories. It also provided two tools that registered voters could access through the FVAP Web site, using DOD or military identification, to request or receive ballots from local election officials. As with the 2004 IVAS, local election officials used information in these tools to verify the identity of UOCAVA voters who used them. 
The first tool—called Tool 1—contained a ballot request form only, accessed through DOD’s Web site, which voters could fill out and download to their computers. Voters could then send the downloaded form to the local election officials by regular mail, fax, or unsecured e-mail, per state or territory requirements. FVAP officials reported to Congress that no information was available on the number of Tool 1 users because the department was no longer involved in the process once the voter downloaded the ballot request; it essentially had no visibility into what transpired directly between the voter and the election officials. The second tool—called Tool 2—provided a ballot request and receipt capability for voters, similar to the 2004 IVAS, which also allowed voters to fill out ballot request forms online, send them to local election officials through a secure line, and receive their state or territory ballots from the local election officials through a secured server. Again, no voted ballots were transmitted through IVAS, because it was not designed for that purpose. Instead, absentee voters would return voted ballots outside of IVAS, in accordance with state law. Tool 2 had a tracking feature that showed that 63 voters had requested ballots through the system. Of these, local election officials approved and made their state or territory ballots available to 35 UOCAVA voters. However, of the 35 sent out, local election officials reported that only 8 voted ballots were traced back to the IVAS Tool 2, in part because this IVAS was implemented just 2 months before the election. DOD reported that the total cost for the 2006 IVAS was about $1.1 million, and given that the tools were used only to request or receive ballots for the November 2006 elections, DOD removed the tools from FVAP’s Web site in January 2007. Table 2 compares and provides additional details on the two tools. 
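As an illustration only, the request-approve-download funnel that Tool 2's tracking feature is described as recording (ballots requested, made available, and traced back) can be sketched as a simple state machine; the names below are hypothetical and are not part of the actual IVAS system.

```python
# Hypothetical sketch of a ballot-request tracking funnel like the one
# the report attributes to Tool 2; illustrative names only.
from enum import Enum


class Stage(Enum):
    REQUESTED = 1   # voter submitted a ballot request online
    APPROVED = 2    # local election official approved the request
    DOWNLOADED = 3  # voter retrieved the ballot from the secured server


class BallotTracker:
    def __init__(self):
        self.requests = {}  # voter id -> furthest Stage reached

    def request(self, voter_id):
        self.requests[voter_id] = Stage.REQUESTED

    def approve(self, voter_id):
        if self.requests.get(voter_id) is Stage.REQUESTED:
            self.requests[voter_id] = Stage.APPROVED

    def download(self, voter_id):
        if self.requests.get(voter_id) is Stage.APPROVED:
            self.requests[voter_id] = Stage.DOWNLOADED

    def count(self, stage):
        # Number of requests that reached at least the given stage,
        # the kind of figure reported for Tool 2 (63 / 35 / 8).
        return sum(1 for s in self.requests.values()
                   if s.value >= stage.value)


t = BallotTracker()
for v in ("v1", "v2", "v3"):
    t.request(v)
t.approve("v1")
t.approve("v2")
t.download("v1")
```

This kind of per-stage count is what lets a program report a funnel such as Tool 2's, where fewer ballots are traced at each successive stage.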
Officials in Congress and others have expressed concern that voters could be exposed to a heightened risk of identity theft if they used Tool 1 to send voting materials containing personally identifiable information (including Social Security number, date of birth, and address) by unsecured e-mail. FVAP officials acknowledged in their December 2006 report to Congress that Tool 1 was less secure, but said (1) DOD was providing access to a capability that states already provide, (2) most states and territories required only the last four digits of the Social Security number on ballot requests, and (3) Tool 1 displayed a cautionary statement that voters had to read to go on with the request process; this statement explained the risk associated with e-mailing ballot requests and noted that the government assumed no liability if voters did so. While we confirmed that a cautionary statement related to the transmission of personal data existed for Tool 1, it did not advise voters to remove voting materials stored on their computers after submitting their ballot requests. For example, voters using Internet cafes overseas could have been subject to identity theft if they did not delete their personal information from the computer and a subsequent user gained access to the stored file. FVAP officials acknowledged that users were not advised of the risks of storing personal voting information on their computers, and these officials stated that they will incorporate lessons learned, such as adding a cautionary statement, in any future ballot request system. In addition to these initiatives, DOD also has established the FVAP Web site, which contains information on FVAP programs and links to assist UOCAVA voters in the voting process. Specifically, these links access FVAP’s online guidance, including several versions of FVAP’s biennial Voting Assistance Guide, shown in figure 2. 
This guide tells the UOCAVA voter how to register, request a ballot, receive a ballot, and vote the ballot electronically—including by e-mail or fax—where state or territory law allows this. One link on FVAP’s Web site had a full-text version of the guide, so that a Voting Assistance Officer or other user could download and print the entire guide and use it to provide assistance to absentee voters from various states and jurisdictions. Another link goes to a Web page containing “State-by-State Instructions,” where two additional links—one a PDF guide, the other an HTML version—are provided for each state or territory. This allows voters to read or print only their own state’s or territory’s instructions and to have a choice of formats. Another link goes to the Integrated Voting Alternative Site—this site provides information for the 55 states and territories regarding the electronic ballot request and receipt options available to UOCAVA voters. FVAP’s Web site also has a link to News Releases, which contains updates on changes to the guidance, including changes to state laws that affect UOCAVA voters. Finally, a link goes to FVAP’s Voting Assistance Guide Errata Sheets—this contains changes that have been made to the archived Voting Assistance Guide since its last printing. Our review of the FVAP Web site, however, revealed inconsistencies in some of the information about electronic transmission options that voters could access through different links on the site. 
Our analysis specifically showed that, while not widespread, for 14 of the 55 states or territories, some of the guidance regarding requirements for electronic transmission was inconsistent and could be misleading, as the following examples illustrate: For the state of California, we found that three of the FVAP links correctly stated that only overseas military and overseas civilian voters were eligible to receive or return a ballot by fax; a fourth link, however, did not include this restriction. As a result, military personnel stationed in the United States, but away from their state of residence, might conclude—incorrectly—that they were eligible to vote by fax. FVAP officials acknowledged this discrepancy and, on January 25, 2007, updated the information accessed through the fourth link to reflect the fact that uniformed servicemembers must be residing or deployed overseas to be able to receive and send ballots by fax. For the state of Colorado, we identified a news release that was issued on October 18, 2006, announcing a new initiative to allow uniformed servicemembers deployed outside the United States to request, receive, and return absentee ballots via e-mail. One other FVAP link reflected this change; however, four other links did not capture this change. FVAP officials acknowledged this discrepancy, updated two of the links, and issued an errata sheet on January 22, 2007. FVAP officials did not update a third link—the 2006-2007 Voting Assistance Guide accessed through the publications link on their Web site—stating that it was considered an archive document and was not intended for update. However, DOD did not clearly identify this link as an archived document; as a result, this link could mislead voters who relied on it. FVAP officials later acknowledged that the archived version of the 2006-2007 Voting Assistance Guide could have been labeled better, and eventually deleted this version from their Web site. 
Appendix II provides details on the inconsistencies we found on FVAP’s Web sites for 14 states and identifies the links, along with DOD’s responses regarding each. Under internal control guidance, organizations are to apply policies and procedures consistently. As noted previously, while the inconsistencies were not widespread, the fact that inconsistencies exist at all could lead UOCAVA voters—especially busy voters residing or deployed in remote locations—to rely on incorrect information and therefore adversely affect their ability to vote. Agency officials acknowledged these discrepancies and addressed them during the course of our review. In addition, FVAP administers two online forms: (1) the Federal Post Card Application, which allows absentee voters to register to vote or request ballots; and (2) the Federal Write-in Absentee Ballot, which allows absentee voters to vote even if they have not yet received the absentee ballot they requested from their state or territory. The Federal Post Card Application has been online since 1999, in PDF format, and is postage-free within the U.S. mail system when appropriate markings, provided on FVAP’s Web site, are used. The online Federal Post Card Application allows voters to download a PDF version to their computers to complete, e-mail, print, sign, and send to their local election official via mail. Some state and local election officials we spoke with indicated that the online version of the Federal Post Card Application has many benefits because it is easy to fill out and read, and it provides sufficient space for the voter to write in. A UOCAVA voter can also use the Federal Write-in Absentee Ballot as a backup ballot when the state or territory has not sent a regular absentee ballot in time for the voter to participate in the election. On October 21, 2004, just a few weeks before the national election, FVAP issued a news release announcing the electronic version of the ballot as an emergency ballot. The Ronald W. 
Reagan NDAA for Fiscal Year 2005 amended the eligibility criteria in UOCAVA to allow states and territories to accept the Federal Write-in Absentee Ballot under a broader range of circumstances. Prior to the change, a UOCAVA citizen had to be outside of the United States, have applied for a regular absentee ballot early enough to meet state election deadlines, and not have received it from the state. Under the new criteria, the Federal Write-in Absentee Ballot can be used by military servicemembers and their dependents stationed in the United States, as well as by military personnel, their dependents, and citizens living overseas. The Election Assistance Commission has not yet developed the Internet absentee voting guidelines that it is required by law to develop for DOD’s use in the secure, Internet-based absentee voting demonstration project; as a result, DOD has not moved ahead with the project. Commission officials told us that they have not yet developed the required guidelines because the Commission has been working on other priorities—including standards for electronic voting machines, challenges associated with these machines, and a process for certification and accreditation—and it lacks the resources to work on the Internet absentee voting guidelines or the mandated study of the issues and challenges for Internet technology at the same time. Although the Internet voting study is now underway, the Commission has said that it will not be completed until September 2007, and thus it does not have the results it needs to establish time frames or a plan for developing the guidelines. Regarding the demonstration project, DOD officials stated that they had not taken action to develop this project because the Ronald W. Reagan NDAA for Fiscal Year 2005 requires the Commission to develop the guidelines first. 
DOD officials stated that, in an effort to assist the Commission in developing the Internet absentee voting guidelines, they have provided information on prior Internet voting efforts, along with challenges associated with these Internet voting efforts and views on how to mitigate those challenges. Commission officials stated that they have not developed Internet absentee voting guidelines because the Commission and the organizations that would normally provide assistance to it are directing their constrained resources to other priorities. This includes addressing challenges associated with electronic voting machines and establishing a process for certification and accreditation. Additionally, the Help America Vote Act of 2002 requires the Commission’s Technical Guidelines Development Committee to assist the Executive Director of the Commission in developing voluntary voting system guidelines. The act also requires the Director of the National Institute of Standards and Technology to provide the Development Committee with technical support in developing those guidelines, including research and development related to computer and network security, voter privacy, remote access voting (including voting through the Internet), and voting fraud. Commission officials told us, however, that the Development Committee has not been able to work on Internet absentee voting guidelines for UOCAVA voters because it had other priorities and constraints on its resources. In light of the Development Committee’s low priority for working on the Internet absentee voting guidelines, officials from the Commission asked officials from the National Institute of Standards and Technology to assist with developing the guidelines. However, officials from the National Institute of Standards and Technology said that they could not provide support because they also lacked sufficient resources at the time. 
Commission officials told us that, at the time of our review, the National Institute of Standards and Technology was also using its resources to work with the Development Committee on the current voluntary voting guidelines and would not have sufficient resources to work on Internet absentee voting guidelines until after July 2007. Additionally, Commission officials stated that they were waiting for DOD to provide information that describes the type of system around which the guidelines should be developed. DOD officials, however, stated that they gave the Commission reports that provided the framework for the Internet-based absentee voting system they envisioned. Specifically, these DOD officials told us that they provided the Commission, in 2004, with a report on their 2000 proof of concept for Internet-based voting called “Voting Over the Internet,” and in March 2006, they provided the Commission with an internal DOD document assessing the terminated SERVE project. DOD and Commission officials told us that they had not communicated in depth on the guidelines and the DOD system before our review. To gain a better understanding of the Internet voting environment, in September 2006, the Commission started an Internet voting study as a precursor to developing the Internet absentee voting guidelines. The Help America Vote Act of 2002 required the Commission to conduct this study to determine the issues and challenges presented by incorporating communications and Internet technology into elections, including the potential for election fraud, and to issue a report no later than June 29, 2004. However, the Commission did not meet this reporting date. Commission officials told us that they were unable to complete the study sooner—or even begin it—because of the resource constraints they have worked under since the Commission’s inception, and because they were working on other priorities. 
They noted, for example, that under the act, the Commission was to be established by February 26, 2003, but the Commissioners were not appointed until almost a year later, in December 2003. They also told us that, although 23 employees were allocated to the Commission, it had to build up staff gradually, starting in January 2004, by hiring two employees each month. Accordingly, Commission officials testified in June 2004 that, as a result of these constraints, the Commission was able to meet only some of its mandates, such as developing the 2005 Voluntary Voting System Guidelines. As a result, the Commission was not able to conduct the Internet voting study in a timely manner. Commission officials stated that the Internet voting study, which was underway during the course of our review, includes several case studies to monitor current Internet voting usage and electronic transmission of ballots. The four states participating in this part of the study are Florida, Montana, South Carolina, and Illinois. The study also includes (1) a survey of UOCAVA voters to collect information on their level of interest in electronic voting and (2) a conference to gather states’ experiences on topics such as Internet voting, electronic transmission of ballots, security risks for voting systems, and verification of voters’ identities. Commission officials told us that they plan to issue a final report on the Internet voting study in September 2007. The Ronald W. Reagan NDAA for Fiscal Year 2005 did not establish a deadline by which the Commission was to complete the Internet absentee voting guidelines, and the Commission has not set time frames for itself, primarily because it has been working on guidelines for current voting systems. Additionally, as stated previously, the Commission has not completed the precursor Internet voting study to identify critical issues and challenges such as those related to security and privacy. 
Also, it has not established a plan, in conjunction with major stakeholders like DOD, to develop appropriate guidelines for Internet voting with specific tasks that would address security risks such as those identified in its study and other security evaluations and reports, as well as time frames and milestones. In previous reports, we have noted that leading organizations develop long-term results-oriented plans that involve all stakeholders and identify specific tasks, milestones, time frames, and contingency plans; this practice is also embodied in the underlying principles of the Government Performance and Results Act of 1993. Similarly, without a plan for the UOCAVA Internet absentee voting guidelines—including specific tasks, time frames, milestones, necessary resources, and alternatives—the Commission cannot inform Congress, FVAP, and local election officials when it will meet the mandate to develop the required guidelines. As we previously noted, some technologies may not yet be mature enough to support Internet voting. Therefore, the plan for developing Internet absentee voting guidelines may require an incremental approach that reflects emerging solutions to security and privacy challenges, as well as changing views on acceptable levels of risk and cost. Similarly, DOD has not developed a secure, Internet-based absentee voting demonstration project, as Congress mandated in the Ronald W. Reagan NDAA for Fiscal Year 2005. DOD reported that the principal objective of the Internet-based electronic demonstration project was to assess the use of such technologies to improve UOCAVA participation in elections. The department planned to conduct the project during the first general election for federal office after the Commission has established Internet voting guidelines for the project. However, DOD has not moved forward with the electronic demonstration project because, by law, the Commission must first develop the Internet absentee voting guidelines. 
DOD officials stated, as mentioned previously, that they provided information to assist the Commission in developing the guidelines, and Commission officials acknowledged that, in 2004—the first year of the Commission’s operation—DOD had provided them with a report on “Voting Over the Internet,” DOD’s assessment of its November 2000 Internet-based voting project. DOD also provided the Commission with an internal document that contained information on its SERVE project. However, Commission officials told us that they did not receive the SERVE document until June 2006. This document discussed challenges DOD identified with Internet voting, which included security threats such as computer viruses, malicious insider attacks, and inadvertent errors that could disrupt system performance. In 2001, we also identified several challenges to Internet voting, such as privacy and security. As previously mentioned, we reported that broad application of Internet voting faced formidable challenges, including the difficulty of providing adequate voter privacy—that is, protecting the voter’s ability to cast a ballot without being observed. We further reported that, although not unanimous on all issues, groups considering the pros and cons of Internet voting were in consensus in identifying security as the primary technical challenge for Internet voting. We also reported that, because of the security risks involved, Internet voting would not likely be implemented on a large scale in the near future. Moreover, DOD officials told us that even if the Commission had developed Internet voting guidelines at the time of our review, DOD would not have been able to develop a secure, Internet-based, electronic demonstration project in time for the 2008 presidential election. DOD officials said that—depending on the Internet voting guidelines provided by the Commission—the final system design, full development, testing, and deployment phases would take an estimated 24 to 60 months. 
Furthermore, deployment of any system requires participation of the military services, which have many additional, competing priorities that may cause delays in deployment. Given that less than 17 months remain before the November 2008 election, FVAP officials said there is insufficient time to advertise and launch the Internet-based electronic demonstration project. We observed that DOD was developing, but had not yet completed, plans to expand the use of electronic voting technology for UOCAVA voters’ use in federal elections through November 2010, as required by the John Warner NDAA for Fiscal Year 2007. DOD officials told us that they anticipated providing the plans to Congress, in accordance with the act, by May 15, 2007. Because electronic voting initiatives for the absentee voting process (fax, e-mail, and Internet) involve numerous stakeholders at the federal level—including DOD and the Commission—as well as the various state and local levels, developing a plan is key. Implementation of new electronic voting initiatives requires careful planning, particularly in light of the remote location of troops, the application of new technology, and the lead time required for implementation. As DOD develops these plans, employing a comprehensive strategic approach that incorporates sound management principles could provide a framework for its efforts. Our analyses of DOD and Commission documents and our interviews—including those with officials from these agencies, organizations representing UOCAVA voters, and state and local election officials—show that DOD did not obtain sufficient stakeholder involvement in planning its recent electronic voting initiatives—the 2004 and 2006 IVAS initiatives. In fact, Commission officials mentioned that DOD’s recent initiatives took a “top down” approach and did not seek input from the Commission or from local jurisdictions during the planning stage. 
DOD officials noted that both the 2004 and 2006 IVAS initiatives were planned, designed, advertised, and implemented just months before those two elections. In the case of the 2006 IVAS, however, the department reported that it developed the system within 79 days of passage of the mandate—June 2006—and noted that it was in fact responsive to that mandate. The Commission and state and local election officials noted that the aggressive schedules for these latest electronic initiatives did not allow sufficient time to enable full participation, training, and dissemination of information on the efforts. Additionally, at the time of our review, DOD officials said they had not yet established interim tasks that address issues such as security and privacy, milestones, time frames, and contingency plans. The principles of sound management used by leading organizations and embodied in the Government Performance and Results Act of 1993 provide a methodology to establish a results-oriented framework for DOD to develop its detailed plans. Such a framework would provide a firm foundation for DOD’s long-term plan for electronic voting initiatives. Some of the key management principles include (1) involving stakeholders when defining the mission and outcomes, (2) identifying specific actions and tasks, such as monitoring and assessing security of the initiatives, (3) developing schedules and time frames for tasks, and (4) evaluating the overall effort, with specific processes to allow for adjustments and changes. Furthermore, as we reported in one of our executive guides, leading organizations plan for a continuous cycle of risk management. This includes determining needs, assessing security risks, implementing policies and controls, promoting awareness, and monitoring and evaluating controls. 
Combined with effective leadership, these principles provide decision makers with a framework to guide program efforts and the means to determine if these efforts are achieving the desired results. In its December 2006 report to Congress on IVAS, DOD stated the following: Development of a long-term strategic plan was necessary to ensure that all related initiatives were effectively integrated, but this was dependent on having sufficient time to assess, improve, and evaluate new or evolving electronic alternatives. Major recommendations for its future electronic voting projects would include, for example, recognizing the variation in state and local laws, procedures, and systems; identifying and mitigating actual and perceived risks, by educating people about risk management practices; and building consensus among key stakeholders. As stated previously, Commission officials told us that, for recent initiatives, DOD did not seek input from the Commission or local jurisdictions during the planning stage of these efforts. Without a proactive, integrated, long-term, results-oriented plan that involves all major stakeholders; includes goals, interim tasks—such as identifying security risks and addressing privacy concerns—milestones, time frames, and contingency plans; and follows the sound management practices used by leading organizations, DOD is not in a position to address congressional expectations to establish secure and private electronic and Internet-based voting initiatives. It is imperative that the 6 million Americans who are covered under the Uniformed and Overseas Citizens Absentee Voting Act have the opportunity to exercise their right to vote—one of the hallmarks of a democratic society. The fact that time is an issue with absentee voting by regular mail has led many to look toward electronic and Internet voting, which represent the next generation of voting technology, as alternatives. 
While these alternatives may expedite the absentee voting process, they are more vulnerable to privacy and security compromises than the conventional methods now in use. Electronic and Internet voting require safeguards to limit such vulnerabilities and prevent compromises to votes from intentional actions or inadvertent errors. However, available safeguards may not adequately reduce the risks of compromise. To date, the Election Assistance Commission has not assessed the risks or possible safeguards for Internet voting, nor has it developed corresponding guidelines that define minimum Internet voting capabilities and safeguards to be considered by the election community. Furthermore, electronic and Internet-based absentee voting can be challenging for UOCAVA voters, who reside at multiple locations across the globe. These voters are also registered to vote in thousands of local jurisdictions across 55 states and territories that employ varying levels of technology—from paper ballots to faxes and e-mail. DOD faces significant challenges in leveraging electronic and Internet technology to facilitate this complex, global absentee voting process. Delays in developing guidelines and a demonstration project have resulted in two presidential elections passing without significant progress in moving toward expanded use of electronic and Internet absentee voting. DOD officials told us it is now too late in the cycle to implement significant changes before the 2008 election. The challenges of coordinating among numerous stakeholders—including DOD, the Commission, and state and local election officials, as well as organizations representing UOCAVA voters—are substantial, and, to date, efforts to involve stakeholders in the planning stage of DOD’s recent initiatives have fallen short. This delay has left an expectation gap between what Congress required and what has been accomplished so far. 
Several steps would have to be taken to overcome these challenges, including better coordination between the Commission and DOD regarding their complementary roles in developing Internet voting guidelines and the mandated demonstration project. Unless the Commission and DOD move in a timely manner to assess the technology risks, develop guidelines that address the risks, coordinate among election stakeholders, and establish and execute prudent plans, they are unlikely to meet the expectations of Congress and military and overseas voters to establish a secure and private electronic and Internet-based UOCAVA voting environment.

To improve the security and accuracy of DOD’s electronic and Internet initiatives, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following four actions:

- Comply with the information security requirements in the DOD Certification and Accreditation Process guidance.
- Incorporate lessons learned, such as those we identified, into plans for future systems, including adding cautionary statements to future ballot request and receipt systems to warn UOCAVA voters to remove personal data from their computers.
- Institutionalize a process to review online UOCAVA guidance to ensure that DOD provides accurate and consistent information to UOCAVA voters.
- Create an integrated, comprehensive, long-term, results-oriented plan for future electronic voting programs that specifies, among other things, the goals to be achieved along with tasks, including identifying safeguards for the security and privacy of all DOD’s voting systems—both electronic and Internet. 
The plan should also specify milestones, time frames, and contingencies; synchronize them with planned development of the Commission’s guidelines for Internet voting; and be developed in conjunction with major stakeholders—including state and local election officials, the Election Assistance Commission, overseas voting groups, and each of the armed services. The plan should also include initiatives that will be undertaken well in advance of federal elections, to allow adequate time for training and dissemination of information on the options available to UOCAVA voters.

To improve the Election Assistance Commission’s efforts to comply with the direction from Congress to develop the Internet absentee voting guidelines, we recommend that the Commission take the following two actions:

- Determine, in conjunction with major stakeholders like DOD, whether the Commission’s 2007 Internet voting study and any other Commission efforts related to Internet or electronic voting are applicable to DOD’s plans for Internet-based voting, and incorporate them where appropriate.
- Develop and execute, in conjunction with major stakeholders—including state and local election officials and DOD—a results-oriented action plan that specifies, among other things, goals, tasks, milestones, time frames, and contingencies that appropriately address the risks found in the UOCAVA voting environment—especially risks related to security and privacy.

In written comments on a draft of this report, DOD concurred with our recommendations to (1) comply with the information security requirements, (2) incorporate lessons learned into plans for future systems—to include adding cautionary statements to warn UOCAVA voters to remove personal data from their computers, (3) institutionalize a process to review online UOCAVA guidance, and (4) create a comprehensive, results-oriented, long-term plan for future electronic voting initiatives. 
The department said that it will contract for services to comply with the information security requirements and will incorporate identified lessons learned into future registration, ballot request, and ballot receipt systems. The department said that it has already streamlined its online guidance by, among other things, eliminating the archived “Publications” version of the Voting Assistance Guide entirely; it will also establish a revised review process for online information. DOD noted that these changes will reduce the possibility of human error and simplify the review and verification process for online information. Finally, DOD stated that it fully supported a long-term, comprehensive plan for future electronic voting projects that would allow sufficient time to involve the major stakeholders, provide training, and disseminate information, and that would ultimately serve UOCAVA voters. The department said it looked forward to working on this multiyear project plan in cooperation with the Election Assistance Commission, the National Institute of Standards and Technology, and other major stakeholders. It further stated that FVAP, the Commission, and the National Institute of Standards and Technology are scheduling a meeting to lay the groundwork for the plan. DOD’s comments are reprinted in appendix III. DOD also provided technical comments, which we incorporated in the final report, as appropriate. In its written comments, the Election Assistance Commission concurred with our recommendations to (1) determine the applicability of the Commission’s 2007 Internet voting study and other Commission studies to DOD’s plans for Internet-based voting, and (2) develop and execute a results-oriented action plan to provide guidelines that appropriately address the risks found in the UOCAVA voting environment. The Commission stated that it has already met with FVAP and the National Institute of Standards and Technology and agreed to develop a time line for creating the UOCAVA guidelines. 
The Commission’s comments are reprinted in appendix IV. We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense (Personnel and Readiness), and the Commissioners of the Election Assistance Commission. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-5559. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

To assess DOD’s electronic initiatives, we reviewed and analyzed relevant laws, directives, and guidance. These included DOD Directive 1000.4, Federal Voting Assistance Program (FVAP), updated April 14, 2004; and DOD’s Interim Department of Defense (DOD) Certification and Accreditation (C&A) Process Guidance, dated July 6, 2006. We also reviewed applicable requirements documents for DOD’s electronic efforts, as well as relevant reports by GAO, DOD, FVAP, the DOD Inspector General, and others, including A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE), dated January 21, 2004. In addition, we reviewed FVAP’s 2006-2007 Voting Assistance Guide and its Web site to ascertain what type of information on electronic voting alternatives is provided to UOCAVA citizens. We interviewed key program officials at the Office of the Under Secretary of Defense for Personnel and Readiness’s Federal Voting Assistance Program (FVAP), the Business Transformation Agency, the Defense Manpower Data Center, and Voting Action Officers from several service headquarters. 
We also contacted officials from (1) election organizations, including the National Association of Secretaries of State and the Joint Election Officials Liaison Committee, and (2) organizations representing UOCAVA voters, including the National Defense Committee and the Overseas Vote Foundation. We reached officials from 14 of the 16 state and local election offices we called to obtain their perspectives on DOD’s initiatives. Specifically, we included all 11 states that had participated in DOD’s 2006 Integrated Voting Alternative Site—some of which had also participated in SERVE and other DOD programs and initiatives. We also included three other states that had 10 or more military bases and had participated in SERVE, though not in IVAS. Table 3 lists the states we contacted and the programs in which these states participated.

To determine the Commission’s efforts to develop Internet voting guidelines and DOD’s efforts to develop the secure, Internet-based, absentee voting demonstration project, we reviewed and analyzed relevant laws, Commission reports, and, to the extent they existed, the Commission’s strategic plan and other documents to ascertain its plans and efforts to develop Internet voting guidelines for UOCAVA voters. We also reviewed and analyzed various DOD requirements documents, GAO reports, internal DOD reports, and other reports related to DOD’s prior Internet-based absentee voting initiatives—Voting Over the Internet and SERVE—to ascertain, among other things, challenges and benefits associated with Internet voting efforts. Additionally, we interviewed key program officials within FVAP, including the Director and Deputy Director of FVAP and the Project Manager for SERVE, who has since retired, along with officials on DOD’s private sector Security Peer Review Group. We also spoke with officials on the Commission’s Technical Guidelines Development Committee and with officials from the National Institute of Standards and Technology. 
To ascertain DOD’s efforts to develop plans to expand the use of electronic voting technologies in the future, we reviewed and analyzed laws, guidance, and reports to determine DOD’s current and future plans for the Internet-based absentee voting demonstration project. Additionally, we examined, to the extent they existed, DOD’s strategic plan and other documentation to determine its current and future plans for the Internet-based absentee voting demonstration project. We also interviewed responsible officials within DOD about these plans—including the Principal Deputy Under Secretary of Defense for Personnel and Readiness and the Director and Deputy Director of FVAP. We conducted our work from August 2006 through April 2007 in accordance with generally accepted government auditing standards.

During the course of our review, we compared and analyzed the voting assistance guidance provided on DOD’s Federal Voting Assistance Program (FVAP) Web site that covered electronic alternatives to mail. The online links we reviewed included FVAP’s (1) 2006-2007 Voting Assistance Guide (VAG)—a PDF version; (2) 2006-2007 VAG—an HTML version; (3) the archived 2006-2007 VAG—a PDF version dated October 25, 2005; (4) changes to the archived 2006-2007 VAG—called Errata Sheets; (5) News Releases; and (6) the 2006 Integrated Voting Alternative Site (IVAS). While not widespread, we found differences in some of the guidance provided on these links for 14 of the 55 states and territories. Table 4 shows the differences we identified.

In addition to the individual named above, David E. Moser, Assistant Director; Marion A. Gatling; Pawnee A. Davis; Amber M. Lopez; Joanne Landesman; Paula A. Moore; John K. Needham; John J. Smale; and Julia C. Matta made key contributions to this report.

Elections: All Levels of Government Are Needed to Address Electronic Voting System Challenges. GAO-07-576T. Washington, D.C.: March 7, 2007. 
Elections: DOD Expands Voting Assistance to Military Absentee Voters, but Challenges Remain. GAO-06-1134T. Washington, D.C.: September 28, 2006.

Elections: The Nation’s Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006.

Election Reform: Nine States’ Experiences Implementing Federal Requirements for Computerized Statewide Voter Registration Lists. GAO-06-247. Washington, D.C.: February 7, 2006.

Elections: Views of Selected Local Election Officials on Managing Voter Registration and Ensuring Eligible Citizens Can Vote. GAO-05-997. Washington, D.C.: September 27, 2005.

Elections: Federal Efforts to Improve Security and Reliability of Electronic Voting Systems Are Under Way, but Key Activities Need to Be Completed. GAO-05-956. Washington, D.C.: September 21, 2005.

Elections: Additional Data Could Help State and Local Elections Officials Maintain Accurate Voter Registration Lists. GAO-05-478. Washington, D.C.: June 10, 2005.

Department of Justice’s Activities to Address Past Election-Related Voting Irregularities. GAO-04-1041R. Washington, D.C.: September 14, 2004.

Elections: Electronic Voting Offers Opportunities and Presents Challenges. GAO-04-975T. Washington, D.C.: July 20, 2004.

Elections: Voting Assistance to Military and Overseas Citizens Should Be Improved. GAO-01-1026. Washington, D.C.: September 28, 2001.

Elections: The Scope of Congressional Authority in Election Administration. GAO-01-470. Washington, D.C.: March 13, 2001.
The Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA) protects the rights of military personnel, their dependents, and overseas citizens to vote by absentee ballot. The Department of Defense (DOD) and others have reported that absentee voting, which relies primarily on mail, can be slow and may, in certain circumstances, serve to disenfranchise these voters. In 2004, Congress required DOD to develop an Internet-based absentee voting demonstration project and required the Election Assistance Commission--which reviews election procedures--to develop guidelines for DOD's project. In 2006, Congress required DOD to report, by May 15, 2007, on plans for expanding its use of electronic voting technologies and required GAO to assess efforts by (1) DOD to facilitate electronic absentee voting and (2) the Commission to develop Internet voting guidelines and DOD to develop an Internet-based demonstration project. GAO also assessed DOD's efforts to develop plans to expand its use of electronic voting technologies. GAO interviewed officials and reviewed and analyzed documents related to these efforts. Since 2000, DOD has developed several initiatives to facilitate absentee voting by electronic means such as fax or e-mail; however, some of these initiatives exhibited weaknesses or had low participation rates that might hinder their effectiveness. For example, the 2003 Electronic Transmission Service's fax to e-mail conversion feature allows UOCAVA voters who do not have access to a fax machine to request ballots by e-mail and then converts the e-mails to faxes to send to local election officials. DOD officials told us, however, they have not performed, among other things, certification tests and thus are not in compliance with information security requirements. The 2004 Interim Voting Assistance System (IVAS)--which, DOD reported, enabled UOCAVA voters to request and receive ballots securely--cost $576,000, and 17 citizens received ballots through it. 
The 2006 Integrated Voting Alternative Site (also called IVAS)--which enabled voters to request ballots using one tool, by mail, fax, or unsecured e-mail--raised concerns from Congress and others that using unsecured e-mail could expose voters to identity theft if they transmit personal data. While this IVAS displayed a warning that voters had to read to proceed, it did not advise them to delete personal voting information from the computers they used. DOD spent $1.1 million, and at least eight voted ballots were linked to this 2006 IVAS. The 2004 and 2006 IVAS were each implemented just 2 months before an election. DOD also has a Web site with links to guidance on electronic transmission options, but some of this guidance was inconsistent and could be misleading. DOD officials acknowledged the discrepancies and addressed them during GAO's review. The Election Assistance Commission has not developed the Internet absentee voting guidelines for DOD's use, and thus DOD has not proceeded with its Internet-based absentee voting demonstration project. Commission officials told GAO that they had not developed the guidelines because they had been devoting constrained resources to other priorities, including challenges associated with electronic voting machines. Furthermore, they have not established--in conjunction with major stakeholders like DOD--tasks, milestones, and time frames for completing the guidelines. The absence of such guidelines has hindered DOD's development of its Internet-based demonstration project. To assist the Commission, however, DOD has shared information on the challenges it faced in implementing prior Internet projects--including security threats. GAO observed that DOD was developing, but had not yet completed, plans for expanding the future use of electronic voting technologies. 
Because electronic voting in federal elections involves numerous federal, state, and local-level stakeholders; emerging technology; and time to establish the initiatives, developing results-oriented plans that identify goals, time frames, and tasks--including addressing security issues--is key. Without such plans, DOD is not in a position to address congressional expectations to establish secure and private electronic and Internet-based voting initiatives.
Modernization of agency financial management systems has been an ongoing challenge due, in part, to federal agency attempts to develop and implement their own stovepiped systems that all too often have resulted in failure, been delayed, or cost too much. Recognizing the need for a more holistic approach to address the seriousness of these problems, OMB launched the FMLOB initiative in March 2004, in connection with the 2001 President’s Management Agenda (PMA). In part, the FMLOB initiative is intended to reduce the cost and upgrade the quality and performance of federal financial management systems by leveraging shared service solutions and implementing other governmentwide reforms that foster efficiencies in federal financial operations. According to OMB, the goals of the FMLOB initiative are to (1) provide timely and accurate data for decision making; (2) facilitate stronger internal controls that ensure integrity in accounting and other stewardship activities; (3) reduce costs by providing a competitive alternative for agencies to acquire, develop, implement, and operate financial management systems through shared service solutions; (4) standardize systems, business processes, and data elements; and (5) provide for seamless data exchange between and among federal agencies by implementing a common language and structure for financial information and system interfaces. In connection with this initiative, OMB developed an approach for agencies to migrate financial management systems to a limited number of application service providers, such as OMB-designated shared service providers or private sector entities, which is intended to avoid costly and redundant agency investments in “in-house” financial management systems. These providers are third-party entities that manage and distribute software-based services and solutions to customers across a wide area network from a central data center. 
This concept has commonly been used in the private sector and in foreign governments, where application service providers provide services such as payroll, sales force automation, and human resource applications to many clients. OMB is the executive sponsor for the FMLOB initiative and, in conjunction with FSIO, provides oversight and guidance for the initiative. In addition to serving as the program manager for the FMLOB initiative, FSIO is responsible for core financial systems requirements development, testing and product certification, supporting the federal financial management community on priority projects, and other activities. Although the FMLOB initiative was launched in 2004, modernizing federal financial management systems so they can produce reliable, useful, and timely financial data needed to efficiently and effectively manage the day-to-day operations of the federal government has been a high priority for Congress for many years. In recognition of this need, and in an effort to improve overall federal financial management, Congress passed a series of financial management reform legislation dating back to the early 1980s. Notable legislation in this series includes the (1) Federal Managers’ Financial Integrity Act of 1982 (FMFIA), (2) CFO Act of 1990, (3) Government Performance and Results Act of 1993, (4) Government Management Reform Act of 1994, (5) FFMIA, (6) Clinger-Cohen Act of 1996, and (7) Accountability of Tax Dollars Act of 2002. FFMIA, in particular, requires the departments and agencies covered by the CFO Act to implement and maintain financial management systems that comply substantially with (1) federal financial management systems requirements, (2) applicable federal accounting standards, and (3) the U.S. Government Standard General Ledger at the transaction level. 
In addition to the specific requirements related to financial management systems contained in FFMIA, the Clinger-Cohen Act of 1996 requires the head of each executive agency to establish policies and procedures to ensure that, among other things, the agency’s financial systems are designed, developed, maintained, and used effectively to provide financial or program performance data. OMB plays a central role in governmentwide efforts to meet the requirements included in these reforms, including establishing federal financial management policy and guidance and overseeing the implementation and management of federal financial management systems and other IT investments. Specifically, the CFO Act of 1990 established OMB’s Office of Federal Financial Management (OFFM) to carry out various financial management functions, including (1) providing overall direction and leadership to the executive branch on financial management matters by establishing financial management policies and requirements, and by monitoring the establishment and operation of federal government financial management systems; (2) reviewing agency budget requests for financial management systems and operations; and (3) monitoring the financial execution of the budget in relation to actual expenditures, including timely performance reports. The Clinger-Cohen Act of 1996 expanded OMB responsibilities further to include establishing processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. 
In addition, in implementing the E-Government Act of 2002, OMB’s Office of Electronic Government and Information Technology is responsible for, among other matters, providing overall leadership and direction to the executive branch on electronic government; overseeing the development of enterprise architectures within and across agencies; and overseeing implementation of IT throughout the federal government, including monitoring and consulting on agency technology efforts, as well as identifying opportunities for joint agency and governmentwide IT projects. In connection with these responsibilities, OMB reviews and evaluates IT spending and other information submitted by the agencies during the budget formulation process. Specifically, in accordance with OMB Circular No. A-11, Preparation, Submission and Execution of the Budget, agencies are required to provide information related to their IT investment projects. As part of this process, agencies submit Capital Asset Plans and Business Cases (exhibit 300s) and Agency IT Investment Portfolios (exhibit 53s) that provide information useful for evaluating agency financial management system projects. Agency exhibit 300s are intended to describe the business case for each investment and serve as the primary means of justifying IT investment proposals as well as monitoring IT investments once they are funded. Further, as a reporting tool, exhibit 300s are intended to help demonstrate to agencies’ management, as well as to OMB, that major projects have strong business cases for the investment and meet other administration priorities in defining the proposed cost, schedule, and performance goals. Similarly, information included on agency exhibit 53s is designed, in part, to help OMB better understand the amounts agencies are spending on IT investments as well as provide information in support of cost analyses prescribed by the Clinger-Cohen Act of 1996. 
For example, agencies are required to classify investment projects into one of six categories as well as specify how much of these amounts are for development and modernization of IT versus operating and maintaining the status quo for IT. In addition, agencies are required to report amounts being spent on each investment over a 3-year period including the current and prior fiscal years as well as the amount included in the agencies’ budget request for the next fiscal year. As part of the Budget of the United States Government, OMB publishes a Report on IT Spending for the Federal Government representing a governmentwide compilation of exhibit 53 data submitted by agencies across the federal government. As part of its efforts to oversee federal IT investments during the last few years, OMB has taken steps to identify IT projects that warrant additional attention by including them on its Management Watch List, its High Risk List, or both. OMB places major IT projects it considers to be poorly planned on the Management Watch List based, in part, on its detailed review of agency exhibit 300s, and agencies are to submit remediation plans addressing the weaknesses identified. OMB updates the Management Watch List quarterly, and projects are removed from the list as agencies remediate the weaknesses identified with these projects’ business cases. Figure 1 shows OMB’s process for developing the Management Watch List. In addition to the Management Watch List, OMB requires agencies to identify high-risk projects that require special attention from oversight authorities and the highest levels of agency management, and OMB places them on its High Risk List. These projects are not necessarily at risk of failure, but may be on the list because they meet criteria specified by OMB for inclusion. 
Further, agency Chief Information Officers (CIOs) are to assess, confirm, and document each of these projects’ performance based on whether the project was meeting one or more of four performance evaluation criteria and identify those with performance shortfalls. Figure 2 shows OMB’s process for developing the High Risk List. OMB and FSIO efforts to implement the FMLOB initiative continue to show progress: of the 18 recommendations we made related to four areas considered key building blocks for governmentwide financial management systems—a concept of operations, standard business processes, a migration strategy, and disciplined processes—OMB and FSIO have effectively addressed 5 and have made progress toward addressing the remaining 13. Table 1 summarizes the status of efforts to address our prior recommendations in each of these four areas. Additional information on the progress and remaining actions we believe are needed to address each recommendation can be found in appendix II. For example, OMB and FSIO have developed guidance to assist agencies’ efforts in selecting shared service providers and preparing for migration, and have taken steps to encourage agencies to embrace standard business processes that will help provide consistency as they are adopted across federal agencies. While guidance and communication-related efforts are important, OMB and FSIO efforts have not yet fully integrated any of the four key building blocks into the FMLOB implementation approach. Further, the recommendations not yet completed, in particular, involve critical elements integral to success and will require much more extensive work before the promised benefits of the FMLOB initiative can be fully realized. OMB has not completed development of a concept of operations, the first and foremost building block on which all system planning processes, as well as the remaining building blocks, are built. 
This critical tool is essential for providing an overall road map for FMLOB efforts by describing the interrelationships among financial management systems and how information is to flow from and through them, within and across agencies, and ensuring the validity of each agency’s implementation approach. Even if FMLOB-related activities proceed as planned, efforts to address our recommendations related to this and other key concepts involve a variety of challenges which, in some cases, could take years to fully resolve. For example, according to FSIO officials, it may take as many as 15 years or more before software that incorporates the standard business processes currently under development is in use governmentwide. In addition, development of a migration timeline reflecting agencies’ commitment to migrating to shared service providers has not yet been completed. OMB officials stated that a draft migration timeline as well as a draft concept of operations have been developed and are under internal review. Until OMB finalizes these critical tools, the extent to which its efforts to date address our recommendations remains unclear. As previously reported, we believe OMB has correctly recognized that enhancing federal financial management systems needs to be addressed as a governmentwide solution, rather than individual agency stovepiped efforts designed to meet a given entity’s needs. However, given the implications of this initiative and the extended time frames involved, we emphasize the need to expedite efforts to address our remaining recommendations. Such efforts are essential to help facilitate FMLOB implementation and achieve a more effective and timely realization of benefits. 
Achieving the goals of the FMLOB initiative and reducing the risks associated with continuing individual agency stovepiped efforts will depend, in part, on continued strong executive leadership and commitment and the effectiveness of efforts to address our recommendations and other challenges facing this initiative. Given the far-reaching impact of the FMLOB initiative on governmentwide financial management systems, an effective governmentwide concept of operations that identifies the nature of and interrelationships among federal financial management systems is an essential tool to ensure that both system implementation and other FMLOB-related efforts achieve intended results. Although this initiative began in 2004, and we reported that efforts were under way to develop a concept of operations in our 2006 report, as shown in table 2, none of our four prior recommendations related to this area have been completely addressed. Further, developing a concept of operations was not included as a priority in OMB’s January 2008 memorandum to agency CFOs that summarized FMLOB priorities through December 2009. OMB officials stated that a draft concept of operations is in internal review; however, they did not provide us an estimated date for its completion. OMB officials stated that finalizing a concept of operations has been a challenge due to limited resources available to devote to this effort, as well as the need to ensure that the various elements of a concept of operations are appropriately linked to relevant guidance, policy documents, and requirements such as the core financial system requirements. We agree with OMB’s recognition of this need and believe it helps to illustrate the importance of finalizing this critical tool. 
Given the importance of articulating how the shared service provider concept fits into the overall federal financial management system framework and how systems operated at the agency and governmentwide level should be integrated, we believe efforts should be taken to expedite the completion of a clear concept of operations. OMB and FSIO officials, as well as knowledgeable officials from other selected organizations, and our recent work related to financial management system implementations, confirm the need for an effective governmentwide concept of operations to guide FMLOB efforts. For example, identifying the interrelationships among financial management systems within and across agencies would help identify and avoid additional stovepiped efforts designed to meet individual agencies’ perceived unique needs in cases where common solutions addressing their shared needs would be more effective. A clear understanding of the flow of information from and through these systems is also needed to ensure that the FMLOB initiative goal of providing accurate and timely data for decision making is achieved. The federal government is one of the largest and most complex organizations in the world, and its agencies use a variety of financial management systems and other systems that interrelate with them to meet their needs. As a result, defining the nature and scope of the systems involved in transformation initiatives, such as FMLOB, is an important aspect of ensuring that efforts are properly aligned and focused toward meeting clearly articulated goals. Officials at the Department of Defense’s (DOD) Business Transformation Agency considered this a critical element of the lessons they learned in achieving progress toward developing a framework for DOD efforts to transform a multitude of business systems to better meet its financial management needs. 
We concur with this assessment and, as we testified in February 2008, we believe DOD is making progress toward establishing a framework to guide its business transformation efforts. While we are in broad agreement with the goals of OMB’s FMLOB initiative, it appears that OMB is not looking broadly enough as it frames its efforts. According to OMB and FSIO officials, FMLOB-related efforts are initially focused on addressing agency core financial systems needs and therefore may not currently fully address the existing interrelationships between core financial systems and the financial portion of mixed systems. Recent revisions to OMB’s Circular No. A-127 issued in January 2009 confirm our concerns that the importance of these interrelationships is not adequately incorporated into OMB’s approach. Specifically, OMB’s revised guidance states that federal financial management system requirements for determining substantial compliance with FFMIA include computer security requirements and internal controls as well as FSIO core financial system requirements but explicitly do not include the existing financial management systems requirements related to mixed systems. Due to the magnitude of efforts and challenges associated with modernizing financial management systems across government, knowledgeable officials at other selected organizations we spoke with stated that focusing on addressing agency core financial system needs first may be appropriate. Nonetheless, an essential part of developing an effective, comprehensive concept of operations includes identifying the interrelationships between core financial systems and other systems, such as payroll or inventory systems, which perform financial functions. In addition, agencies are increasingly considering the use of large, complex, and costly enterprise resource planning (ERP) programs to provide an integrated solution for addressing both financial and mission-related business needs. 
DOD, in particular, has been making significant investments in a number of ERPs to take advantage of the enterprisewide features that address various financial management and other business needs. We have reported that, as envisioned, DOD’s Navy ERP program is expected to cost approximately $2.4 billion over its 20-year life cycle and to be fully operational in fiscal year 2013. As we previously reported, a concept of operations should have a clear definition and scope of the financial management activities to be included and identify the interrelationships of core financial and other systems such as ERPs. The ability to properly align governmentwide and agency efforts also depends, in part, upon the availability of effective concepts of operations at the governmentwide level as well as the agency level. We have reported the lack of adequate concepts of operations associated with agency financial management system projects, including selected projects at the Army, the Department of Homeland Security (DHS), and the Department of the Treasury. For example, in connection with the Army’s efforts to achieve total asset visibility, we reported that, without a concept of operations, the Army is hindered in its ability to apply an enterprise view in (1) making decisions as to how certain systems will individually and collectively enhance the Army’s asset accountability and (2) determining what changes are needed in its related business processes. As a result, we also reported that the Army failed to take advantage of business process reengineering opportunities, perpetuating the use of some of its cumbersome and ineffective business processes used in existing legacy systems. 
Finally, participants at a Comptroller General’s forum held in December 2007 on improving federal financial management systems confirmed our concerns regarding the need for a concept of operations, pointing out that OMB’s various lines of business initiatives are serving to preserve existing stovepipes. For example, participants said it is unclear why separate lines of business are needed for budget and financial management. OMB officials stated that FSIO has been working with OMB staff knowledgeable of the federal enterprise architecture (FEA) to better understand and document the relationships between mixed and core financial systems as well as to communicate with the various lines of business initiatives and help ensure they are effectively coordinated. Adopting standardized processes is a fundamental step needed for all financial management system implementations. Recognizing the importance of this step in connection with implementing the FMLOB initiative, we made five recommendations, as shown in table 3, related to identifying, defining, and implementing standard business processes to help facilitate greater efficiency and consistency, lower costs, and improve the quality and performance of financial management operations across government. OMB and FSIO efforts have effectively addressed two of our five recommendations by encouraging agencies to embrace, and requiring shared service providers to adopt, standard business processes in support of the FMLOB initiative. For example, in a July 2008 memorandum, OMB encouraged the federal financial management community to begin preparations for adopting standard business processes by taking several actions, including using such processes as a framework for system implementation projects. Much work remains, however, before the standard business processes needed to realize the goal of optimizing financial management practices across government become operational. 
According to FSIO officials, the process of developing the first set of standard business processes and incorporating them into software products certified as meeting FSIO core financial system requirements may take up to 3 years to complete under existing plans. We also recognize that incorporating standard business processes into operational systems will be a much longer-term effort since OMB is not requiring agencies to consider migrating to a shared service provider until upgrading to the next major release of their core financial systems, and adoption of these standards is not required until migration occurs. Accordingly, FSIO officials stated it may take up to 15 years to incorporate the standards currently under development into software, subsequently test and certify the software, and implement the certified software governmentwide. According to OMB officials, this approach reflects OMB’s recognition of the long-term nature of agency modernization efforts and the need to provide agencies time to adequately assess FMLOB migration risks. Due to the wide array of current business processes in use across agencies to address common and agency-specific needs, OMB and FSIO officials acknowledge that developing standard business processes that can be used across all federal agencies is a significant challenge. Thus far, their efforts to increase standardization have resulted in the development and issuance of three standard business processes, and OMB expects two more to be finalized by September 2009. In a January 2008 memorandum to agency CFOs, OMB acknowledged that efforts during the transparency and standardization stage of the FMLOB initiative have taken longer than expected. However, OMB added that the additional time has allowed for the preparation of more comprehensive material and greater buy-in and support for the initiative. 
Nonetheless, expediting efforts to address our prior recommendations related to standard business processes is essential since the ability to operationalize these standards, and begin realizing their benefits, depends on their completion. The extended time frame for implementing the FMLOB initiative involves other challenges, such as responding to changes in stakeholder needs or new financial reporting requirements. For example, FSIO officials stated that financial management systems currently used to compile and report financial information on a governmentwide level will face unique transition-related challenges as agencies begin to use systems that incorporate the recently developed common governmentwide accounting classification structure and FMLOB-compliant standard business processes. Specifically, modernization efforts under way at Treasury will need to ensure that certain centralized systems will receive, process, report, and transmit financial data to and from these agencies’ systems. In addition, these centralized Treasury systems will need to continue to interface with and convert information received from agency legacy systems to ensure the overall consistency of consolidated information used for government financial reporting and other purposes. To ensure that these issues are properly identified and managed during the transition period, FSIO officials stated that they are working with Treasury data architects to facilitate the data standardization effort and develop a joint plan that includes Treasury system update milestones. However, these challenges and the risks associated with agency legacy systems that produce financial management information using inconsistent business processes will continue until the standardization envisioned by the FMLOB initiative is actually implemented across the federal government. 
Recognizing the historical tendency for agencies to view their needs as unique and resist standardization, we made five recommendations, as shown in table 4, related to developing a strategy for ensuring that agencies are migrated to a limited number of shared service providers. OMB has effectively addressed two of these recommendations, including developing guidance to assist agencies in their migration efforts. In addition, OMB has taken steps toward addressing the remaining three recommendations in this area related to developing a migration strategy, articulating a clear goal and criteria for ensuring that agencies are migrated, and developing a timeline, or migration path, for when agencies should migrate to a shared service provider. However, efforts to develop such a timeline are taking longer than expected, and this important tool has not yet been finalized. Until a reliable, detailed timetable for migrations across the federal government is developed, the ability to assess when governmentwide migrations will be completed remains limited. As previously noted, we plan to address key issues related to OMB’s migration strategy in the second phase of our work and therefore are deferring an assessment of OMB’s efforts in this area. Specifically, we plan to review the implementation of OMB’s strategy at shared service providers and agencies involved in migration activities during the next phase of our work. OMB’s Competition Framework for FMLOB Migrations (Competition Framework) and Migration Planning Guidance provided important guidance to agencies to support and facilitate shared service provider migration activities. This guidance includes principles agencies must use when acquiring new financial management systems and best practices for managing organizational changes and developing effective change management strategies to ensure that migrations achieve intended results. 
Agencies are required to comply with OMB’s stated migration strategy, and OMB relies, in part, on information agencies provide with their budget submissions to ensure they are planning their migration activities accordingly. In addition, OMB officials stated that they hold meetings with agencies to discuss this and other information regarding FMLOB-related activities such as the life cycle of existing agency financial management systems, IT investment plans, and ongoing migration activities. While we plan to perform an in-depth analysis of OMB’s strategy as part of our follow-on work, we found that additional efforts are needed to develop a timeline for agency migrations and to continue refining and developing tools that improve the effectiveness of agency efforts. A migration timeline reflecting agencies’ IT investment plans that are aligned with existing financial management system life cycles and their commitment toward migrating their financial management systems to shared service providers would help ensure that agencies do not continue developing and implementing their own stovepiped systems. Such a timeline would provide greater assurance that the migrations will actually occur as planned and help guide and assess governmentwide progress. OMB officials told us they were working with agencies to develop an overall migration timeline and expected to have it in place by the end of 2008. However, this important tool has not yet been finalized, and OMB could not provide an estimated completion date. As a result, the reliability of targets reported by OMB for migrating agencies, including its February 2008 estimate that many migrations are expected through 2015, is unclear. 
In addition to a migration timeline, FSIO and OMB officials acknowledged that agencies need additional migration guidance and tools in more specific areas that will further improve the efficiency and effectiveness of agency migration activities—such as tools for navigating the acquisition process for shared financial services, providing templates for developing agency service-level agreements, and providing agencies with change management support and training. To help reduce the risks associated with financial management system implementations, we highlighted the importance of incorporating disciplined processes into implementation efforts and made four recommendations, as shown in table 5, to ensure that they are more effectively used to properly manage and oversee specific projects. OMB has issued guidance, such as the Competition Framework and the Migration Planning Guidance, which effectively addresses our recommendation to provide a standard set of practices to guide migrations from legacy systems to new systems and shared service providers. Additional efforts are needed to fully address the remaining three recommendations in this area. OMB officials expressed the belief that existing guidance provides sufficient descriptions and requirements to agencies involved in federal IT capital investment projects and system implementations regarding the use of disciplined processes. Further, they stated that additional guidance is not needed since agencies will be migrating to an established shared service provider with a proven track record and would therefore incorporate the disciplined processes used by the provider, which would reduce or eliminate the traditional project management tasks associated with system implementations. 
Although the use of such providers may help reduce risks related to core financial system migrations, this position does not address the need for more effective guidance to clearly communicate the extent to which agencies are required to ensure that disciplined processes are incorporated into all financial management system implementations. Our review of OMB guidance indicates that it neither adequately defines specific disciplined processes nor adequately specifies agency requirements concerning their use in connection with financial management system implementations. For example, our analysis of OMB guidance related to requirements management, risk management, data conversion, and testing activities that agencies should follow during system implementations shows that the guidance describes the purpose and provides high-level descriptions of these activities, but does not sufficiently describe the methods agencies could use to incorporate certain critical disciplined processes into their implementation efforts. In particular, sound requirements management processes should, in part, ensure that requirements are stated in clear terms that allow for quantitative evaluation and traceability among various requirements documents. With regard to traceability, OMB guidance states that “a complete set of requirements that maintain traceability throughout the Design, Development and Testing phases will contribute to the system’s success.” However, this and other OMB guidance does not explain how agencies are to ensure that traceability is attained (e.g., through the use of a requirements traceability matrix), nor does it include specific guidance requiring test plans to include links to the specific requirements they address. 
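The requirements traceability matrix concept mentioned above can be illustrated with a minimal sketch. The requirement and test-case identifiers below are hypothetical and not drawn from any agency system; the sketch simply shows how mapping each requirement to the test cases that cover it makes traceability gaps visible.

```python
# Minimal sketch of a requirements traceability matrix (RTM).
# All identifiers here are hypothetical examples.

requirements = ["REQ-001", "REQ-002", "REQ-003"]

# Each test case lists the requirement(s) it is designed to verify.
test_cases = {
    "TC-101": ["REQ-001"],
    "TC-102": ["REQ-001", "REQ-003"],
}

def build_rtm(requirements, test_cases):
    """Map each requirement to the test cases that cover it."""
    rtm = {req: [] for req in requirements}
    for tc, reqs in test_cases.items():
        for req in reqs:
            if req in rtm:
                rtm[req].append(tc)
    return rtm

def untested(rtm):
    """Requirements with no linked test case, i.e., traceability gaps."""
    return [req for req, tcs in rtm.items() if not tcs]

rtm = build_rtm(requirements, test_cases)
print(untested(rtm))  # REQ-002 has no covering test case
```

In practice an RTM would also link requirements forward to design elements and backward to source documents, but even this reduced form supports the quantitative evaluation the guidance calls for: every requirement either has covering tests or is flagged as a gap.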
For data conversions, OMB guidance does not address the need to consider specific issues that apply uniquely to converting data as part of the replacement of a financial system, such as identifying specific open transactions and balances to be established through automated or manual processes, as well as using different conversion options for different categories of data. Data conversion issues can also result in problems beyond financial reporting such as those we previously reported in June 2005 in connection with the Army’s implementation of its Logistics Modernization Program (LMP) involving excess items being ordered and shipped to one of its depots. Specifically, we noted that three truckloads of locking washers (for bolts) were mistakenly ordered and received, and subsequently returned, because of data conversion problems. Further, the guidance does not specifically address or require agencies to incorporate characteristics typically found in successful disciplined testing efforts, such as processes that ensure test results are thoroughly inspected and test cases that include exposing the system to invalid and unexpected conditions. Without specific guidance on the use of these and other disciplined processes during financial management system implementations, agency projects may not achieve their intended results within established resources (costs) and on schedule. In addition to guidance, officials at OMB, FSIO, and other organizations cited challenges associated with the lack of appropriate resources to ensure disciplined processes are implemented in connection with financial management system projects. For example, officials at FSIO and DOD’s Business Transformation Agency told us that agencies do not always maintain or involve internal staff with appropriate system implementation and business process expertise needed to ensure successful implementations. 
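One common control during conversions of the kind described above is a reconciliation of balances, by data category, between the legacy extract and the converted records, so that discrepancies such as the LMP ordering error surface before cutover. The sketch below uses hypothetical categories and amounts to illustrate the idea; it is not a description of any specific agency's conversion procedure.

```python
# Sketch of a conversion reconciliation: compare legacy totals to
# converted totals by data category. Categories and amounts are
# hypothetical illustrations.

from collections import defaultdict

def totals_by_category(records):
    """Sum amounts per category for a list of (category, amount) pairs."""
    totals = defaultdict(float)
    for category, amount in records:
        totals[category] += amount
    return dict(totals)

def reconcile(legacy, converted, tolerance=0.005):
    """Return categories whose legacy and converted totals disagree."""
    legacy_t = totals_by_category(legacy)
    converted_t = totals_by_category(converted)
    diffs = {}
    for category in set(legacy_t) | set(converted_t):
        delta = legacy_t.get(category, 0.0) - converted_t.get(category, 0.0)
        if abs(delta) > tolerance:
            diffs[category] = delta
    return diffs

legacy = [("open_obligations", 1250.00), ("undelivered_orders", 300.00)]
converted = [("open_obligations", 1250.00), ("undelivered_orders", 280.00)]

print(reconcile(legacy, converted))  # undelivered_orders is short by 20.00
```

Running different categories of data through different conversion options, as the report suggests, would simply mean applying this kind of check per category after each conversion pass rather than once over the whole data set.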
Further, according to OMB officials, OMB’s ability to perform detailed implementation oversight reviews on all financial management system projects continues to be hampered due to the limited staff available to perform them. Although we recognize this challenge, we continue to believe that proper oversight should entail verification that disciplined processes are, in fact, incorporated into these projects in order to maximize their likelihood of success. As we previously reported, requiring agencies to have their financial management system projects undergo independent verification and validation reviews could provide an alternative means for ensuring agencies are incorporating disciplined processes into these projects. According to OMB officials, they do not need to require agencies to use independent verification and validation as a tool because most large agencies are already using independent verification and validation contractors to monitor large system implementations. In addition, OMB officials said they do not believe it would be appropriate to require all system implementations to use independent verification and validation contractors since they may not be cost-justified on smaller, less complex projects. OMB officials stated that they rely, in part, on activities OMB performs in connection with assessing projects for inclusion on its Management Watch List and High Risk List to identify projects having implementation risks needing further attention. As described in more detail in the next section of this report, while Management Watch List and High Risk List related activities are designed to identify planning and performance deficiencies and provide useful information to assist OMB in monitoring IT modernization projects, they do not provide for an adequate assessment of the extent to which agencies are incorporating disciplined processes to better manage financial management system modernization projects. 
Further, we continue to believe that verifying that projects adequately incorporate disciplined processes, whether performed by an independent verification and validation contractor or otherwise, is an essential aspect of effectively overseeing financial management system implementation projects to ensure the risks associated with these projects are managed to acceptable levels. FMLOB implementation efforts are affected by other broad and crosscutting issues related to the overall federal financial management environment, such as ensuring the availability of sufficient resources, strengthening federal financial management human capital strategies, and addressing the myriad weaknesses in existing systems across federal agencies. Given the potential far-reaching impact of the FMLOB initiative on governmentwide financial management systems, continued strong commitment and leadership are essential to ensure that progress continues and the FMLOB goals are achieved. As we recently reported, the federal government is taking unprecedented actions to restore stability to the financial markets that will likely have a significant effect on the federal government’s financial condition. As our nation works through these and other fiscal challenges, difficult choices and trade-offs involving the use of significant resources will be unavoidable. The knowledgeable officials at OMB, FSIO, and other organizations we spoke with generally agree that securing the resources needed to achieve FMLOB initiative goals will be an ongoing challenge. Similarly, the officials we spoke with generally agreed that agencies face challenges associated with skills, knowledge, and experience imbalances in their workforce which, without corrective action, are expected to worsen in light of anticipated retirements of federal civilian workers in coming years. 
In this respect, our work at Treasury, DOD, DHS, and other agencies has confirmed that problems associated with strategic workforce planning, human resources, and change management have hampered financial management operations and system implementations and help to illustrate that the federal financial management workforce supporting the business needs of today is not well positioned to meet the needs of tomorrow. Participants at a Comptroller General’s forum suggested that federal financial management human capital strategies could be better focused on attracting and retaining a new technology-savvy generation of financial professionals. However, FSIO officials noted that they believe the FMLOB-related efforts to standardize business processes, operate financial management systems through shared service solutions, and provide training materials and change management support will help mitigate the growing shortage of federal financial management human capital. As we previously reported, effective human capital management is critical to the success of systems implementations and the extent to which these and other efforts will lead to having staff with the appropriate skills is key to achieving financial management improvements. In addition, in connection with our efforts to report annually on the implementation status of FFMIA, we continue to report that assessments for the 24 CFO Act agencies illustrate that agencies still do not have effective financial management systems, including processes, procedures, and controls in place that can routinely produce reliable, useful, and timely financial information that federal managers can use for day-to-day decision-making. 
Further, problems at some agencies, such as DOD and DHS, are so severe and deep-rooted that we have designated their transformation efforts as high risk due to financial management and business practices that adversely affect their ability to control costs, ensure basic accountability, measure performance, and meet other financial management needs. Against the backdrop of our nation’s long-term fiscal imbalance, addressing these issues represents key challenges to fully realizing the world-class financial management anticipated by Congress through the enactment of federal financial management reform legislation as well as FMLOB initiative goals. Given the broad spectrum of challenges associated with modernizing federal financial management systems, strong leadership and commitment of OMB, FSIO, and other key FMLOB stakeholders are especially important to ensure that needed improvements are achieved. Knowledgeable officials from the other selected organizations we interviewed generally agreed that the success of the FMLOB initiative will depend, in part, on OMB’s ability to lead the multifaceted efforts of many stakeholders toward achieving effective, common financial management system solutions over a long period of time. We concur with this position and believe additional attention and efforts toward addressing our prior recommendations, as well as continuing careful consideration of the significant challenges, will serve to facilitate the implementation of this important initiative. Since 2005, we have made various recommendations to OMB aimed at improving its oversight of agency financial management system modernization and other IT projects. OMB has yet to take sufficient actions to fully address these recommendations, despite the critical role of OMB oversight, established in various statutes, in helping to ensure the success of agency modernization efforts.
In addition, OMB has yet to resolve challenges we previously reported regarding the need to capture the costs of all financial management system investments in order to better evaluate agency modernization efforts. Achieving FMLOB goals requires effective OMB oversight of agency modernization projects. Until the weaknesses we previously reported are fully addressed, the FMLOB initiative and agency financial management system modernization efforts remain at increased risk of not meeting their intended goals. Although OMB has taken steps to address some of the oversight-related recommendations we have made since 2005, it has yet to fully address them. For example, OMB has updated the criteria used to identify high-risk projects and issued various guidance, such as the Migration Planning Guidance issued in September 2006, that provides useful instruction to agencies on managing system modernization projects as well as the risks associated with migrating to shared service providers. However, OMB has not yet fully addressed our prior recommendations aimed at maximizing the use of the Management Watch List and High Risk List as tools that facilitate its oversight and review of IT projects, including financial management system modernization efforts. Further, as indicated in the previous section of this report, OMB has not yet fully addressed our prior recommendations related to disciplined processes, including defining and providing specific guidance to agencies on disciplined processes, developing processes to facilitate oversight and review of agencies’ financial system implementation projects, and ensuring that agencies effectively implement disciplined processes. OMB oversight efforts include assessing financial management system and other IT investments using specific criteria to evaluate business cases and determine whether they represent high-risk projects.
OMB includes agency projects warranting additional oversight and management attention based on these assessments in its quarterly Management Watch List and High Risk List. While OMB has taken steps to more effectively use the Management Watch List and High Risk List as oversight tools, additional actions are needed to fully address our prior recommendations and further improve its oversight of agency IT projects. For example, although OMB performed governmentwide and agency-specific analyses of Management Watch List projects’ deficiencies in 2008, it needs to continue to use this list to prioritize projects needing follow-up and to report to Congress on management areas needing attention. In addition, OMB has yet to publicly disclose the deficiencies, if any, associated with projects included in the High Risk List. Disclosing these deficiencies would allow OMB and others to better analyze the reasons projects are poorly performing, identify management issues and other root causes that transcend individual agencies, and evaluate corrective actions. Further, OMB’s criteria for assessing projects and determining which are to be included on these quarterly lists do not adequately address the need to assess whether agencies have, in fact, implemented the necessary disciplined processes to help ensure their success. As previously discussed, OMB officials stated that their reviews of agency financial management system modernization projects do not generally focus on the extent to which agencies are following disciplined processes and that OMB does not have sufficient resources to conduct such reviews. According to OMB officials, OMB’s reviews of financial management systems and related modernization efforts focus primarily on agencies’ compliance with the requirements of FFMIA and ensuring that effective remediation plans are developed and implemented to address identified FFMIA deficiencies.
Reviewing these projects to monitor whether FFMIA deficiencies are addressed is important; however, such efforts do not provide adequate assurance that agencies are using disciplined processes to manage their projects. Such assurance is critical since our work and that of others has shown that agency modernization failures have often been due, in part, to not adhering to disciplined processes during system implementation efforts. Until the weaknesses we previously reported are fully addressed, the FMLOB initiative and agency financial management system modernization efforts remain at increased risk of not meeting their intended goals. In 2006, we reported that one of the key challenges OMB faces when evaluating financial management system modernization efforts is capturing all financial management system investments and their related costs. Capturing and reporting useful spending information continues to be a challenge due, in part, to the way in which agencies categorize projects according to existing OMB guidance. As a result, the ability to fully consider the risks associated with financial management system modernization projects and more effectively focus oversight activities is adversely affected. In April 2008, OMB reported that agencies planned to spend $925 million on financial management systems modernizations for fiscal year 2009. However, the methodology OMB used to report this overall governmentwide estimate did not provide a complete and accurate measure of spending on these projects. Specifically, agencies are required to indicate certain FEA categories that each project relates to in connection with their exhibit 53 submissions. While OMB’s estimate of agencies’ planned spending includes amounts related to five of these categories, it does not take into account certain types of mixed systems that support financial management activities, such as those related to supply chain management. 
For example, even though DOD’s Navy ERP project is a business system with many integrated financial management functions, OMB’s estimate did not include any of the $112 million planned to be spent on this project in fiscal year 2009 because it was identified as a supply chain management project. For projects involving mixed systems such as the Navy ERP, OMB guidance requires agencies to provide the percentage of planned spending on projects associated with the financial portion of these systems related to their budget request for the next fiscal year. However, such percentages were not incorporated in the methodology for estimating planned agency spending on financial management systems for fiscal year 2009. In addition, OMB guidance does not require agencies to specify the amount that was actually spent on the financial portion of mixed system projects in prior and current years. Further, on the basis of our review of spending data for two selected agencies, the reliability of information reported by agencies is unclear. Specifically, these two agencies interpreted OMB’s guidance differently and, as a result, used inconsistent methodologies for determining the percentages they reported. OMB officials informed us that they are reviewing the guidance related to estimating financial management system percentages to determine whether additional data or clarifications are needed. OMB officials also stated that they were uncertain as to whether focusing significant efforts in this area would provide useful information or be an appropriate use of resources that should be focused on potentially more important priorities. We agree that managing and evaluating mixed system projects in many cases may not involve focusing on the financial portion of mixed systems on a stand-alone basis. 
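The measurement gap described above can be illustrated with a simple calculation. The following sketch is not OMB's actual methodology; the Navy ERP amount ($112 million planned for fiscal year 2009) comes from this report, but the core-system amount and the 35 percent financial portion are hypothetical values used only to show how excluding the financial portion of mixed systems understates a governmentwide estimate.

```python
# Illustrative sketch (not OMB's methodology) of how an estimate that counts
# only projects in financial FEA categories understates spending compared
# with one that also counts the reported financial portion of mixed systems.
# Navy ERP's $112M is from the report; the other figures are hypothetical.

projects = [
    # (name, planned spending in $M, FEA category, financial portion)
    ("Core financial system A", 40.0, "financial_management", 1.00),
    ("Navy ERP", 112.0, "supply_chain_management", 0.35),  # 35% is assumed
]

# Methodology as reported: count only projects in financial FEA categories,
# so the mixed-system Navy ERP contributes nothing.
reported = sum(cost for _, cost, cat, _ in projects
               if cat == "financial_management")

# Alternative: also count the financial portion of mixed systems, using the
# percentages agencies report with their exhibit 53 submissions.
adjusted = sum(cost * pct for _, cost, _, pct in projects)

print(f"reported estimate: ${reported:.1f}M")   # $40.0M
print(f"adjusted estimate: ${adjusted:.1f}M")   # $79.2M
```

Under these assumed figures, nearly half of financial management-related spending falls outside the reported estimate, which is the kind of distortion the percentage-based adjustment is meant to surface.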
However, OMB’s current processes for obtaining and reporting agency spending on financial management system modernization efforts do not provide sufficient information to facilitate an adequate evaluation of their financial risks. An effectively designed risk-based approach for focusing limited financial management oversight resources should take into consideration the relative risks associated with all modernization projects that support financial management functions. Further, focusing efforts on helping to ensure the success of large mixed system projects that involve significant financial management-related portions versus other less costly financial management system modernization projects may be a prudent course of action and may help justify the need for additional resources to address the risks they represent. Spending data highlighting the investments being made on different types of financial management system modernization projects, including core financial systems and mixed systems with significant financial management components, would help efforts to evaluate the relative magnitude of—and risks associated with—agency efforts in these areas. Until OMB efforts to obtain and report spending on financial management system modernization projects and related guidance take into account the need for information to better evaluate the relative risks associated with these investments, the ability to effectively align oversight activities based on these risks will be adversely affected. OMB’s FMLOB initiative represents an important step toward improving the outcome of financial management system modernization efforts so that agencies have systems that generate reliable, useful, and timely information for decision-making purposes. Although OMB continues to make progress in addressing our prior recommendations to help ensure the success of this initiative, much work remains.
Specifically, 13 of the 18 recommendations we made on integrating four key building blocks into FMLOB implementation efforts have yet to be fully addressed. Without an effective concept of operations providing the foundation to guide FMLOB- related activities, efforts to modernize federal financial management systems are at an increased risk of not fully achieving their goals. Further, addressing many of our recommendations will require extensive work to complete remaining development activities and, more importantly, actually place them into operation to achieve the federal financial management framework envisioned. In addition, despite its critical role in overseeing agency financial management systems modernization efforts, OMB has not yet fully addressed our oversight-related recommendations, including assessing whether agencies have incorporated disciplined processes into their modernization efforts, fully using its Management Watch List and High Risk List to more effectively oversee projects, and reporting to Congress. Across the federal government, agencies have financial management system modernization efforts under way and the success of these efforts will depend on OMB’s and agencies’ efforts to ensure that disciplined processes are effectively used to help reduce the risk of system implementation failures. Therefore, we reaffirm the need for OMB to expedite its efforts to fully address the recommendations we have made in prior reports, including those dealing with specific oversight procedures to minimize their associated risk. OMB efforts to obtain and report information on how much agencies spend on modernizing federal financial management systems do not enable it or Congress to adequately understand and evaluate the risks associated with such projects. 
Consistent and diligent OMB commitment toward oversight, including efforts to incorporate appropriate spending data, will be critical to the overall success of efforts to modernize federal financial management systems. To assist oversight efforts specifically related to federal financial management systems, we recommend that the Director of OMB take actions to facilitate complete and accurate reporting of actual and planned spending related to financial management system modernization projects, including the financial portion of mixed systems that significantly support financial management functions, and make necessary changes in existing guidance to meet these needs. We received written comments from the Deputy Controller of OMB on a draft of this report (these comments are reprinted in their entirety in app. III). In its comments, OMB generally agreed with our recommendation to facilitate complete and accurate reporting of actual and planned spending related to financial management system modernization projects and described actions being taken to address this recommendation. OMB also provided technical comments on a draft of this report that we incorporated as appropriate. In its comments, OMB expressed concern with part of our recommendation directed at better capturing cost information specifically related to the financial portion of mixed systems and stated that it is evaluating the need for such information. According to OMB, its preliminary analysis shows that breakouts of this cost data would have limited value for decision making because such a distinction is highly subjective and would not likely change agencies’ investment decisions. OMB did not provide the preliminary analysis for our review. OMB believes it would be more cost-effective to focus its resources on other, higher risk areas, such as finalizing the concept of operations. 
However, as discussed in our report, the resources devoted to the financial portion of mixed systems are significant and, although determining the amount of such resources may be subjective, we believe more effective OMB guidance and oversight could further improve the accuracy, consistency, and usefulness of such information. The implementation of mixed system projects is critical because these systems provide input to the core financial system and in some cases are the sole source of data needed by management to make informed decisions. OMB needs such cost information to effectively evaluate the risks associated with financial management system modernization projects, including mixed systems, thus ensuring that its oversight efforts are properly aligned to focus on those projects needing increased attention. We are sending copies of this report to the Ranking Member, Subcommittee on Federal Financial Management, Government Information, Federal Services, and International Security, Senate Committee on Homeland Security and Governmental Affairs, and the Chairman and Ranking Member, Subcommittee on Government Management, Organization, and Procurement, House Committee on Oversight and Government Reform. We are also sending copies to the Director of OMB and Director of FSIO. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kay Daly, Director, Financial Management and Assurance, who may be reached at (202) 512-9095 or [email protected], or Naba Barkakati, Chief Technologist, Applied Research and Methods, who may be reached at (202) 512-2700 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To determine the Office of Management and Budget’s (OMB) progress toward addressing our prior recommendations related to the financial management line of business (FMLOB) initiative, we reviewed relevant OMB and Financial Systems Integration Office (FSIO) policies, guidance, reports, and memorandums related to actions taken and actions remaining and interviewed key OMB and FSIO officials, including senior officials in OMB’s Office of Federal Financial Management (OFFM) and Office of Electronic Government and Information Technology (E-Gov and IT). We also reviewed laws and regulations related to the FMLOB initiative and relevant prior GAO reports to identify and assess the risks and challenges associated with implementing the FMLOB initiative. (See the Related GAO Products list at the end of this report.) In addition, to obtain views on challenges related to implementing the FMLOB initiative, we interviewed OMB and FSIO officials as well as other officials from organizations involved in large business transformation initiatives and knowledgeable of federal financial management system improvement efforts and reviewed relevant reports from these organizations, including the Financial Standards and Processes Division within the Department of Defense Business Transformation Agency, the Association of Government Accountants, and the National Academy of Public Administration. To determine how effectively OMB monitors FMLOB and financial management system modernization projects, including those reported on its Management Watch List and High Risk List, we reviewed our prior reports specifically related to OMB efforts to improve the identification and oversight of projects on these lists and interviewed senior OMB OFFM and Office of E-Gov and IT officials on the nature and extent of efforts to monitor financial management system and other IT projects.
To assess OMB’s efforts to monitor agency spending on FMLOB and financial management system modernization projects, we reviewed and analyzed reports and data provided by OMB and selected agencies related to agency spending on IT projects. In assessing the reliability of spending amounts reported by agencies, we (1) reviewed relevant OMB policies, guidance, reports, and memorandums, (2) reviewed spending data submitted by agencies to OMB on their Agency IT Investment Portfolio (exhibit 53) as required by OMB Circular No. A-11, Section 53, and (3) interviewed senior OMB OFFM officials to gain an understanding of their efforts to collect, analyze, and report agency spending on financial management system projects. In addition, we identified six agencies that reported the largest amounts of fiscal year 2007 spending for financial management-related modernization projects and interviewed officials from two of these agencies who were knowledgeable of efforts related to preparing and submitting agency exhibit 53s to OMB and whose reported fiscal year 2007 spending for financial management-related modernization projects represented 2 percent of total federal agency spending on such IT projects. We believe that the results of our analysis of data provided by the two agencies selected, combined with our analysis of guidance and data obtained from OMB, provide a sufficient basis for our conclusion that spending data submitted by agencies on the exhibit 53 are not reliable for purposes of accurately measuring agency spending on financial management system modernization projects. We conducted this performance audit from February 2008 through May 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides a summary of the progress made by OMB, in conjunction with FSIO, in addressing prior GAO recommendations related to the FMLOB initiative. In addition, this appendix provides our overall assessment and status of whether this progress fully addressed each recommendation and a summary of the remaining actions we believe are necessary to fully address those that have not yet been completed. According to OMB officials, a draft FMLOB concept of operations (ConOps) has been developed; however, it has not yet been finalized and officials would not provide an estimate for when it will be completed. According to OMB officials, the FMLOB ConOps will initially focus primarily on core financial systems at the individual agency level. While this focus is important, the development of a ConOps describing the activities, needs, and interrelationships of core and noncore governmentwide financial management systems would assist in providing a valuable foundation for future financial management modernization efforts. Until this critical tool is finalized, the extent to which OMB efforts to date address this recommendation remains unclear. Finalize and issue a concept of operations document that includes the following components: describes the operations that must be performed, who must perform them, and where and how the operations will be carried out; clearly defines and describes the scope of financial management activities; describes how the various elements of federal financial systems and mixed systems interrelate; describes how information flows from and through these systems; and explains how financial management systems at the agency and governmentwide levels are designed to operate cohesively.
Refer to recommendation 1 describing progress related to developing a ConOps that would describe the interrelationships among federal financial management systems and how financial management operations, including those performed by shared service providers, will be carried out. In addition, OMB Circular No. A-127, revised in January 2009, contains guidance on the use, selection, and monitoring of shared service providers. However, this revised guidance does not adequately reflect the critical interrelationships between core and noncore financial systems. Specifically, it states that noncore financial system requirements are not part of the requirements to be used for determining substantial compliance with the Federal Financial Management Improvement Act (FFMIA). This represents a significant change from prior guidance that implemented FFMIA Sections 803(a) and 806 provisions requiring that all financial management systems be evaluated to determine compliance with applicable requirements. Excluding noncore financial management systems from the scope of these provisions raises significant questions about how these systems will be evaluated in the future and the level of assurance that noncore systems provide reliable, timely, and useful financial information. Refer to recommendation 1 describing remaining actions related to developing a ConOps which, among other things, describes the interrelationships among federal financial management systems and how financial management operations, including those performed by shared service providers, should be carried out. Refer to recommendation 1 describing progress related to developing a ConOps that would describe the interrelationships among federal financial management systems, including how systems operated at the agency and governmentwide levels should operate cohesively.
According to an OMB January 2008 memorandum, in connection with financial management modernization efforts, federal agencies will only be permitted to acquire, and shared service providers allowed to implement, software products that are certified as meeting FSIO core financial systems requirements. Migration Planning Guidance issued in September 2006 provides additional guidance on the services and systems offered by shared service providers on behalf of agencies. According to OMB officials, an FMLOB Segment Architecture is being developed to align with the Federal Enterprise Architecture (FEA) Reference Model and will provide additional clarification on the integration of financial management systems. Refer to recommendation 1 describing remaining actions related to developing a ConOps which, among other things, would describe the interrelationships among federal financial management systems, including how systems operated at the agency and governmentwide levels can operate cohesively. Ensure that efforts to develop and issue an FMLOB Segment Architecture are appropriately aligned with a comprehensive financial management system ConOps. Refer to recommendation 1 describing progress related to developing a ConOps that would clearly define and describe the scope of financial management activities and describe how the various elements of federal financial systems and mixed systems interrelate. In addition, refer to recommendation 2 describing revised OMB Circular No. A-127 guidance on financial management systems. According to OMB officials, the FMLOB Segment Architecture is being developed to align with the FEA Reference Model and proposed changes to increase FEA alignment with OMB Circular Nos. A-11 and A-127 will be submitted to the Federal CIO Council Architecture and Infrastructure Committee. Ensure that collective efforts to define financial management systems, including the development and issuance of the FMLOB Segment Architecture and future revisions to OMB Circular Nos. A-11 and A-127, effectively resolve inconsistencies in how they are defined in the FEA and FFMIA.
OMB, in conjunction with FSIO, has made progress toward developing standard business processes, including the issuance of the following guidance: the common governmentwide accounting classification structure (July 2007); the charge card data elements specification, which standardizes governmentwide requirements for data elements (December 2007); the payment and funds management standard business processes (July 2008); and the receivables management standard business process (November 2008). In addition, according to OMB’s January 2008 memorandum to agency chief financial officers (CFO), OMB projected that efforts to provide certain additional guidance on common governmentwide business standards, processes, data, and rules would be accomplished by December 2009, including: finalizing the reimbursables and reporting standard business processes; updating the Core Financial Systems requirements to incorporate the business standards; and identifying and beginning the development of additional standards, such as interface data elements, to assist in lowering the risk and cost of implementing financial systems. Finalize and issue the business standards for reimbursables and reporting processes. Update the Core Financial Systems and noncore systems requirements to incorporate the business standards. Identify and develop additional common governmentwide business standards, processes, data, and rules, such as interface data elements, to assist in lowering the risk and cost of implementing financial systems. Refer to recommendation 5 describing progress toward defining and developing standard business processes. OMB, in conjunction with FSIO, issued Financial Management Systems Standard Business Processes for U.S. Government Agencies in July 2008, which describes the Standard Federal Financial Business Processes (SFFBP) intended to provide guidance for implementing efficient core financial business processes that are consistent across government. The SFFBPs include: sequenced activities for core business processes; business rules for governing the process steps; data elements and definitions related to these business processes (e.g., information contained on an obligation such as document source and number, item number, price per item); and relationships among the data elements as they exist in the actual business activities. While the SFFBP currently provides detailed descriptions, process flowcharts, and other guidance for the payment, funds, and receivables management processes, other standard business processes identified so far (i.e., reimbursables and reporting), as well as data objects and elements, have not yet been described. In addition, since the SFFBP focuses on core financial business processes, standard business processes associated with noncore financial business processes have not yet been described. Refer to recommendation 5 describing remaining actions needed to identify and develop standard business processes. Develop, finalize, and issue descriptions of the business standards for reimbursables and reporting processes. Identify, develop, and describe additional common governmentwide business standards, processes, data, and rules, such as interface data elements, including those needed to meet agencies’ needs associated with noncore financial business processes. OMB Circular No. A-127, revised in January 2009, requires agencies to register approved exceptions to the standard configuration to meet their needs. According to OMB officials, OMB’s priority is to focus on governmentwide standard business processes that generally affect all agencies.
Although OMB has not yet focused on developing standard business processes that meet unique agency needs, OMB must further develop its process to identify unique requirements and proceed to develop standard business processes designed to meet them. According to a January 2008 OMB memorandum, once business standards have been completed, incorporated into core financial system requirements, and tested during the FSIO software qualification and certification process, agencies will only be permitted to acquire, and shared service providers allowed to implement, certified products as configured with the standards. Limiting the products that shared service providers can use to those that are configured to meet standard business processes effectively addresses this recommendation. See recommendation 8 describing progress related to incorporating standard business processes into core financial system requirements and requiring shared service providers to only use certified products configured with the standards. In addition, this memorandum requires agencies to adopt these standards when they move to a shared service provider. In a July 2008 memorandum announcing the issuance of certain standard business processes described in recommendation 5, OMB encouraged the federal financial management community, including federal agencies, to begin preparations for adopting standard business processes by (1) analyzing existing business practices and processes, (2) gaining an understanding of the standard federal financial business processes, (3) analyzing the gap between existing and future processes, and (4) using standard business processes as a framework for system implementation projects. In addition, OMB’s approach of working with agency and other federal financial management community stakeholders to develop SFFBPs provides effective opportunities to help encourage agencies to develop and embrace standard business processes and performance measures. See recommendation 2 describing revised OMB Circular No. A-127 on the use of shared service providers.
In a January 2008 memorandum, OMB reiterated guidance contained in the Competition Framework for Financial Management Lines of Business Migrations (Competition Framework) issued in May 2006 requiring, with limited exception, an agency seeking to upgrade to the next major release of its current core financial management system or modernize to a different core financial management system to either migrate to a shared service provider or qualified private sector provider, or be designated as a shared service provider. An agency may rely on its in-house core financial management system operations without being designated as a shared service provider only if the agency demonstrates that its own operations represent a best value and lower risk alternative over the life of the investment. OMB also issued Migration Planning Guidance in September 2006 to help agencies prepare for and manage a migration of their financial management system operations to a shared service provider. According to the Migration Planning Guidance, all agencies are expected to decide whether to migrate their technology hosting administration and application management to a shared service provider or to become a provider themselves within 10 years. To help ensure that agencies are migrating in accordance with its stated approach, according to OMB officials, OMB uses information obtained from agencies, such as the life cycles of agencies’ existing financial management systems and exhibit 300s, and through discussions specifically related to financial management systems which occur at least annually, or more frequently during active migration planning or transition activities. Although OMB has developed a migration strategy, we defer our final assessment of these actions toward addressing this recommendation until we complete a more in-depth analysis as part of our planned follow-on work.
See recommendation 10 describing OMB's progress to clearly articulate the applicability of the shared service provider concept to agencies. Although OMB has taken steps to articulate a clear goal and criteria for ensuring agencies are subject to the shared service provider concept, we defer our final assessment of these actions toward addressing this recommendation until we complete a more in-depth analysis as part of our planned follow-on work. See recommendation 10 describing progress related to establishing a migration path or timetable.

Although these efforts articulate the applicability of the shared services concept to federal agencies and represent important elements of an overall migration strategy, additional efforts are needed for an effective strategy, including the establishment of clear migration timelines and processes to effectively monitor progress toward meeting them. OMB's recent estimates for when agencies will be migrated to shared service providers are unclear, indicating that many have been scheduled through fiscal year 2015 while some have not yet been scheduled. According to OMB officials, although OMB has been working to develop a detailed migration timeline, it has not yet been finalized. Until this tool is finalized, the extent to which OMB efforts to date address this recommendation remains unclear.

Develop clear and measurable goals, including specific timelines for migrating to shared service providers based, in part, on the life cycle of existing financial management systems. The Competition Framework issued by OMB in May 2006 provides additional guidance to help agencies select a shared service provider and requires agencies undertaking steps to acquire new financial management systems to comply with four guiding principles, including considering providers with a demonstrated capability, using a competitive process, implementing an accountability structure, and tracking results.
In September 2006 OMB issued its Migration Planning Guidance designed to help agencies prepare for and manage a migration of their financial management system operations to a shared service provider. In January 2009, OMB revised Circular No. A-127, providing additional guidance on the use of shared service providers.

Migration Planning Guidance issued in September 2006 includes a section on Change Management Best Practices, which provides considerations for managing the organizational changes needed to facilitate the transition from an agency's existing financial systems or operations to a shared service provider. This section includes in-depth descriptions of best practices in a variety of areas that can assist agencies in developing and adopting an effective change management strategy, including the role of leadership, governance, organizational structure, migration team composition, human capital management, and stakeholder and communications management.

OMB issued guidance related to disciplined processes in its Migration Planning Guidance issued in September 2006, which provides agencies with high-level guidance to manage their systems modernization projects and migrations to shared service providers. See recommendations 8 and 10 describing progress related to requiring agencies to migrate to shared service providers and only permitting them to use the certified products as configured to meet required standard business processes. According to OMB officials, using shared service providers with proven track records will help to reduce or eliminate traditional project management tasks commonly associated with system implementations. However, additional efforts are needed to adequately define the critical elements of the disciplined processes needed and the steps to be taken to ensure they are adequately implemented. In our review of OMB guidance on selected disciplined processes, we noted that OMB guidance does not provide in-depth information on each of the selected disciplined processes.
For example, OMB guidance does not adequately address how agencies are to ensure the traceability of requirements as well as the need to consider specific issues that apply uniquely to converting data as part of the replacement of a financial system, incorporate test cases that expose the system to invalid and unexpected outcomes, and ensure thorough inspection of test results. In addition, OMB reviews of agency financial management system implementations generally do not focus on implementation of the disciplined processes.

Thoroughly define the disciplined processes (i.e., requirements management, testing, data conversion and system interfaces, configuration, risk and project management, quality assurance) necessary to properly manage projects. Map each of the disciplined processes to OMB guidance that contains clear and specific instructions requiring their use and how each disciplined process should be performed. Issue guidance specifically related to disciplined processes necessary to properly manage specific projects. Provide oversight and more structured reviews specifically related to financial management projects to ensure that disciplined processes are effectively implemented.

See recommendation 15 describing progress related to specific guidance provided to agencies on disciplined processes, including the issuance of Migration Planning Guidance, OMB Circular No. A-11, Part 7, and a May 25, 2007, memorandum. Of these, the Migration Planning Guidance provides the most specific guidance related to financial management system implementations. However, as described in recommendation 15, additional efforts are needed to provide guidance to address the use of disciplined processes in connection with financial management system implementations. See recommendation 15 describing remaining actions needed to address the use of disciplined processes necessary to properly manage specific financial management system implementation projects.
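One of the gaps noted above, test cases that expose the system to invalid and unexpected input, can be illustrated with a minimal sketch. The validate_amount edit check below is an invented stand-in for a financial-system input edit, not part of any OMB guidance or agency system.

```python
# Hypothetical edit check standing in for a financial-system input edit.
def validate_amount(raw: str) -> float:
    amount = float(raw)  # raises ValueError on non-numeric input
    if amount <= 0:
        raise ValueError("obligation amount must be positive")
    return round(amount, 2)

# A happy-path test alone would miss the edit checks exercised below.
assert validate_amount("125.50") == 125.5

# Negative cases: invalid and unexpected inputs must be rejected.
for bad in ["", "abc", "-10", "0"]:
    try:
        validate_amount(bad)
    except ValueError:
        pass  # expected: the edit check rejected the input
    else:
        raise AssertionError(f"invalid input {bad!r} was accepted")
print("all edit checks passed")
```

Disciplined testing guidance would call for suites of this kind, plus thorough inspection of the results, at each interface and data conversion step.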
See recommendations 8, 13, and 14 describing progress related to issuing the Competition Framework, the Migration Planning Guidance including Change Management Best Practices, and OMB's January 2008 memorandum providing guidance for agencies planning to migrate their financial management systems and services to new systems and shared service providers.

OMB issued a variety of guidance on financial management system requirements and the implementation of IT projects that facilitates the oversight and review of financial system implementation projects. OMB uses information obtained from agencies, such as the life cycles of agencies' existing financial management systems and exhibit 300s, and through discussions specifically related to financial management system implementation projects, which occur at least annually, or more frequently during active migration planning or transition activities. During the budget formulation process, OMB analyzes information related to agency financial management system and other IT projects and identifies those warranting additional attention on its Management Watch List and High Risk List. These efforts represent important aspects of OMB's oversight of financial management system implementation projects. However, OMB has not developed a structured process to facilitate its overall oversight efforts related to these projects. In addition, OMB does not adequately capture spending specifically related to financial management system modernization projects, which limits its ability to fully consider the financial risks associated with these efforts. Also, we recently testified that although OMB has taken steps to improve the identification of poorly planned and poorly performing projects, additional efforts are needed to address prior recommendations to improve the planning, management, and oversight of these projects.
Finally, the extent of problems related to financial management system implementation projects that continue to be reported indicates the need for additional oversight efforts designed to further identify and prevent failures in the future.

Enhance existing oversight efforts to improve financial management system implementations by developing a structured process to identify and evaluate specific and systemic implementation weaknesses and risks specifically related to financial management system modernizations, including those associated with projects on the Management Watch List and High Risk List and others identified through reviews of agency-provided information, as well as their costs, and discussions with agency officials; implementing processes to ensure that agencies more effectively and consistently comply with guidance related to implementing financial management system modernization projects, including the use of disciplined processes to reduce the risk of implementation failures; and clarifying guidance so that agencies consistently report planned and actual spending related to financial management system modernization projects, including the financial portion of mixed systems.

Under the Federal Financial Management Improvement Act of 1996, section 803(a), agencies are required to implement and maintain financial management systems that comply substantially with federal financial management systems requirements, applicable federal accounting standards, and the United States Government Standard General Ledger at the transaction level.

According to OMB, segment architecture defines a simple road map for a core mission area, business service, or enterprise service that is driven by business management and delivers products that improve the delivery of services to citizens and agency staff.
From an investment perspective, segment architecture drives decisions for a business case or group of business cases supporting a core mission area or common or shared service. According to OMB, the FEA consists of a set of interrelated "reference models" designed to facilitate cross-agency analysis and the identification of duplicative investments, gaps, and opportunities for collaboration within and across agencies. Collectively, the reference models comprise a framework for describing important elements of the FEA in a common and consistent way.

OMB, Memorandum, Update on the Financial Management Line of Business (Washington, D.C.: Jan. 2008).

FSIO, Financial Management Systems Standard Business Processes for U.S. Government Agencies (Washington, D.C.: July 18, 2008).

OMB, Memorandum, Use of Performance-Based Management Systems for Major Acquisition (Washington, D.C.: May 25, 2007).

Individuals making major contributions to this report were Chris Martin, Senior-Level Technologist; Michael LaForge, Assistant Director; Sabine Paul, Assistant Director; Latasha Brown; Francine DelVecchio; Jim Kernen; Patrick Tobo; and Leonard Zapata.

Information Technology: Management and Oversight of Projects Totaling Billions of Dollars Need Attention. GAO-09-624T. Washington, D.C.: April 28, 2009.

Financial Management: Persistent Financial Management Systems Issues Remain for Many CFO Act Agencies. GAO-08-1018. Washington, D.C.: September 30, 2008.

Information Technology: Treasury Needs to Better Define and Implement Its Earned Value Management Policy. GAO-08-951. Washington, D.C.: September 22, 2008.

DOD Business Systems Modernization: Important Management Controls Being Implemented on Major Navy Program, but Improvements Needed in Key Areas. GAO-08-896. Washington, D.C.: September 8, 2008.
Information Technology: Agencies Need to Establish Comprehensive Policies to Address Changes to Projects' Cost, Schedule, and Performance Goals. GAO-08-925. Washington, D.C.: July 31, 2008.

Information Technology: OMB and Agencies Need to Improve Planning, Management, and Oversight of Projects Totaling Billions of Dollars. GAO-08-1051T. Washington, D.C.: July 31, 2008.

Fiscal Year 2007 U.S. Government Financial Statements: Sustained Improvement in Financial Management Is Crucial to Improving Accountability and Addressing the Long-Term Fiscal Challenge. GAO-08-926T. Washington, D.C.: June 26, 2008.

Fiscal Year 2007 U.S. Government Financial Statements: Sustained Improvement in Financial Management Is Crucial to Improving Accountability and Addressing the Long-Term Fiscal Challenge. GAO-08-847T. Washington, D.C.: June 5, 2008.

Highlights of a Forum Convened by the Comptroller General of the United States: Improving the Federal Government's Financial Management Systems. GAO-08-447SP. Washington, D.C.: April 16, 2008.

Defense Travel System: Overview of Prior Reported Challenges Faced by DOD in Implementation and Utilization. GAO-08-649T. Washington, D.C.: April 15, 2008.

Defense Business Transformation: Sustaining Progress Requires Continuity of Leadership and an Integrated Approach. GAO-08-462T. Washington, D.C.: February 7, 2008.

Homeland Security: Responses to Posthearing Questions Related to the Department of Homeland Security's Integrated Financial Management Systems Challenges. GAO-07-1157R. Washington, D.C.: August 10, 2007.

Financial Management: Long-standing Financial Systems Weaknesses Present a Formidable Challenge. GAO-07-914. Washington, D.C.: 2007.

Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs. GAO-07-1134SP. Washington, D.C.: July 2007.

DOD Business Transformation: Lack of an Integrated Strategy Puts the Army's Asset Visibility System Investments at Risk. GAO-07-860. Washington, D.C.: July 27, 2007.
Business Modernization: NASA Must Consider Agencywide Needs to Reap the Full Benefits of Its Enterprise Management System Modernization Effort. GAO-07-691. Washington, D.C.: July 20, 2007.

Managerial Cost Accounting Practices: Implementation and Use Vary Widely across 10 Federal Agencies. GAO-07-679. Washington, D.C.: July 20, 2007.

Homeland Security: Transforming Departmental Management Systems Remains a Challenge. GAO-07-1041T. Washington, D.C.: June 28, 2007.

Managerial Cost Accounting Practices at the Department of Interior. GAO-07-298R. Washington, D.C.: May 24, 2007.

DOD Business Systems Modernization: Progress Continues to Be Made in Establishing Management Controls, but Further Steps Are Needed. GAO-07-733. Washington, D.C.: May 14, 2007.

Information Technology: DHS Needs to Fully Define and Implement Policies and Procedures for Effectively Managing Investments. GAO-07-424. Washington, D.C.: April 27, 2007.

Fiscal Year 2006 U.S. Government Financial Statements: Sustained Improvement in Federal Financial Management Is Crucial to Addressing Our Nation's Accountability and Fiscal Stewardship Challenges. GAO-07-607T. Washington, D.C.: March 20, 2007.

Federal Financial Management: Critical Accountability and Fiscal Stewardship Challenges Facing Our Nation. GAO-07-542T. Washington, D.C.: March 1, 2007.

Defense Business Transformation: A Comprehensive Plan, Integrated Efforts, and Sustained Leadership Are Needed to Assure Success. GAO-07-229T. Washington, D.C.: November 16, 2006.

Defense Travel System: Estimated Savings Are Questionable and Improvements Are Needed to Ensure Functionality and Increase Utilization. GAO-07-208T. Washington, D.C.: November 16, 2006.

Financial Management: Improvements Under Way but Serious Financial Systems Problems Persist. GAO-06-970. Washington, D.C.: September 26, 2006.

Defense Travel System: Reported Savings Questionable and Implementation Challenges Remain. GAO-06-980. Washington, D.C.: September 26, 2006.

Managerial Cost Accounting Practices: Department of Agriculture and the Department of Housing and Urban Development. GAO-06-1002R. Washington, D.C.: September 21, 2006.

Department of Defense: Sustained Leadership Is Critical to Effective Financial and Business Management Transformation. GAO-06-1006T. Washington, D.C.: August 3, 2006.

Information Technology: Agencies and OMB Should Strengthen Processes for Identifying and Overseeing High Risk Projects. GAO-06-647. Washington, D.C.: June 15, 2006.

Financial Management Systems: Lack of Disciplined Processes Puts Effective Implementation of Treasury's Governmentwide Financial Report System at Risk. GAO-06-413. Washington, D.C.: April 21, 2006.

Managerial Cost Accounting Practices: Departments of Health and Human Services and Social Security Administration. GAO-06-599R. Washington, D.C.: April 18, 2006.

Financial Management Systems: DHS Has an Opportunity to Incorporate Best Practices in Modernization Efforts. GAO-06-553T. Washington, D.C.: March 29, 2006.

Financial Management Systems: Additional Efforts Needed to Address Key Causes of Modernization Failures. GAO-06-184. Washington, D.C.: March 15, 2006.

Managerial Cost Accounting Practices: Departments of Education, Transportation, and the Treasury. GAO-06-301R. Washington, D.C.: December 19, 2005.

CFO Act of 1990: Driving the Transformation of Federal Financial Management. GAO-06-242T. Washington, D.C.: November 17, 2005.

National Aeronautics and Space Administration: Long-standing Financial Management Challenges Threaten the Agency's Ability to Manage Its Programs. GAO-06-216T. Washington, D.C.: October 27, 2005.

Managerial Cost Accounting Practices: Departments of Labor and Veterans Affairs. GAO-05-1031T. Washington, D.C.: September 21, 2005.

Managerial Cost Accounting Practices: Leadership and Internal Control Are Key to Successful Implementation. GAO-05-1013R. Washington, D.C.: September 2, 2005.
Army Depot Maintenance: Ineffective Oversight of Depot Maintenance Operations and System Implementation Efforts. GAO-05-441. Washington, D.C.: June 30, 2005.

Information Technology: OMB Can Make More Effective Use of Its Investment Reviews. GAO-05-276. Washington, D.C.: April 15, 2005.
In March 2004, the Office of Management and Budget (OMB) launched the financial management line of business (FMLOB) initiative, in part, to reduce the cost and improve the quality and performance of federal financial management systems by leveraging shared service solutions and implementing other reforms. In March 2006, GAO reported that OMB's approach did not fully integrate certain fundamental system implementation-related concepts and recommended OMB take specific actions. This report discusses (1) OMB's progress in addressing GAO's prior FMLOB recommendations and implementation challenges and (2) the effectiveness of OMB's monitoring of financial management system modernization projects and their costs. GAO's methodology included reviewing OMB's FMLOB-related guidance and reports and interviewing OMB and Financial Systems Integration Office (FSIO) staff. OMB has made progress toward implementing the FMLOB initiative. In March 2006, GAO recommended that OMB place a high priority on fully integrating four key concepts into its approach. As shown in the table, OMB has completed actions to fully address 5 of GAO's 18 recommendations. Although OMB has made progress toward completing the remaining 13 recommendations, extensive work remains before the goals of the FMLOB initiative are achieved. For example, OMB has yet to finalize a financial management system concept of operations, the first and foremost critical building block on which the remaining three concepts will be built. In addition, development of a migration timeline reflecting agencies' commitment for migrating to shared service providers has not yet been completed. Further, agencies are not required to consider migrating until the next major release of their core financial system and much work remains before the software used by shared service providers will incorporate the standard business processes currently under development. 
Accordingly, FSIO officials stated it could take 15 years or more before software that incorporates these standard business processes is in use governmentwide. We recognize that the FMLOB initiative represents a long-term effort; however, expediting efforts to address our prior recommendations could help achieve more effective and timely benefits. Until OMB fully integrates the four key concepts into its approach, the extent to which FMLOB goals will be achieved is uncertain. The Chief Financial Officers Act of 1990 and other information technology (IT) reform legislation contain requirements related to OMB's oversight of agency financial management systems modernization and other IT projects. Achieving FMLOB goals requires effective OMB oversight of agency modernization projects, but OMB has yet to fully address GAO's previously reported oversight-related recommendations such as taking actions to define and ensure that agencies effectively implement disciplined processes and develop a more structured review of agency efforts. In addition, OMB does not obtain and report complete and accurate data concerning agencies' spending on financial management system modernization projects. The lack of sufficient information and processes to effectively monitor agency modernization efforts and their costs limits OMB's ability to evaluate and help reduce the risks associated with financial management system implementations as well as achieve FMLOB goals.
Remittances have become an important source of financial flows to developing regions and have proved resilient in the face of economic downturns. These funds can be used for various purposes, including basic consumption, housing, education, and small business formation; they can also promote financial development in cash-based economies. Because of the importance of these flows to many developing countries, in recent years remittance-sending and remittance-receiving countries, along with international organizations, have expressed increasing interest in understanding immigrants' remittance practices. According to the 2000 Census, the 1990s saw the largest increase in the foreign-born population entering the United States of any 10-year period. IMF figures show that in 2004, immigrants in the United States sent over $29.9 billion in remittances, more than any other country. Saudi Arabia was the second largest remittance-sending country; however, as shown in figure 1, the volume of remittances from Saudi Arabia has been falling since 1994, while that from the United States has been steadily increasing. For some countries, remittances constitute the single largest source of foreign currency and can often rival foreign direct investment in amount. World Bank data show that for selected countries remittances exceed the flows of official development assistance and foreign direct investment and are relatively large compared to exports and gross national income—particularly for the Dominican Republic and the Philippines (see table 1). Remittances are also very important for the households that receive them. Table 2 shows the minimum wage per month for several developing countries as well as our computation of the 2003 per capita remittances from the United States per month. As the table shows, remittances received by households on a monthly basis tend to substantially exceed the monthly minimum wage in these countries.
For example, per capita, remittances to households in the Philippines are almost five times the monthly minimum wage a Filipino worker would make in the retail and service sector. The IMF collects and publishes official estimates of remittances sent from its member countries, including the United States, as part of its balance of payments statistics. The IMF currently reports the sum of "workers' remittances" and "compensation of employees" as the best measure of total personal remittances. According to the IMF, "workers' remittances" are transfers by migrants who are employed in countries other than their birth countries and are considered residents there; "compensation of employees" is made up of wages, salaries, and other benefits earned by individuals in economies other than those in which they are residents, for work performed for and paid by residents of those economies. As a result, compensation of employees applies only to individuals away from their place of origin for less than a year. In the United States, no U.S. government agency tracks the flow of remittances through the payment system. Because of its role in compiling balance of payments statistics, BEA provides to the IMF official estimates of U.S. remittance inflows and outflows. BEA publishes remittance estimates in a different manner than reported in the IMF's balance of payments statistics. BEA includes estimates of remittances by the foreign-born population residing in the United States to households abroad in a published item called "private remittances and other transfers." This category is broader than the international definition of remittances, as it also includes payments or receipts of nongovernmental U.S. entities and foreign entities. Also, BEA publishes its estimates of "private remittances and other transfers" in its tables of international transactions accounts, defining the item as the difference between transfers to and transfers from the United States.
However, BEA provides to the IMF an estimate of remittances that flow from the United States to the world based on its underlying country-by-country tabulations. Until this year, BEA provided this estimate only to the IMF. For the first time, BEA published the estimate it provided to the IMF, as well as revised estimates back to 1991, in the July 2005 Survey of Current Business. The majority of remittances from the United States flow to Latin America, which includes Mexico, Central America, South America, and the Caribbean (see fig. 2). A large amount also flows to Asia, including the Philippines. There are many obstacles to accurately estimating remittances. First, many transactions may go through unregulated informal channels from which information cannot be garnered for inclusion in official estimates. While there are no official estimates, some experts believe that a large amount of remittances flows through these channels, with market observers estimating that informal flows can range from 50 percent to 250 percent of recorded remittance flows. Second, countries do not always report remittance estimates or do not report them according to commonly held IMF definitions, which exclude transfers by the foreign born who have been in-country for less than one year. Variations in data compilation procedures occur partially due to different interpretations of definitions and classifications. In most cases, however, data weaknesses and omissions are due to difficulties in obtaining the necessary data. For example, the World Bank and other international organizations have indicated that developing countries with large remittance inflows often have a relatively weak capacity and limited resources, even though remittances are a large item in their balance of payments statistics.
Countries with large remittance outflows often give lower priority to improvements in remittance statistics because they are a relatively small item in their balance of payments statistics, according to the World Bank and other international organizations. BEA uses a model to estimate remittances (which it calls "personal transfers") from the United States. Although BEA's methodology has some strengths, the accuracy of BEA's estimate is uncertain for a number of reasons. BEA estimated that remittances from the United States in 2003 were $28.2 billion. To arrive at this estimate, BEA used a model that estimates remittances based on demographic information on the foreign born, such as their total number, income, and the percentage of income they remitted. In 2005, BEA revised its model for estimating remittances and incorporated more current Census Bureau data on the size and demographic characteristics of the foreign-born population of the United States; however, the model remains limited, particularly by the lack of current data on the proportion of income immigrants are likely to remit and by the assumptions BEA makes about its data. In addition, BEA uses the more current census data in a way that may double-count some immigrants. Prior to 2005, to derive its annual estimate of remittances sent from the United States, BEA developed a model consisting of three factors—the number of the foreign born, their family income, and the proportion of income remitted. The count of the foreign born, their income, and other demographic characteristics were obtained from information aggregated annually from U.S. Bureau of the Census surveys. These data were arrayed by length of residency in the United States and family types linked to marital status (e.g., married foreign head of households, native-born married to foreign-born spouse, and unmarried individuals). The remitter was assumed to be the household head.
BEA extrapolated the foreign-born population derived from the 1990 Decennial Census using indicators, including the Census Bureau’s annual Current Population Survey (CPS). To estimate the proportion of income immigrants were likely to remit, BEA relied on the 1989 Legalized Population Survey (LPS1) and the 1992 Legalized Population Follow-Up Survey (LPS2), which were conducted as a result of the Immigration Reform and Control Act of 1986 (IRCA). BEA then combined the information obtained from LPS1 and LPS2 with demographic and income information obtained from the CPS to arrive at the total amount of remittances sent from the United States. For a more detailed description of BEA’s methodology for estimating remittances, see appendix II. In 2005, BEA made several revisions to its methodology to include more recent census data, and recent studies on the foreign born and their remitting behaviors. First, BEA incorporated data on the foreign-born population and their income from the 2000 Census and the American Community Survey (ACS), which is available annually, unlike decennial census data, and thus requires less extrapolation of population and income trends. According to BEA, these data will enable a better breakdown of the foreign-born population by all relevant characteristics on an annual basis. The ACS data on the number and income of the adult foreign-born population are arrayed by their gender, duration of stay, presence or absence of children, and per capita income of recipient countries and proximity to the United States. BEA then used its own judgment to determine the percentage of the adult foreign-born population that remits and the probability of remitting from information gathered from various academic studies published between 1995 and 2004, as well as LPS1 and LPS2, which BEA used in its earlier model. BEA revised its estimates back to 1991 using this new approach, which resulted in an increase in estimated remittances for all years. 
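The cell-based arithmetic of BEA's revised model, multiplying each demographic cell's population by its income, the share who remit, and the share of income remitted, can be sketched as follows. Every figure and cell definition below is invented for illustration and is not BEA's actual data.

```python
# Hypothetical sketch of a cell-based remittance model: for each
# demographic cell, estimate = population x average income x share who
# remit x share of income remitted. The cells and figures are invented.
cells = [
    # (population, avg annual income, share who remit, share of income sent)
    (4_000_000, 30_000, 0.60, 0.12),  # recent arrivals, developing origin
    (6_000_000, 45_000, 0.40, 0.08),  # longer residence, developing origin
    (2_000_000, 55_000, 0.20, 0.04),  # developed-country origin
]

total = sum(pop * income * p_remit * share
            for pop, income, p_remit, share in cells)
print(f"${total / 1e9:.1f} billion")
```

The sensitivity of the total to the two judgment-based factors, the share who remit and the share of income remitted, is what makes BEA's assumptions about these factors consequential for the final estimate.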
Figure 3 shows the data that are included in BEA’s model and how the remittance estimate is calculated. In most cases, BEA provides only a global estimate of remittances and does not publish statistics on remittances from the United States to individual countries. BEA stated that some data elements are not available for some time periods or geographic areas, so it must use a variety of methods to fill the data gaps in order to produce the underlying tabulations needed for an aggregate estimate for the world. BEA cautions that disaggregating its estimate for the world is error-prone and expresses confidence only in its aggregate estimate. Further, according to BEA, in moving from the global estimate to increasingly smaller geographic areas or countries, the average errors in the underlying tabulations increase. When it estimates remittances for selected regions, it publishes them on a net (inflows minus outflows) basis. BEA’s approach has several strengths: in theory, it captures both formal and informal channels of sending remittances. It is also low-cost because it relies on available data and not on eliciting data from a foreign-born population that may not have an incentive to provide accurate data. However, the accuracy of BEA’s estimate is affected by the quality of the data available to BEA. A critical component of the methodology relies on information about the remitting behavior (e.g., amount, frequency) of the foreign born. Prior to 2005, the primary data available to BEA were the 1989 LPS1 and the 1992 LPS2; however, these surveys may not have been appropriate for use in estimating remittances of all the foreign born because they sampled a population participating in a special legalization program primarily aimed at Latin American immigrants. 
The LPS1 and LPS2 excluded undocumented aliens, temporary residents who did not wish to obtain (or were not eligible for) legal status, and legal immigrants who became legalized through processes other than IRCA. The survey design did not provide a way to more extensively sample immigrant groups more likely to remit than others (e.g., the foreign born with less than 10 years of residence in the United States). In addition, recent census data show that some basic demographic characteristics of the foreign born have changed significantly since the LPS1 and LPS2 surveys were done. BEA’s revisions to its methodology recognize these changes in the foreign-born population. In its revision, BEA reviewed a number of academic studies to update the findings of the LPS1 and LPS2 and published the sources in the July 2005 Survey of Current Business; however, the estimates on the proportion of income remitted cannot be directly tracked to these source documents. Although this approach is more transparent than the prior approach of relying primarily on LPS1 and LPS2, BEA’s estimate is still affected by its judgment in incorporating information from the academic studies it now uses and by the assumptions it makes in its model. For example, two of BEA’s assumptions are that the proportion of income remitted is higher for U.S. residents from developing countries than developed countries, and that the percentage of the foreign born that remit is the same for all countries and varies only based on how long they have been in the United States. Our analysis suggests that the final BEA estimates of remittances are affected by these assumptions. We used a statistical technique that repeatedly and randomly samples from the underlying data to obtain the range containing 90 percent of possible estimates and determined this range to be $17.3 billion to $35.9 billion. See appendix III for the analysis we used to determine these ranges. 
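The resampling analysis described above (detailed in appendix III) can be sketched in miniature. The sketch below is illustrative only: the population figure, income figure, and the ranges for the uncertain rates are invented placeholders, not GAO's or BEA's data.

```python
import random

def remittance_estimate(pop, share_remitting, avg_income, share_of_income_remitted):
    """BEA-style calculation: remitters multiplied by per capita remittance."""
    return (pop * share_remitting) * (avg_income * share_of_income_remitted)

def simulate_range(n=10_000, seed=42):
    """Repeatedly draw the uncertain inputs at random and return the bounds of
    the central 90 percent of the resulting estimates (5th and 95th percentiles)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n):
        share_remitting = rng.uniform(0.40, 0.70)   # invented placeholder range
        share_of_income = rng.uniform(0.04, 0.10)   # invented placeholder range
        estimates.append(remittance_estimate(30e6, share_remitting, 45_000, share_of_income))
    estimates.sort()
    return estimates[int(0.05 * n)], estimates[int(0.95 * n)]

low, high = simulate_range()
```

The width of the resulting interval shows how sensitive a point estimate is to the assumed remitting rates, which is the point of the sensitivity analysis in appendix III.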
Remittance estimation in the balance of payments framework generally separates remitters by their length of residency in host countries. All remittances are presumed to be sent by the foreign born who have been in the host country for more than one year, while those who have been in a country for less than a year are presumed to be temporary, earning only compensation. For this reason, some experts compile remittances as the sum of (1) the remittances sent by those in the country for more than a year and (2) the compensation of those in the country for less than a year. In its description of its revised methodology, BEA states that it excludes transfers by the foreign born who have been in the United States for less than 1 year from its measure of remittances; however, BEA uses a U.S.-residency-duration grouping of 0-5 years in its personal remittances calculation. It thus includes both employees who are in the United States for less than or equal to 1 year, and migrants who are in the United States for more than a year, in its estimates of personal remittances. Our analysis determined that BEA’s estimates of remittances are therefore potentially overstated by up to $377 million because they include estimates for approximately 467,000 foreign-born individuals who were in their first year of residency in the United States, according to 2003 ACS data. Some central banks and the IDB use a variety of methodologies and data sources to estimate remittances. The central banks of Mexico and the Philippines, two of the major recipients of remittances from the United States, track funds coming into their countries. The IDB, a multilateral organization that provides financing for economic, social, and institutional development projects for Latin America and the Caribbean, estimates remittances on a regional basis—primarily through the use of surveys. 
The remittance estimates produced by these methodologies vary from each other and from BEA’s estimates, further illustrating the dependency of estimates on their methods and data. The Central Bank of Mexico, known as the Banco de México (Banxico), tracks remittance flows to Mexico with the help of a regulatory reporting requirement on money transmitters. Since 2003, Mexico’s methodology for estimating remittances has required firms that receive remittances to report, on a monthly basis, the amount of money received and the number of transactions conducted between the United States and Mexico. A Banxico official stated that the firms’ systems that channel the information to Banxico are designed to transfer money from person to person and that the firms determine whether a transaction is a person-to-person transfer. He stated that these systems are not well suited to commercial transactions and that the likelihood of other types of transactions entering the systems is negligible because the systems were designed for personal remittances. The Banxico official stated that Banxico is confident in its estimates because it believes the vast majority of firms (about 90 percent) are reporting and, while some transactions that are not personal remittances may be getting through, these represent a very small portion. To track remittances sent through informal channels such as couriers, Banxico conducts a survey at the U.S.-Mexico border of Mexicans returning to visit relatives. The survey asks questions about funds and goods they are bringing to relatives. However, according to the Banxico official, these individuals are often reluctant to answer these questions. The Philippine government has established a formal program whereby it registers and tracks its resident Overseas Filipino Workers (OFW). This program provides data to the government on the type of employment these workers obtain as well as their salaries. 
The Philippine central bank, known as the Bangko Sentral ng Pilipinas (BSP), estimates remittances channeled into banks, which are already net of living expenses of these workers. However, BSP officials caution that the country source data are not truly reflective of remittances coming from a country, particularly from the United States, because most remittance centers for OFWs (e.g., Saudi Arabia, Japan, and Taiwan) send funds through correspondent banks in the United States, which then send the funds to banks in the Philippines. The BSP only captures the most immediate source of OFWs’ funds coming into the Philippines, primarily U.S. correspondent banks. Thus, this methodology overstates the funds being remitted from the United States to the Philippines because it includes funds from other countries, not just from Filipino workers in the United States. The BSP also recently revised its methodology to track remittances that flow outside of banks using results of the Survey of Overseas Filipinos. Specifically, these remittances are funds sent by OFWs through friends and relatives, or amounts brought in by OFWs when they return home. This revision caused the BSP to increase its overall estimate of remittances into the Philippines by $1.7 billion (20 percent) in 2004. BSP officials stated that they are in the process of updating prior years’ figures. The primary advantages of these tracking methodologies are that they capture actual or projected remittance flows, as well as rapid or sudden changes in the characteristics of remitters—such as the average amount remitted or the frequency of remitting. However, these methods are limited in their ability to capture remittances sent through the informal sector and to distinguish between personal remittances and other types of personal business transactions when money transfer operators and banks do not correctly code the remittance transactions. 
Since the year 2000, the Multilateral Investment Fund (MIF) of the IDB has been studying the issue of remittances and their impact on the development of the Latin American and Caribbean region. In addition to using its own researchers, MIF’s methodology uses remittance information collected by other researchers. The IDB remittance estimates for selected Latin American and Caribbean countries are obtained from a combination of sources consisting of estimates from selected central banks of recipient member countries judged to have reasonable remittance estimates, transaction information from remittance transfer companies to selected countries, and information obtained from surveys of remittance senders in the United States and remittance recipients in Latin American and Caribbean countries. IDB officials stated that they compare the remittance estimates that they derive from their surveys of remittance recipients in Latin America and the Caribbean with the estimates from the central banks of these countries. These officials also stated that these surveys have allowed them to estimate remittances these countries have received from the United States. According to IDB officials, for countries for which they have not conducted an in-country survey, they use data collected from establishments that facilitate money transfers to each country. These officials indicated that data were obtained from a sample of 45 money-transfer businesses involving approximately 14 countries. The amount and frequency of the average remittance sent by residents from the survey countries was used to estimate the total remittance outflow to each country, according to IDB officials. They also indicated that MIF staff work with the researchers to reconcile the various estimates and arrive at country-specific estimates they believe are fairly accurate. For a more detailed description of IDB’s methodology, see appendix IV. 
The advantage of using this method to estimate remittances is that the information is obtained from establishments that have a vested interest in maintaining accurate data on the amount and volume of remittances. However, estimates relying on reporting of information from remittance providers in the formal financial sector—such as money transfer operators—cannot account for remittances sent through the informal sector (e.g., by couriers or hawalas). In addition, they may not be able to distinguish between personal remittances and other types of personal business transactions if the money transfer operators and banks do not code the remittance transactions correctly. Although the consumer surveys IDB used to derive its estimates collect information directly from remittance senders and receivers, such surveys are difficult to administer because remittance senders may be reluctant to participate in the surveys due to language barriers, legal status, and lack of experience with institutions that administer surveys. IDB officials also stated that surveys only reach individuals with telephones. In addition, with these surveys there often is a discrepancy between the amount of funds remittance senders claim to send and the amount remittance recipients claim to receive. Finally, these surveys can be more costly due to the need to hire experienced survey firms with bilingual staff. The central banks of Mexico and the Philippines, the IDB, and BEA use different methodologies to estimate remittances, resulting in a range of estimates. For example, in 2003, the Mexican central bank estimated that Mexico received about $13.4 billion in remittances from the United States and the IDB estimated that Mexico received almost $12.9 billion in remittances from the United States. In 2003, BEA estimated the amount of remittances from the United States to Mexico at $8.9 billion. 
In terms of remittances from the United States to Latin America and the Caribbean, in 2003, the IDB estimated this to be $30.1 billion. Although BEA does not publish remittance estimates by region, we aggregated BEA’s country-by-country tabulations to estimate remittances to Latin America and the Caribbean, and found this to be $17.9 billion. We found that the large discrepancies between the IDB’s and BEA’s estimates for Latin America and the Caribbean were primarily due to differences in population size, the percentage of persons that remit, and the average remittance amount per year each used. Our analysis of BEA’s estimates of remittances from the United States in 2003 to 21 countries for which IDB also makes estimates shows that BEA assumes that 54 percent of the foreign-born population remits an average of $2,076 per year, as shown in table 3. BEA assumes that the percentage of adult foreign born that remit varies by duration of stay and the absence or presence of children in the household. To determine the $2,076 that is, on average, remitted per year, we used information from BEA’s underlying tabulations and calculated the average remittance per person for the 21 countries. BEA assumes that the percentage of income remitted varies by the presence or absence of children, the type of countries of birth (according to economic development), and proximity to the United States. In contrast, based on our analysis of IDB’s survey results, 70 percent of adult foreign-born Hispanics remit and, on average, they remit $3,024 per year, as shown in table 3. BEA is involved in international efforts that began in January 2005 to improve the collection and reporting of remittance data; however, it is too early to tell how successful these initiatives will be. Currently, remittance data are incomplete and cannot be reconciled because of inconsistency in the various institutions’ methods of collecting and reporting remittance data. 
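The effect of the differing assumptions shown in table 3 can be illustrated with back-of-the-envelope arithmetic. The population figure below is hypothetical, chosen so that the BEA side roughly reproduces the $17.9 billion aggregate; the remitting shares and annual amounts are the table 3 figures discussed above.

```python
# Hypothetical adult foreign-born population from the 21 compared countries;
# the shares and annual amounts are the table 3 figures.
population = 16_000_000

bea_total = population * 0.54 * 2_076   # BEA: 54% remit, $2,076 per year on average
idb_total = population * 0.70 * 3_024   # IDB: 70% remit, $3,024 per year on average

ratio = idb_total / bea_total           # roughly 1.9x higher under IDB's assumptions
```

The remaining gap between this hypothetical IDB-side total and IDB's published $30.1 billion reflects the population-size differences that the report also notes.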
Recognizing the importance of remittances and the need for improved data, the governments of the G8 at the Sea Island Summit in 2004 called for the establishment of a working group to improve remittance statistics. BEA is an active member of an international group supporting this effort, which recommended an agreed-upon definition of remittances. In June 2006, a new group will also start an effort to improve guidance on collecting and reporting remittance data. BEA expects to be invited to serve on this group. The international estimates of remittances vary by the methods used and the coverage, quality, and reliability of the data, making comparisons of such estimates difficult. In principle, the combined inflows and outflows for all countries should equal zero—as the outflows from one country or international organization become the inflows of another. However, many countries do not provide information on both remittance inflows and outflows, resulting in global remittance figures that do not reconcile. Figure 4 shows the remittance inflows (credits) and outflows (debits) from 1990 through 2003. If global remittance figures reconciled, the lines in this figure would be the same. However, as can be seen from the figure, while the lines were fairly close prior to 1998, since then they have diverged, with countries showing remittance inflows (primarily developing countries) larger than remittance outflows (primarily developed countries). The IMF accepts member countries’ estimates of remittances at face value because, according to IMF officials, all methods of estimating remittances have their weaknesses. According to IMF officials, the choice of methodology is primarily related to the availability of resources. IMF officials indicated that they were not aware of any country that has institutionalized household surveys to generate remittance data. Remittance estimates submitted by IMF member countries do not reveal the methodologies used for the estimates. 
However, according to IMF officials, most countries report their remittances as residuals of existing data; others simply do not report remittances. In 2004, at the annual G8 meeting in Sea Island, Georgia, leaders of the G8 countries recognized the important role remittances play and called upon international financial institutions such as the World Bank and the IMF to lead a global effort to improve remittance statistics. As a result, the World Bank, IMF, and the United Nations formed the International Working Group on Improving Data on Remittances. This group delegated the tasks of clarifying concepts and definitions on remittances and addressing compilation issues to other groups. The working group met in January 2005 and included BEA and representatives from key remittance-sending countries, one key remittance-receiving country, and the Organization for Economic Cooperation and Development. The working group’s first objective was to clarify the definition of remittances. The group agreed that the United Nations Technical Subgroup on the Movement of Natural Persons (TSG), of which BEA is a member, should be the forum to discuss improvements in concepts and definitions for remittances. The TSG recommended, among other things, that the “workers’ remittances” item in the balance of payments be replaced with a new component called “personal transfers,” which would include all current transfers (in cash or in kind) sent or received by resident households to or from nonresident households. This new component would not be based on employment or migration status and would resolve the inconsistencies associated with “workers’ remittances.” This new definition was discussed at the June 2005 meeting of the IMF Committee on Balance of Payments Statistics. BEA officials stated that they have begun using this new definition; however, it will be included in the publication of the revised Balance of Payments Manual, which is scheduled to be completed in 2008. 
The second objective of the working group was to improve guidance on collecting and compiling remittance statistics, including the use of household surveys, if needed. The working group agreed that it would be useful to form a core group of compilers to review methods and develop more detailed guidance for compiling remittance data. Eurostat, the statistical office of the European Communities, offered to host the first meeting in June 2006 in Luxembourg, thereby creating the “Luxembourg Group,” which includes the World Bank and IMF’s statistics department. The Luxembourg Group will review, among other things, the extent to which household survey data can be used to improve balance of payments statistics. BEA expects to be invited to serve on this group. According to the IMF, the prerequisite to the group’s success is the commitment of national compilers to share their methodologies. The progress of this group will be reviewed by the IMF Committee on Balance of Payments Statistics, of which BEA is a member. No date has been set for this group to complete its work. In the meantime, the international working group will coordinate with a recent project conducted by the Center for Latin American Monetary Studies to improve central bank remittance reporting and procedures. This project is supported by the MIF. The final report of the working group is to be presented by the end of September 2006, so that the initial work of the Luxembourg Group can be incorporated. In recent years, remittances have received growing attention from policy makers because major industrial countries began to understand the magnitude and importance of these flows to developing countries. By their nature, remittance flows are difficult to measure. Some remittances move through informal channels that official data often cannot easily or reliably measure. Countries define remittances differently and use various methodologies to estimate them; it is therefore not surprising that estimates vary widely. 
Although there are international efforts in which BEA participates to improve remittance statistics, two issues suggest the challenges facing these efforts. First, current remittance data are incomplete globally and cannot be easily reconciled because of the inconsistency in the methods of collecting and reporting remittance data. Second, for source countries, remittances constitute a small share of their overall economy—thus there may not be enough incentive for these countries to improve their remittance estimates. For recipient countries, remittances constitute a larger share of the economy, but these countries lack the resources to improve their statistics. International efforts to improve remittance statistics have begun recently, and it is too soon to tell whether these efforts will improve the accuracy of remittance statistics. In the United States, remittance estimates are important for agencies such as Treasury and the Federal Reserve; more accurate remittance estimates could help them better target their financial infrastructure and automated-clearinghouse remittance programs. With better data on remittances, the U.S. government could make better decisions about how much (and what kind of) development assistance to provide, and U.S. companies could make better decisions regarding foreign direct investment. As remittance flows from the United States continue to grow, U.S. policy makers may want to explore options for improving the accuracy of U.S. remittance statistics—such as conducting a new survey to determine the remitting behavior of U.S. immigrants, or adding specific questions to current government surveys to obtain better information. The Departments of Commerce and the Treasury provided written comments on the draft report, which are reproduced in appendixes V and VI, respectively. Commerce also provided technical comments, which we incorporated into the report as appropriate. 
Treasury concurred with our observations, especially on the need for more accurate remittance data to provide policy makers with the information necessary to improve their decision-making process. Commerce concurred with most of our observations. Specifically, they concurred that estimates of remittances from the United States derived by BEA and those of foreign governments and international organizations differ substantially and that there are several methodological reasons for these differences. Commerce also concurred that more accurate estimates would enable users of remittance data to make better informed decisions. Commerce, however, stated its view that BEA’s estimates are lower than most of the others we discuss because we compare BEA’s estimate of personal gifts to foreign residents (personal transfers) with much broader estimates of remittances, which include compensation paid to foreign workers who are temporarily employed in the United States. Commerce believes that a substantial portion of the differences between BEA’s estimates and those of other government or international organizations is accounted for by this definitional difference. Contrary to Commerce’s view, compensation paid to foreign workers temporarily employed in the United States was not included in the remittances estimates with which we compared BEA’s personal transfers estimates. We therefore do not believe that the differences among the estimates we discuss in our report are due to this definitional difference. Commerce further stated that some countries may overestimate their receipts of remittances from the United States because remittances may be channeled through banks in the United States from remitters not living in the United States. Of the countries we discuss in this report, we found this only to be true for the Philippines and, for this reason, we do not compare BEA’s remittances estimate to that of the central bank of the Philippines. 
As we discuss in our report, efforts are underway to improve remittance statistics, which may help make estimates more comparable in the future. We are sending copies of this report to the Department of Commerce, Treasury, the Chairman and Ranking Minority Member of the House Committee on Financial Services, and other interested congressional committees. We will also make copies available to others on request. In addition, this report will be available at no cost on our Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-2717 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. Our reporting objectives were to examine (1) the methodology the Bureau of Economic Analysis (BEA) uses to develop the official U.S. estimates on the volume of remittances from the United States, (2) methodologies used by other countries and multilateral institutions to estimate remittances from the United States, and (3) international efforts to improve the collection and reporting of remittance data. To understand the methodology BEA used to derive its estimate of remittances from the United States, we met several times with BEA officials responsible for developing the estimate. They provided us with the 2003 estimate on the total volume of remittances from the United States to the rest of the world—and explained how they provide this number to the International Monetary Fund (IMF)—so that the U.S. figures can be presented in the IMF’s balance of payments statistics. We also obtained documentation describing BEA’s methodology before 2005, including BEA’s Survey of Current Business and other written documentation. BEA officials provided us with examples of the various data used in their model to calculate their remittance estimate. 
In addition, we provided BEA with numerous follow-up questions about their methodology, and they provided us with written responses. To understand BEA’s revised methodology, we obtained relevant documentation from BEA and provided follow-up questions to BEA. We also met with the U.S. Census Bureau to understand the data underlying BEA’s methodology for estimating remittances. To understand how we evaluated the statistical reliability of BEA’s estimate for 2003, see appendix III. We interviewed remittance experts from the IMF, World Bank, Inter-American Development Bank (IDB), and academia to obtain their views on BEA’s (and alternative) methodologies. To understand the methodologies used by other countries and multilateral institutions to estimate U.S. remittances to specific countries and regions, we met with officials from the IDB and its external consultant, the Asian Development Bank, the African Development Bank, as well as the Mexican and Philippine central banks. The IDB provided remittance estimates from the United States to specific countries in Latin America and the Caribbean, and to the region as a whole. The Asian and African Development Banks do not provide estimates for their respective regions. The Central Bank of Mexico provided estimates of remittances received by Mexico from the United States, while the Central Bank of the Philippines provided estimates of remittances received by the Philippines from the United States. In meetings with these entities, we obtained an understanding of the methodologies used to estimate remittances, the reasons for using these methodologies, and their strengths and potential limitations. We also obtained a report that described IDB’s methodology. Further, we obtained government regulations from Mexico and the Philippines to understand what financial institutions are required to report to central banks so that they can estimate remittances. 
To compare remittance estimates obtained from the Mexican Central Bank and the IDB with those of BEA, we obtained BEA’s 2003 estimates of remittances to specific countries. BEA officials cautioned us that the estimates to specific countries are less reliable than their overall remittance estimate and stated that these numbers should not be considered BEA estimates to specific countries. Given our understanding that remittance estimates vary for a number of reasons and that international efforts are under way to improve remittance statistics, it was not possible for us to cross-check the estimates of remittances from the United States against any accurate known amount. Because of this, for the purposes of this report, we focused on understanding the methodologies used by BEA, the IDB, and the central banks of Mexico and the Philippines to estimate remittances from the United States. We also focused on understanding the strengths and limitations of the methodologies of BEA and the other entities to obtain a better understanding of the reasonableness of their approaches to estimating remittances. We presented BEA’s estimates and the estimates of the IDB and the Central Bank of Mexico to show the range of estimates generated from different methodologies, rather than as a statement of their being precise measurements of remittances. We chose not to present the Central Bank of the Philippines’ estimate of remittances because central bank officials stated that their current methodology could not be used to report on remittances received solely from the United States. To obtain a global perspective on international efforts to improve the collection and reporting of remittances, we met with officials from the IMF, World Bank, IDB, Asian Development Bank, African Development Bank, and experts in the field of remittances. 
We reviewed IMF documents on remittances as they are discussed in the balance of payments framework and reviewed IMF balance of payments statistics to get a sense of which countries regularly report on remittances. We obtained limited documentation (e.g., minutes from meetings) on international efforts to improve the collection and reporting of remittances. BEA and the U.S. Department of the Treasury (Treasury) also provided us with descriptions of these international efforts and identified the U.S. government officials who participate in these international bodies. Our work was performed in San Francisco, California; and Washington, D.C., from December 2004 to March 2006 in accordance with generally accepted government auditing standards. BEA’s model to estimate remittances combines data on the number of the adult foreign-born population living in the United States, the percentage of the adult foreign-born population that remits, the income of the adult foreign-born population, and the percentage of income that is remitted by the adult foreign-born population. BEA first multiplies the foreign-born population, arrayed by selected demographic characteristics, by the percentage of the foreign-born population that remits to obtain the population of remitters. BEA then multiplies the average per capita income of the foreign-born population by the percentage of income remitted by those who remit to obtain per capita remittances. Finally, BEA multiplies per capita remittances by the population of remitters to obtain total personal transfers. BEA obtains estimates on the adult foreign-born population by place of birth and their average income from the American Community Survey (ACS), arranged by duration of stay in the United States, gender, and presence of children in the household. 
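The three multiplication steps just described can be sketched as follows. This is a minimal illustration of the calculation's structure only; the demographic cells and every figure in them are invented placeholders, not BEA data.

```python
# Illustrative sketch of the personal-transfers calculation described above.
# Each tuple is one hypothetical demographic cell:
# (population, share that remits, avg per capita income, share of income remitted)
cells = [
    (2_000_000, 0.60, 30_000, 0.08),   # e.g., recent arrivals, children abroad
    (3_500_000, 0.45, 42_000, 0.05),   # e.g., longer-duration residents
]

total = 0.0
for pop, share_remitting, income, share_of_income in cells:
    remitters = pop * share_remitting                   # step 1: population of remitters
    per_capita_remittance = income * share_of_income    # step 2: per capita remittances
    total += remitters * per_capita_remittance          # step 3: total personal transfers
```

In BEA's actual model, the cells are defined by duration of stay, gender, presence of children, and country grouping, with the rates drawn from the sources discussed in this report.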
BEA obtains estimates of the percentage of the adult foreign-born population that send remittances to their country of origin from various academic studies, in addition to the 1989 Legalized Population Survey (LPS1) and the 1992 Legalized Population Follow-Up Survey (LPS2); however, the estimates it uses cannot be directly tracked to these source documents. BEA obtains these proportions by making assumptions based on its judgment. BEA assumes that the place of birth of the adult foreign-born population does not affect the likelihood of remitting but that it does affect the percentage of income remitted. BEA also assumes that, once the presence of children in the household and the duration of stay are accounted for, men and women are equally likely to remit. In effect, under these assumptions, only the presence of children in the household and the duration of stay determine the percentage of the adult foreign-born population that remits to their countries of birth, as shown in figure 5. To determine the percentage of income that the adult foreign-born population remits, BEA makes assumptions about the development status and proximity of the country of origin of the adult foreign-born population, along with the presence of children in the U.S. household. BEA groups countries of origin into four categories indicating their propensity to send remittances: highest-remitting, high-remitting, medium-remitting, and low-remitting countries of birth. The highest-remitting countries are closest to the United States, while other developing countries are either high-remitting or medium-remitting, depending on their development status. Low-remitting countries are generally developed economies. Figure 6 shows that the percentage of income remitted varies by the presence of children and country groupings. Although average incomes are lower for women than for men, BEA assumes that the percentage of income remitted does not vary by gender.
Furthermore, BEA assumes that the duration of stay is negatively associated with the likelihood to remit but has no effect on the percentage remitted. BEA also assumes that there are no variations in the portion remitted for countries designated as low remitting. Table 4 shows the application of BEA's methodology in estimating remittances from the United States in 2003. As can be seen from table 4, estimated total remittances are $28 billion. Table 4 also shows that, in 2003, the Latin America and Caribbean region was the largest recipient region of remittances from the United States. Remittances to Asia and Africa represented approximately 24 percent and 4 percent of the U.S. total, respectively. BEA's revised methodology uses a U.S. residency duration of 0-5 years as its first category, which means that it includes both the foreign-born population who have been in the United States for a year or less and those who have been in the United States for more than a year. However, remittances are defined as the portion of income sent by those who have resided in the United States for more than one year, thus excluding the foreign-born population residing in the United States for less than one year. BEA's estimate of remittances is in effect overstated, because it includes the foreign-born population that has resided in the United States for less than a year. In contrast, "compensation of employees" is the wages and salaries earned by individuals in economies other than those in which they are residents. As a result, compensation of employees, which applies only to individuals away from their place of origin for less than a year, may be double counted. Furthermore, the inclusion of the foreign born who have resided in the United States for less than one year would overstate estimates of total remittances (personal remittances and compensation of employees), as some portion of the compensation of employees would be double counted.
BEA officials stated that their objective is to estimate remittances for individuals who have been in the United States for more than one year and those who have been in the United States for less than a year but intend to stay for more than a year. They stated that the ACS surveyed only individuals who indicated the United States is their "usual place of residence," which may exclude temporary residents, i.e., those who have been in the United States for less than a year. ACS documents show that individuals are surveyed at their "current residence" and that one of the goals of the ACS is to identify whether individuals are residing at the "current residence" or their "usual place of residence." Thus, the ACS does not exclude individuals for whom the United States is not their "usual place of residence." The ACS manual on residency rules states that the term "current residence" is unique to the ACS; most other surveys, including the decennial census, use "usual residence," defined as the place where a person lives and sleeps most of the time or considers to be his or her usual residence. ACS defines current residence as one place of residence at any point in time, but this residence does not have to be the same place throughout the year. The criteria used to determine a person's current residence are based upon a "2-month rule" stating that (1) if a person is staying in a sample unit at the time of the survey contact and is staying there for more than 2 months, he or she is a current resident of the unit; (2) if a person who usually lives in the unit is away for more than 2 months at the time of the survey, he or she is not a current resident of the unit; and (3) if anyone is staying in the unit at the time of contact who has no other place where they usually stay longer than 2 months, he or she is a current resident of the unit regardless of how long he or she is staying there.
We recalculated BEA's estimates of 2003 remittances excluding the foreign born who have resided in the United States for less than a year. This calculation resulted in a reduction of $377 million in BEA's 2003 estimate for remittances from the United States (see table 5). BEA publishes single-value estimates of remittances to the rest of the world by foreign-born U.S. residents. To evaluate the statistical reliability of the estimate for 2003, we derived the estimate's probable range and its corresponding breakdown into regional estimates. To accomplish this, we obtained details of BEA's underlying tabulations of remittances by country. We replicated the BEA methodology to obtain BEA's estimate for the world and for each country in its underlying tabulation. In particular, we used BEA's underlying tabulation and included additional information (e.g., the standard deviation and the shape of the distribution of each data series) from the sources that BEA primarily used to arrive at its estimate. We calculated the respective standard deviations of the values that BEA uses for the propensity to remit and the percentage of the foreign born that remit. BEA uses a variety of sources to estimate the propensity of the foreign born to remit and the percentage of the foreign born that remit. However, BEA stated that the values chosen cannot be linked to any specific source. BEA primarily used the LPS, a survey mandated by the Immigration Reform and Control Act of 1986, to estimate the portion of income that the foreign born in the United States were likely to remit; thus, we also relied on these data. We assumed that the distributions around the means of the variables used in the BEA methodology were lognormal to satisfy (1) the nonnegativity of the values used and (2) a desired bell-shaped distribution for the estimates.
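A minimal sketch of such an uncertainty analysis follows. The population, income, central values, and spreads below are hypothetical stand-ins, not the actual BEA inputs or the LPS-derived standard deviations; the sketch only illustrates how lognormal draws around judgmentally determined values translate into a range of estimates.

```python
import math
import random

random.seed(12345)

POPULATION = 10_000_000  # hypothetical adult foreign-born population
AVG_INCOME = 25_000      # hypothetical average income

def one_draw():
    # Lognormal draws keep both judgmental inputs nonnegative and
    # roughly bell-shaped; medians of 40 percent remitting and
    # 6 percent of income remitted are hypothetical central values,
    # with hypothetical spreads of 0.15 and 0.20 on the log scale.
    share_remitting = random.lognormvariate(math.log(0.40), 0.15)
    share_of_income = random.lognormvariate(math.log(0.06), 0.20)
    return POPULATION * share_remitting * AVG_INCOME * share_of_income

draws = sorted(one_draw() for _ in range(10_000))
point = POPULATION * 0.40 * AVG_INCOME * 0.06   # point estimate using the medians
low, high = draws[500], draws[9500]             # bounds of the middle 90 percent
print(f"point ${point/1e9:.1f}B; 90% of draws between "
      f"${low/1e9:.1f}B and ${high/1e9:.1f}B")
```

As in our analysis, the point estimate sits inside a band whose width reflects only the uncertainty assigned to the judgmentally determined variables.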
We converted the BEA estimation process from one that relied solely on the averages of the variables underlying the BEA methodology to one that accounts for the variation around the mean and its distribution. We used a Monte Carlo statistical technique, which repeatedly and randomly samples from the underlying data, to obtain a range of possible values for each estimate due to the uncertainty in BEA's judgmentally determined variables on the foreign-born propensities to remit and the percentage of the foreign born that remit. Table 6 shows the regional breakdown of BEA's 2003 estimate and the statistically derived range for these estimates. In table 6, the column labeled "BEA point estimate" shows the regional components of BEA's global estimate in 2003, obtained by aggregating the underlying country-by-country tabulations. The following two columns show the range of estimates obtained by our uncertainty analysis, assuming that this uncertainty is due only to BEA's judgmentally determined variables. As shown in table 6, BEA reported $28 billion in total remittances from the United States for 2003; however, we estimate that 90 percent of the remittance estimates from the United States would fall between $17.3 billion and $35.9 billion.

To estimate remittances from the United States to Latin America in 2003, the IDB contracted researchers to survey Latin Americans aged 18 years or older and living in the United States. These researchers queried Latin American immigrants living in various states of the United States about their remittance experiences. The survey interviewed 3,802 households in 37 states and the District of Columbia from January through April 2004. The survey showed that 61 percent of Latin Americans send remittances to their countries of origin, sending an average of $240 approximately 12.6 times per year.
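The survey figures cited above imply the back-of-the-envelope extrapolation sketched below. Multiplying the cited values lands near, but not exactly at, IDB's published $30.1 billion, which reflects more precise survey values and country-level adjustments than this simplified version.

```python
# Extrapolation of the IDB survey results to the immigrant population.
adult_latin_american_immigrants = 16_900_000  # IDB's 2003 population estimate
share_who_remit = 0.61                        # 61 percent send remittances
avg_amount = 240                              # average dollars per transfer
transfers_per_year = 12.6                     # average annual frequency

total = (adult_latin_american_immigrants * share_who_remit
         * avg_amount * transfers_per_year)
print(f"${total/1e9:.1f} billion")  # $31.2 billion, near IDB's published $30.1 billion
```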
IDB extrapolated the results of the survey to the total population of adult Latin American immigrants in the United States, estimated at 16.9 million in 2003, and estimated remittances from the United States to Latin America to be $30.1 billion for that year. Figure 7 provides a diagram of the methodology IDB used to arrive at the $30.1 billion estimate. According to IDB, the estimate captured remittance flows through the formal and informal sectors. The IDB also used the survey to estimate remittances from each of the 37 states and the District of Columbia. To obtain the state-by-state remittance estimates, the IDB obtained estimates for the average amount remitted and the number of times sent in one year by the Latin American immigrant population in each state and the percentage of the Latin American immigrant population in each state that sends remittances. The IDB remittance estimates for selected Latin American and Caribbean countries are obtained from a combination of sources: estimates from selected central banks of recipient member countries judged to have reasonable remittance estimates, transaction information from remittance transfer companies serving selected countries, and information obtained from researchers' surveys of remittance senders in the United States and remittance recipients in Latin American and Caribbean countries. According to IDB officials, for countries where no in-country survey has been conducted, data from establishments facilitating money transfers to each country were used. These officials indicated that data were obtained from a sample of 45 money transfer businesses sending funds to approximately 14 countries. The amount and frequency of the average remittance sent by residents from the surveyed countries was used to estimate the total remittance outflow to each country, according to IDB officials.
They also indicated that Multilateral Investment Fund (MIF) staff work with the researchers to reconcile the various estimates and arrive at country-specific estimates. Table 7 shows the IDB estimates of remittances that 21 Latin American and Caribbean countries received in 2003, both in total and from the United States. As indicated earlier, the IDB and BEA used different methodologies to estimate remittances, resulting in a range of estimates. While, in most cases, BEA provides only a global estimate of remittances and not bilateral estimates, BEA provided us with country-by-country tabulations that enabled us to construct estimates for the same 21 countries for which IDB provided estimates in 2003. As shown in table 8, IDB's and BEA's estimates vary; IDB's estimates in general tend to be higher than estimates from BEA's underlying country tables. However, for Guyana, Panama, and Trinidad and Tobago, BEA's estimates are higher. The last column computes the difference between the estimates for each country as a percentage of the average of the estimates. The average percentage difference is 72 percent, with a low of 7 percent for Jamaica and a high of 168 percent for Brazil.

The following are GAO's comments on the Department of Commerce's March 10, 2006, letter.

1. BEA commented on the Highlights page that the IDB estimates differ from BEA's estimates because the IDB estimate includes "net compensation" of foreign workers and the BEA estimate does not. BEA also commented that data provided by foreign central banks and financial establishments are sometimes overstated because U.S. correspondent banks are used in transmitting funds for senders not living in the United States. We disagree with BEA on these points. This "net compensation" of foreign workers is a new concept that was just proposed by the Technical Subgroup on the Movement of Natural Persons (TSG) in June 2005, and we are not aware of any remittance estimates for 2003 that use this definition.
Further, IDB never stated that any of the funds accounted for in their estimates came through U.S. correspondent banks for workers who were not located in the United States. This was true for the Philippines, which we noted in the report. BEA also commented that IDB's estimates are substantially derived from data reported from central banks and private money transfer establishments. BEA is correct on the latter point, and we have corrected the Highlights page to be consistent with the letter and reflect that IDB uses a variety of sources in making its remittance estimates.

2. BEA suggested that we place Mexico in North America or create a separate bar in our graphic in the Highlights page for Mexico. In this report, we used the United Nations' Standard Country and Area Codes Classification, which places Mexico in Central America.

3. BEA commented that to develop an estimate that corresponds to our definition of remittances, we should have used BEA's estimates of personal transfers and compensation of employees, net of their expenditures. However, we make it clear in footnote 6 that we are focusing only on personal transfers and that we call these remittances for the purpose of this report.

4. BEA states that it has confirmed with the Bank of Mexico that Mexico's estimates of remittances include net compensation of migrant Mexican workers in the United States. BEA states that if we added BEA's net compensation of employees figure to its estimate of personal transfers, the two figures for 2003 would be closer. As stated above, this new definition was proposed in June 2005, and, to our knowledge, the Mexican central bank has not published 2003 figures for "net compensation" of employees. The Mexican central bank figures for 2003, as reported by the IMF in its balance of payments statistics, are almost $13.4 billion for workers' remittances, which we use in our report, and $1.5 billion in compensation of employees.
The $12.9 billion estimate BEA attributes in its comments to the Mexican central bank is the IDB's estimate.

5. BEA commented that the data used in our analysis of the potential effects of BEA's judgmentally determined values in its remittance estimating methodology are unclear, as are the particulars of our modeling technique. As we stated, we replicated BEA's methodology using its underlying tabulation of remittances by country and included additional information from the sources that BEA primarily used to arrive at its estimate. BEA further stated that there is a very small probability that the BEA estimate would be near the end points of the intervals and suggested that we use the midpoint of the intervals instead. As explained in appendix III, the purpose of our analysis was to show the effect of BEA's judgmentally determined values on its estimate of $28.03 billion in 2003. Using a range illustrates the uncertainty in BEA's estimate. BEA also commented on our use of the lognormal distribution for the percentage of income remitted and the percentage of the adult foreign-born population that remits. We chose the lognormal distribution because it satisfied the requirements that both of these variables were nonnegative and distributed in a bell-shaped curve.

6. BEA commented that we left the impression that BEA's estimates of personal transfers contain a double count of $377 million and that any double count that may exist probably involves the compensation of employees, not the personal transfers account. We modified the text of our report to reflect that BEA's personal transfers are therefore potentially overstated by up to $377 million because BEA's estimate includes remittances sent by some of the foreign born who have been in the United States for less than one year.

7. Commerce reiterated its concerns about our comparison between BEA's estimates and those of other organizations.
Commerce restated its view that the methods used by the Mexican central bank and others capture both remittances and compensation of employees and further stated that BEA's estimates for personal transfers and compensation of employees should be summed when making these comparisons to other organizations. However, none of the organizations with which we compare BEA's estimates indicated that their methods captured compensation of employees; therefore, we believe our comparisons are appropriate.

8. BEA states that the TSG now recommends that "personal transfers" also include capital transfers. This is incorrect. The paper BOPCOM-05/9 states that the TSG agreed to define "personal transfers" as consisting of all current transfers in cash or in kind.

9. BEA disagreed with our statement that remittance data cannot be reconciled and stated that, because reconciliation projects are resource intensive and difficult, BEA must choose the statistical items it reconciles and with which trading partners. We concur that reconciliation cannot be done easily. However, our observations were on reconciliation of remittance data on a global level, not between individual countries, as shown in figure 4. The global discrepancy has grown in recent years.

In addition to the contact named above, Barbara I. Keller, Assistant Director; Gezu Bekele; Tania Calhoun; Lynn Cothern; William R. Chatlos; Bruce L. Kutnick; James M. McDermott; Marc M. Molino; José R. Peña; and Rachel Seid made key contributions to this report.
Remittances are the personal funds that the foreign born send to their home countries. In recent years, estimated remittances have grown dramatically, and policy makers have increased their attention to these flows. Organizations use various methodologies to estimate remittance flows, which result in a range of estimates. In 2004, the Group of Eight (G8) leaders emphasized the need for improved statistical data on remittances. In light of the growing volume of remittances and the differences in estimates, GAO examined (1) the methodology that the Bureau of Economic Analysis (BEA) uses to develop the official U.S. estimate, (2) methodologies that other countries and multilateral organizations use to estimate remittances, and (3) international efforts to improve the collection and reporting of remittance data.

BEA uses a model to estimate remittances from the United States and, although the methodology has some strengths, the accuracy of BEA's estimate is uncertain for several reasons. BEA estimated remittances for 2003 at $28.2 billion; its model used data on the number of foreign-born residents, their income, the proportion of income that is remitted, and other demographic data. The strengths of BEA's methodology are that, in theory, it estimates remittances sent through formal and informal channels. It also is low-cost because it uses existing data on the foreign born. However, BEA's methodology was limited by the quality and timeliness of the data, particularly on the portion of income likely to be remitted. BEA revised its model in 2005 to use new data sources, but the accuracy of its estimates depends on the accuracy of its assumptions regarding the remitting behavior of the foreign born and other factors.

Some central banks and the Inter-American Development Bank (IDB) use different methodologies to provide estimates of remittances from the United States that vary significantly.
For example, Mexico's central bank estimates remittances primarily by collecting data from money transmitters. The IDB used a variety of sources, such as surveys of remittance senders and receivers, and information from remittance transfer companies and central banks, to estimate remittances from the United States to Latin America to be $30.6 billion in 2003. We aggregated BEA's data to estimate remittances to this region to be $17.9 billion. BEA is an active participant in recent international efforts to improve remittance statistics. The World Bank and others established a remittances working group in 2005, which delegated tasks to other international groups to (1) clarify the definition of remittances and (2) provide guidance on how to collect and estimate remittances. BEA participated in the first group, which recommended a new definition of remittances. The second group will have its first meeting in June 2006.
As we testified in July 2001, controls over grant and loan disbursements did not include a key edit check or follow-up process that would help identify schools that were disbursing Pell Grants to ineligible students. To identify improper payments that may have resulted from the absence of these controls, we performed tests to identify students 70 years of age and older because we did not expect large numbers of older students to be receiving Pell Grants, and in 1993, we identified abuses in the Pell Grant program relating to older students. Based on the initial results of our tests and because of the problems we identified in the past, we expanded our review of 7 schools that had disproportionately high numbers of older students to include recipients 50 years of age and older. We found that 3 schools fraudulently disbursed about $2 million of Pell Grants to ineligible students, and another school improperly disbursed about $1.4 million of Pell Grants to ineligible students. We also identified 31 other schools that had similar disbursement patterns to those making the payments to ineligible students. These 31 schools disbursed approximately $1.6 million of Pell Grants to potentially ineligible students. We provided information on these schools to Education for follow-up. Education staff and officials told us that they have performed ad hoc reviews in the past to identify schools that disbursed Pell Grants to ineligible students and have recovered some improper payments as a result. However, Education did not have a formal, systematic process in place specifically designed to identify schools that may be improperly disbursing Pell Grants. In September 2001, we issued an interim report in which we recommended that the Secretary of Education (1) establish appropriate edit checks to identify unusual grant and loan disbursement patterns and (2) design and implement a formal, routine process to investigate unusual disbursement patterns identified by the edit checks. 
In our July 2001 testimony, we told you that Education decided to implement a new edit check, effective beginning with the 2002-2003 school year, to identify students who are 85 years of age or older. We explained that we believed the age limit was too high and would exclude many potentially ineligible students. Education subsequently lowered the age limit for that edit to 75 years of age or older. If the student's date of birth indicates that he or she is 75 years of age or older, the system edit will reject the application, and the school will not be authorized to give the student federal education funds until the student either submits a corrected date of birth or verifies that it is correct. However, without also looking for unusual patterns and following up, the edit may not be very effective, other than to correct data entry errors or to confirm the ages of older students applying for aid. Education is also in the process of implementing a new system, called the Common Origination and Disbursement (COD) system, which is to become effective starting this month. Education officials told us that this integrated system will replace the separate systems Education has used for Pell Grants, direct loans, and other student aid information, and it will integrate with applicant data in the application processing system. The focus of COD is to improve program and data integrity. If properly implemented, a byproduct of this new system should be improved controls over grant and loan disbursements. According to Education officials, they will be able to use COD to identify schools with characteristics like those we identified. However, until there is a mechanism in place to investigate schools once unusual patterns are identified, Education will continue to be vulnerable to the types of improper Pell Grant payments we identified during our review. We identified over $32 million of other potentially improper grant and loan payments.
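The revised age edit described above can be illustrated with a short sketch. The function names and structure are our own illustration, not the actual application system logic.

```python
from datetime import date

AGE_EDIT_THRESHOLD = 75  # Education's revised limit, lowered from 85

def age_on(birth_date, as_of):
    """Age in whole years on a given date."""
    years = as_of.year - birth_date.year
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def apply_age_edit(birth_date, as_of):
    """Reject the application, pending a corrected or verified date of
    birth, when the applicant appears to be 75 or older. (A simplified
    sketch of the edit, not the actual system implementation.)"""
    return "reject" if age_on(birth_date, as_of) >= AGE_EDIT_THRESHOLD else "accept"

print(apply_age_edit(date(1925, 6, 1), date(2002, 4, 1)))  # reject (age 76)
print(apply_age_edit(date(1980, 6, 1), date(2002, 4, 1)))  # accept (age 21)
```

As the text notes, an edit of this kind mainly catches data entry errors; it must be paired with pattern monitoring and follow-up investigation of schools to be effective against improper disbursements.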
Based on supporting documentation provided to us by Education, we determined that over $21 million of these payments were proper. However, because Education did not provide adequate supporting documentation, we were unable to determine the validity of about $12 million of these transactions or conclude on the effectiveness of the related edit checks. While the amount of improper and potentially improper grant and loan payments we identified is relatively insignificant compared to the billions of dollars disbursed for these programs annually, it represents a control risk that could easily be exploited to a greater extent. During our investigation of potentially improper transactions, we found that two students submitted counterfeit Social Security cards and fraudulent birth certificates along with their applications for federal education aid, and they received almost $55,000 in direct loans and Pell Grants. The U.S. Attorney’s Office is considering prosecuting these individuals. During our tests to determine the effectiveness of Education’s edit checks, we also found data errors, such as incorrect social security numbers (SSN) of borrowers, in the Loan Origination System (LOS), which processes all loan origination data received from schools. Such errors could negatively affect the collection of student loans because without correct identifying information, Education may not be able to locate and collect from borrowers when their loans become due. We reviewed data for more than 1,600 loans and determined that for almost 500 of these loans, the borrowers’ SSNs or dates of birth were incorrect in LOS. During the application process, which is separate from the loan origination process, corrections to items such as incorrect SSNs are processed in the Central Processing System (CPS); however, these corrections are not made to data in LOS. The new COD system discussed earlier may alleviate this situation. 
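A cross-system consistency check of the kind we performed can be sketched as follows; the record layouts and values are invented for illustration and do not reflect the actual CPS or LOS data structures.

```python
def find_mismatches(cps_records, los_records):
    """Flag loan IDs whose borrower SSN or date of birth in the loan
    origination records disagrees with the (corrected) applicant
    records. Both arguments are dicts keyed by loan ID with 'ssn'
    and 'dob' fields (hypothetical layout)."""
    mismatches = []
    for loan_id, cps in cps_records.items():
        los = los_records.get(loan_id)
        if los and (los["ssn"] != cps["ssn"] or los["dob"] != cps["dob"]):
            mismatches.append(loan_id)
    return mismatches

# Corrected applicant data vs. stale loan origination data.
cps = {"L1": {"ssn": "123-45-6789", "dob": "1980-06-01"},
       "L2": {"ssn": "987-65-4321", "dob": "1975-01-15"}}
los = {"L1": {"ssn": "123-45-6789", "dob": "1980-06-01"},
       "L2": {"ssn": "987-65-4312", "dob": "1975-01-15"}}  # transposed SSN digits

print(find_mismatches(cps, los))  # ['L2']
```

Loans flagged this way are the ones whose collection could be impeded, since the servicer may be unable to locate the borrower with incorrect identifying data.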
If this system works as intended, student data should be consistent among all of the department’s systems, including CPS and LOS, because it will automatically share corrected data. However, until the new system is fully implemented, errors in LOS could impede loan collection efforts. As we testified in April and July 2001, significant internal control weaknesses over Education’s process for third party drafts markedly increased the department’s vulnerability to improper payments. Although segregation of duties is one of the most fundamental internal control concepts, we found that some individuals at Education could control the entire payment process for third party drafts. We also found that Education employees circumvented a key computer system application control designed to prevent duplicate payments. We tested third party draft transactions and identified $8.9 million of potential improper payments, $1.7 million of which remain unresolved because Education was unable to provide us with adequate supporting documentation. Education has referred the $1.7 million to the OIG for further investigation. Because of the risks we identified in the third party draft payment process, and in response to a letter from this subcommittee, Education took action in May 2001 to eliminate the use of third party drafts. In our July 2001 testimony before this subcommittee, we described internal control weaknesses over Education’s purchase card program, including lack of supervisory review and improper authorization of transactions. We found that Education’s inconsistent and inadequate authorization and review processes for purchase cards, combined with a lack of monitoring, created an environment in which improper purchases could be made with little risk of detection. 
Inadequate control over these expenditures, combined with the inherent risk of fraud and abuse associated with purchase cards, resulted in fraudulent, improper, and questionable purchases, totaling about $686,000, by some Education employees. During the time of our review, Education's purchase card program was operating under policies and procedures that were implemented in 1990. The policy provided very limited guidance on what types of purchases could be made with the purchase cards. While the policy required each cardholder and approving official to receive training on their respective responsibilities, we found that several cardholders and at least one approving official were not trained. In addition, we found that only 4 of Education's 14 offices required cardholders to obtain authorization prior to making some or all purchases, although Education's policy required that all requests to purchase items over $1,000 be made in writing to the applicable department Executive Officer. We also found that approving officials did not use monitoring reports that were available from Bank of America to identify unusual or unauthorized purchases and that only limited use was made of available mechanisms to block specific undesirable Merchant Category Codes (MCC). These factors combined to create a lax control environment for this inherently risky program. Education officials told us the department relied on the approving official's review of the cardholder's monthly purchase card statements to ensure that all purchases made by employees were proper. We tested the effectiveness of the approving officials' review of 5 months of cardholder statements. We reviewed all 903 monthly statements that were issued during these months, totaling about $4 million, and found that 338, or 37 percent, totaling about $1.8 million, were not approved by the appropriate approving official.
To determine whether improper purchases were made without being detected, we requested documentation supporting the $1.8 million of purchases that were not properly reviewed. We also requested documentation for other transactions that appeared unusual. We reviewed the documentation provided by Education and identified some fraudulent, improper, and questionable purchases, which I will discuss in a moment. We considered fraudulent purchases to be those that were unauthorized and intended for personal use. Improper purchases included those for government use that were not, or did not appear to be, for a purpose permitted by law or regulation. We also identified as improper purchases those made on the same day from the same vendor that appeared to circumvent cardholder single purchase limits. We defined questionable transactions as those that, while authorized, were for items purchased at an excessive cost, for a questionable government need, or both, as well as transactions for which Education could not provide adequate supporting documentation to enable us to determine whether the purchases were valid. We found one instance in which a cardholder made several fraudulent purchases from two Internet sites for pornographic services. The purchase card statements contained handwritten notes next to the pornography charges indicating that these were charges for transparencies and other nondescript items. According to the approving official, he was not aware of the cardholder’s day-to-day responsibilities and did not feel that he was in a position to review the monthly statements properly. The approving official stated that the primary focus of his review was to ensure there was enough money available in that particular appropriation to pay the bill. As a result of investigations related to these purchases, Education management issued a termination letter that prompted the employee to resign. We identified over $140,000 of improper purchases. 
For example, one employee made improper charges totaling $11,700 for herself and a coworker to attend college classes that were unrelated to their jobs at the department. We also identified improper purchases totaling $4,427 from a restaurant in San Juan, Puerto Rico. These restaurant charges were incurred during a Year 2000 focus group meeting, and included breakfasts and lunches for federal employees and nonfederal guests. Education, however, could not provide us with any evidence that the nonfederal attendees provided a direct service to the government, which is required by federal statute in order to use federal appropriated funds to pay for the costs of nonfederal individuals at such meetings. We have referred this matter to Education’s OIG. Other examples of improper purchases we identified include 28 purchases totaling $123,985 where Education employees made multiple purchases from a vendor on the same day. These purchases appear to violate the Federal Acquisition Regulation provision that prohibits splitting purchases into more than one segment to circumvent single purchase limits. For example, one cardholder purchased two computers from the same vendor at essentially the same time. Because the total cost of these computers exceeded the cardholder’s $2,500 single purchase limit, the total of $4,184.90 was split into two purchases of $2,092.45 each. In some instances, Education officials sent memos to the offending cardholders reminding them of the prohibition against split purchases. We identified five additional instances, totaling about $17,000, in which multiple purchases were made from a single vendor on the same day. Although we were unable to determine based on the available supporting documentation whether these purchases were improper, these transactions share similar characteristics with the 28 split purchases. 
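The split-purchase pattern described above, in which two same-day charges to one vendor each stay under the $2,500 single purchase limit while their combined total exceeds it, can be screened for mechanically. The following is an illustrative sketch only; the records, names, and field layout are hypothetical, not Education's actual data:

```python
from collections import defaultdict

SINGLE_PURCHASE_LIMIT = 2500.00  # the cardholder limit cited in this testimony

# Hypothetical transactions: (cardholder, vendor, date, amount)
transactions = [
    ("A. Smith", "PC Vendor", "1999-03-02", 2092.45),
    ("A. Smith", "PC Vendor", "1999-03-02", 2092.45),
    ("B. Jones", "Bookstore", "1999-03-02", 150.00),
]

def flag_possible_splits(transactions, limit):
    """Flag same-day, same-vendor groups whose combined total exceeds the
    single purchase limit even though each individual charge stays under it."""
    groups = defaultdict(list)
    for holder, vendor, date, amount in transactions:
        groups[(holder, vendor, date)].append(amount)
    return [
        (key, round(sum(amounts), 2))
        for key, amounts in groups.items()
        if len(amounts) > 1 and sum(amounts) > limit and all(a <= limit for a in amounts)
    ]

print(flag_possible_splits(transactions, SINGLE_PURCHASE_LIMIT))
```

On the hypothetical data above, this flags the two $2,092.45 charges that together total $4,184.90, mirroring the split purchase described in the testimony.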
We identified questionable purchases totaling $286,894 where Education employees paid for new office furniture and construction costs to renovate office space that they were planning to vacate. Only a small amount of furniture, including chairs for employees with special needs, was moved to the new building when department employees relocated. In addition, we identified as questionable more than $218,000 of purchases for which Education provided us with no support or inadequate support to assess the validity. For $152,000, Education could not provide any support, nor did the department know specifically what was purchased, why it was purchased, or whether these purchases were appropriate. For the remaining $66,000, Education was able to provide only limited supporting documentation. As a result, we were unable to assess the validity of these payments, and we consider these purchases to be potentially improper. After our July 2001 testimony, we issued an interim report that described the poor internal controls over purchase cards and made recommendations that the department reiterate to all employees established policies regarding the appropriate use of government purchase cards; strengthen the process of reviewing and approving purchase card transactions, focusing on identifying split purchases and other inappropriate transactions; and expand the use of MCCs to block transactions with certain vendors. Recently, Education has made some changes in the way it administers its purchase card program in an effort to address these three recommendations.
For example, in December 2001, the department issued new policies and procedures that, among other things, (1) establish detailed responsibilities for the cardholder and the approving official, (2) prohibit personal use of the card and split purchases to circumvent the cardholder’s single purchase limits, (3) require approving officials to review the appropriateness of individual purchases, (4) establish mandatory training prior to receiving the card and refresher training every 2 years, and (5) establish a quarterly quality review of a sample of purchase card transactions to ensure compliance with key aspects of the department’s policy. If appropriately implemented, these new policies and procedures are a good step toward reducing Education’s vulnerability to future improper purchases. Further, in July 2001, the department implemented a new process to approve purchase card purchases. Instead of the approving official signing a monthly statement indicating that all transactions are proper, the approval is now done electronically for each individual transaction. According to Education officials, most approving officials and cardholders received training on this new process. In order to assess the effectiveness of this new approval process, we reviewed a statistical sample of the monthly statements of cardholders for July, August, and September 2001. Purchases during these months totaled $1,881,220. While we found evidence in the department’s system that all of the 87 statistically sampled monthly statements had been reviewed by the cardholder’s approving official, 20 of the statements had inadequate or no support for items purchased, totaling $23,151. Based on our work, we estimate the most likely amount of unsupported or inadequately supported purchases during these 3 months is $65,817. 
The effectiveness of the department’s new approval process has been undermined because approving officials are not ensuring that adequate supporting documentation exists for all purchases. In addition, these procedures do not address the problem of an approving official who does not have personal knowledge of the cardholder’s daily activities and therefore is not in a position to know what types of purchases are appropriate. In response to our recommendation regarding the use of MCCs to block transactions from certain vendors, in November 2001, the department implemented blocks on purchases from a wide variety of merchants that provide goods and services totally unrelated to the department’s mission, including veterinary services, boat and snowmobile dealers, and cruise lines. In total, Education blocked more than 300 MCCs. By blocking these codes, Education has made use of a key preventive control to help reduce its exposure to future improper purchases. As we told you in our July 2001 testimony, Education took action earlier in 2001 to improve internal controls related to the use of government purchase cards by lowering the maximum monthly spending limit to $30,000, lowering other cardholders’ single purchase and total monthly purchase limits, and revoking some purchase cards. This action was in response to a letter from this subcommittee dated April 19, 2001, which highlighted our April 2001 testimony, in which we stated that some individual cardholders had monthly purchase limits as high as $300,000. These and the other steps I just discussed have helped reduce Education’s exposure to improper purchase card activities. However, more needs to be done to improve the approval function, which is key to adequate control of these activities.

Education lacked adequate internal controls over computers acquired with purchase cards and third party drafts, which contributed to the loss of 179 pieces of computer equipment with an aggregate purchase cost of about $211,700.
From May 1998 through September 2000, Education employees used purchase cards and third party drafts to purchase more than $2.9 million of personal computers and other computer-related equipment. Such purchases were actually prohibited by Education’s purchase card policy in effect at the time. The weak controls we identified over computers acquired with purchase cards and third party drafts included inadequate physical controls (according to Education’s OIG, the department had not taken a comprehensive physical inventory for at least 2 years prior to October 2000) and lack of segregation of duties, which is one of the most fundamental internal controls. In the office where most of the missing equipment was purchased, two individuals had interchangeable responsibility for receiving more than $120,000 of computer equipment purchased by a single cardholder from one particular vendor. In addition, these two individuals also had responsibility for bar coding the equipment, securing the equipment in a temporary storage area, and delivering the computers to the users. Furthermore, one of these two individuals was responsible for providing information on computer purchases to the person who entered the data into the department’s asset management system. According to the cardholder who purchased the equipment, these individuals did not routinely compare the purchase request with the receiving documents from the shipping company to ensure that all items purchased were received. In addition, our review of records obtained from the computer vendor from which Education made the largest number of purchase card and third party draft purchases showed that less than half of the $614,725 worth of computers had been properly recorded in the department’s property records, thus compounding the lack of accountability over this equipment. Combined, these weaknesses created an environment in which computer equipment could be easily lost or stolen without detection.
In order to identify computers that were purchased with purchase cards and third party drafts that were not included in the department’s asset management system, we obtained the serial numbers of all pieces of computer equipment purchased from the largest computer vendor the department used. We compared these serial numbers to those in the department’s asset management system and found that 384 pieces of equipment, including desktop computers, scanners, and printers totaling $399,900, appeared to be missing. In September 2001, we conducted an unannounced inventory to determine whether these computers were actually missing or were inadvertently omitted from the property records. We located 143 pieces of equipment that were not on the property records, valued at about $138,400, and determined that 241 pieces, valued at about $261,500, were missing at that time. After we completed our work in this area, we again visited the office where most of the computer equipment was missing because Education officials told us they had located some of the missing inventory. Officials in this office told us that they hired a contractor to keep track of their computers when the office moved to its new space. According to the officials, as part of its work, the contractor recorded the serial numbers of all computers moved and identified 86 of the 241 pieces of computer equipment that we were unable to locate during our unannounced inventory in September 2001. However, when Education staff and officials tried to locate this equipment, they were only able to find 73 of the 86 pieces of equipment. When we visited, we located only 62 of the 73 pieces of equipment. Education officials have been unable to locate the remaining 179 pieces of missing computer equipment with an acquisition value of about $211,700. They surmised that some of these items may have been surplused; however, there is no paperwork to determine whether this assertion is valid. 
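The reconciliation described above amounts to a set difference between the vendor's sales records and the department's property records. A minimal sketch, using made-up serial numbers rather than actual department data:

```python
# Serial numbers from the vendor's sales records (hypothetical values)
vendor_serials = {"SN1001", "SN1002", "SN1003", "SN1004"}

# Serial numbers recorded in the department's asset management system
recorded_serials = {"SN1001", "SN1003"}

# Equipment the vendor shipped that never made it into the property records
unrecorded = sorted(vendor_serials - recorded_serials)
print(unrecorded)  # ['SN1002', 'SN1004']
```

A physical inventory would then be needed, as it was here, to distinguish equipment that is genuinely missing from equipment that is on hand but simply omitted from the records.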
According to Education officials, new policies have been implemented that do not allow individual offices to purchase computer equipment without the consent of the Office of the Chief Information Officer (OCIO). However, during our previously mentioned review of a statistical sample of purchase card transactions made from July 2001 through September 2001, we found three transactions totaling $2,231 for the purchase of computer equipment without any supporting documentation from the OCIO. Based on these results, the new policies are not being effectively implemented. This is another indication that the new purchase card approval function is not fully operating as an effective deterrent to improper purchases. In January 2002, we also reviewed the new computer ordering and receiving processes in the office where most of the missing equipment was purchased and found mixed results. These new policies are designed to maintain control over the procurement of computers and related equipment and include purchasing computers from preferred vendors that apply the department’s inventory bar code label and record the serial number of each computer on a computer disk that is sent directly to the Education official in charge of the property records; loading the computer disk containing the bar code, serial number, and description of the computer into the property records; and having an employee verify that the computers received from the vendor match the serial numbers and bar codes on the shipping documents and the approved purchase order. However, a continued lack of adequate physical control negates the effectiveness of these new procedures. For example, the doors to the two rooms used to store computer equipment waiting to be installed were both unlocked and unattended. The receptionist at the mail counter next to the first storage room we visited told us that he had the door open to regulate the room temperature. 
The Education official responsible for this process stated that he did not know that mailroom personnel had access to this room. Furthermore, he stated that he does not have a key to either storage room. Also, during our second search for this equipment, we visited four rooms where some of the computers were stored and found them all unsecured.
The Department of Education has a history of financial management problems, including serious internal control weaknesses, that have affected the Department's ability to provide reliable financial information on its operations. GAO found that significant internal control weaknesses in payment processes and poor physical control over its computer assets led to fraud, improper payments, and lost assets. GAO also identified instances of grant and loan fraud and pervasive control breakdowns and improper payments in other areas, particularly involving purchasing cards.
Initially, U.S. deployment plans in support of the NATO peacekeeping effort (known as Operation Joint Endeavor) called for a heavy reliance on road and rail for transporting troops and equipment into Bosnia. These early plans assumed that only minimal airlift support would be needed and that it would be provided by C-130s based in Europe. However, when the time available to accomplish the logistics of moving troops and equipment into Bosnia diminished and when various problems, including weather and rail strikes, limited the use of ground transportation, the U.S. deployment shifted to heavy reliance on cargo aircraft. The C-130s in the theater were supplemented by C-141s, C-5s, and C-17s from Air Mobility Command to meet the increasing need for airlift within the European theater. The range of airlift requirements for the Bosnia deployment was confined primarily to intratheater support, with no airdrop or medical evacuation requirements, and only limited support provided from outside the European theater. The C-17 aircraft, which is being produced for the Air Force by the McDonnell Douglas Corporation, is designed to airlift substantial payloads over long ranges without refueling. The C-17 is planned to replace the C-141 transport aircraft in the current fleet and to complement the larger but less maneuverable C-5 aircraft. In providing airlift support, the C-17 is intended to deliver cargo and troops directly to forward airfields; fly into small, austere airfields; land on short runways; transport outsize cargo such as tanks; and airdrop troops and equipment. In August 1995, the Air Force completed a 30-day reliability, maintainability, and availability (RM&A) evaluation of the aircraft’s compliance with contractual RM&A specifications.
During this evaluation, the C-17’s RM&A performance was assessed during both peacetime and wartime missions, including aerial refueling, equipment and personnel airdrops, formation flying, low-level operations, and operations into small austere airfields. Wartime missions ranged from 12.5 to 26 hours, while peacetime missions ranged from 2 to 20 hours. In July 1996 we reported that unresolved questions regarding certain important C-17 capabilities still remained after the RM&A evaluation. The Office of the Director, Operational Test and Evaluation, reported in November 1995 that based on its assessment of the C-17’s operational effectiveness and suitability, the C-17 is suitable for the conduct of air-land missions and effective in the airdrop of personnel. However, the report also stated that additional testing was necessary to fully evaluate the aircraft’s capability for the mass airdrop of personnel, and that the C-17 was not effective or suitable for routine aeromedical evacuation missions until certain deficiencies were corrected. Airlift aircraft, particularly the C-17, performed a major transportation support role during the Operation Joint Endeavor deployment, which occurred during the December 1995 through February 1996 time frame. According to Air Mobility Command (AMC) data, the majority of deployment airlift missions flown were intratheater support, as were the majority of C-17 deployment missions. (See fig. 1.) Intratheater support involved moving troops and equipment over short distances within the European theater, such as from Germany to the initial staging base in Hungary, or more directly into the American sector in Bosnia. There were few intertheater deployment requirements, which would have involved moving troops and equipment from the continental United States into the European theater. Of the 3,827 airlift missions flown during the deployment time frame, 2,924 or 76.4 percent were intratheater missions.
Of the 1,000 total C-17 deployment missions, 917 or 91.7 percent were intratheater missions. Airlift aircraft moved about 45,369 tons of cargo and about 18,539 passengers during the deployment. Table 1 shows the amount of cargo and passengers carried by each type of airlift aircraft. As this table shows, the C-17 flew about 26 percent of the total deployment airlift missions and carried about 44 percent of total cargo and 30 percent of total passengers. In total, the C-17 carried an average cargo load of 39,784 pounds per mission compared to the specified average cargo weight of 48,649 pounds per mission over the lifetime of the aircraft. This is based on mission profiles in C-17 contract specifications. Overall, all types of airlift aircraft carried average cargo weights per mission that were less than their maximum payload capacities. Table 2 provides a comparison of average cargo loads per aircraft type, carried during the deployment, versus maximum aircraft payload capacity. As this table shows, none of the airlift aircraft carried maximum payload capacities during the deployment period we evaluated. The C-5 carried the largest reported average cargo weight per mission of 53,192 pounds while primarily performing intertheater missions, whereas the C-17 carried an average of 39,784 pounds while primarily performing intratheater missions. AMC representatives said that cargo weight data for C-130 aircraft was particularly unreliable since C-130 operators do not require tracking of total cargo weight on a per mission basis. In responding to a draft of this report, DOD noted that cargo weight plays a critical role in airlifter performance only in relatively rare missions when armored vehicles and/or ammunition are being carried. 
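As a quick arithmetic check, the C-17's deployment figures cited above can be recomputed directly from the report's numbers:

```python
# Figures taken directly from the report
c17_missions, total_missions = 1_000, 3_827
avg_load_lbs, spec_avg_lbs = 39_784, 48_649

mission_share = 100 * c17_missions / total_missions
load_vs_spec = 100 * avg_load_lbs / spec_avg_lbs

print(f"C-17 share of deployment missions: {mission_share:.1f}%")  # about 26 percent, as stated
print(f"Deployment average load vs. contract-spec average: {load_vs_spec:.1f}%")
```

The second ratio shows the C-17 carrying somewhat less per mission than the lifetime average assumed in its contract specifications, consistent with the discussion above.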
Further, DOD stated that less than maximum cargo weight does not equate to inefficient use of aircraft since maximum cargo volume, or the maximum volume of cargo that will fit into an airlifter, is usually reached before maximum cargo weight is reached. DOD also stated that since most airlift aircraft cargo loads reach maximum volume first, it would be unusual for any airplane to carry more than 50 percent of its maximum payload weight. Finally, DOD stated that AMC tracks cargo weight since center-of-gravity information is a safety of flight issue; however, since cargo volume is not a safety of flight issue, AMC does not track cargo volume carried on any airframe in the fleet. The prime contractor for the C-17 used a variety of performance parameters to assess C-17 performance during the deployment. DOD used the same parameters to assess C-17 performance during RM&A evaluation and initial operational test and evaluation. According to the contractor, the C-17 achieved better than required performance levels for five key maintenance and repair parameters during the December 1995 through February 1996 time frame. In addition, the contractor reported the C-17 achieved a mission capable rate of 86.2 percent versus a requirement of 81.2 percent during the same time period. The C-17’s overall departure reliability and logistics departure reliability rates during the deployment also improved over those achieved during recent RM&A evaluations, according to AMC representatives. Overall departure reliability is the percentage of aircraft leaving no more than 20 minutes prior to and no later than 14 minutes after the scheduled departure time. Logistics departure reliability rate is the percentage of aircraft achieving on time departure not counting aircraft departure delays caused by weather. 
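The two reliability definitions above imply a simple on-time test per departure. A sketch with hypothetical departure times (the 20-minutes-early/14-minutes-late window is taken from the report; the sample deltas are invented):

```python
def is_on_time(scheduled_min, actual_min):
    """On-time departure per the report's definition: no more than
    20 minutes before and no more than 14 minutes after schedule."""
    delta = actual_min - scheduled_min
    return -20 <= delta <= 14

# Hypothetical departure deltas (minutes relative to schedule) for six sorties
deltas = [-25, -10, 0, 5, 14, 30]
rate = 100 * sum(is_on_time(0, d) for d in deltas) / len(deltas)
print(f"overall departure reliability: {rate:.1f}%")  # 4 of 6 on time
```

Logistics departure reliability would apply the same window after first excluding departures delayed by weather from the count.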
According to AMC, between December 19, 1995, and January 17, 1996, the C-17 achieved a logistics departure reliability rate of 97.8 percent and an overall departure reliability rate of 83.9 percent. The C-17 also performed well when moving outsize cargo, according to AMC representatives. Outsize cargo is defined as a single item that exceeds 1,000 inches long by 117 inches wide by 105 inches high in any one dimension and requires the use of a C-5 or C-17 aircraft (an M-1 tank, for example). AMC representatives listed the following examples of the C-17 moving outsize cargo during the deployment: one C-17 landed at Tuzla with a self-propelled 155-mm howitzer, a support vehicle, and trailer; seven C-17s moved 15 Bradley fighting vehicles plus support in 1 day during the deployment; and three C-17s moved 25 pontoon bridge sections to Hungary. The Bosnia deployment airlift requirements did not include the need for any airlift aircraft to perform or demonstrate several of the airlift roles and missions which the Army considers important operational capabilities for the C-17 in providing support for certain Army missions. The C-17 had trouble performing, or did not perform, several of these tasks during operational testing and the RM&A evaluation. For example, Army reports on the C-17 RM&A evaluation and initial operational testing results have raised questions regarding the C-17’s ability to operate on short, wet runways; perform personnel airdrop missions; and provide aeromedical evacuation. The Bosnia deployment did not provide the opportunity for any airlift aircraft to demonstrate these capabilities. During initial operational testing, concerns surfaced regarding the C-17’s ability to operate on short, wet runways. The Army defined a short austere airfield as a 3,000-foot long runway, either paved or unpaved, for the purpose of operational testing.
Simulations have shown that, during a landing on wet unpaved surfaces, the C-17 would slide off the end of a 3,000-foot long runway. Instead, simulations suggest that C-17 landings with a full payload on a wet (paved or unpaved) surface would require a 5,000-foot runway. Since none of the runways used by any airlift aircraft during the deployment were less than 7,874 feet, the Bosnia deployment did not provide the opportunity to assess any airlifter’s ability to operate on short, wet runways. The C-17 also did not have the opportunity to demonstrate its ability to support personnel airdrops since no airlift aircraft had to fly such missions during the Operation Joint Endeavor deployment. The Army considers personnel formation airdrops a logical extension of its personnel airdrop requirement and, primarily due to safety concerns, it did not certify personnel formation airdrops for the C-17 during operational testing. According to DOD, the Army and the Air Force are jointly working to address C-17 formation personnel airdrop issues. Airlift aircraft were also not required to perform aeromedical evacuations during the Bosnia deployment. According to the Army’s report on C-17 initial operational test results, the C-17 demonstrated the capability to move 36 patients versus an Army requirement to move 48 patients in an aeromedical evacuation. Further, the Army notes that initial operational testing found a number of other deficiencies in the C-17 aircraft that make it unsuitable for use in performing routine aeromedical evacuations. But, according to AMC, all current C-17s will be capable of fulfilling designated aeromedical airlift roles by June 1997. According to DOD, in August 1996, based on the AMC Commander’s recommendation to amend the published C-17 aeromedical evacuation requirement, the requirement was changed from 48 to 36 patient litters.
DOD notes that while the AMC Commander cannot change the requirement, the Commander can make declarations of capability, and the new capability for 36 litters will be reflected in an updated C-17 Operational Requirements Document. DOD believes the C-17’s performance in the Bosnia deployment validates the November 1995 Defense Acquisition Executive’s decision to procure an additional 80 C-17s, for a total of 120 aircraft. The scope of work for this report did not include a validation/invalidation of that decision. However, in our report, Military Airlift: Options Exist For Meeting Requirements While Acquiring Fewer C-17s (GAO/NSIAD-97-38, Feb. 1997), we suggested that Congress consider funding only 100 C-17s, which would save over $7 billion in life-cycle costs compared with the 120-aircraft program. We reported that DOD can meet mission requirements with 100 C-17s by employing various low-cost options and by extending the use of alternatives for accomplishing the extended range brigade airdrop. DOD also stated that it was inappropriate to include any discussion regarding C-17 capabilities to perform short, wet runway operations, personnel airdrops, and aeromedical evacuations in our report, since during the deployment there were no missions requiring those capabilities. We disagree. Our scope of work included an examination of the missions that the C-17 performed during the deployment and a comparison of how it was used versus its expected capabilities. A discussion of whether the C-17 had the opportunity to perform the stated capabilities during the deployment is appropriate to the discussion, since these are C-17 operational capabilities that have yet to be fully demonstrated. DOD also provided suggestions for additional comments to be included in the report. To the extent practical, those comments are reflected in the body of our report. DOD’s written comments are included in appendix I.
To determine (1) how the C-17 was used during the deployment and (2) whether the deployment required airlift aircraft to perform any of the unique operational capabilities the C-17 is expected to perform, we interviewed officials and obtained, reviewed, and analyzed reports and electronic airlift transportation performance information. This information was provided by the U.S. Transportation Command and AMC. We also interviewed deployment airlift customers and analyzed reports and data available from the U.S. European Command; the U.S. Army, Europe; and the U.S. Air Forces, Europe; as well as discussed and documented their observations concerning the performance of the C-17 from a customer perspective. To determine the operational capabilities required and actually performed during the deployment, we interviewed C-17 pilots, maintainers, and loadmasters at the 437th Air Wing, Charleston Air Force Base, South Carolina; and conducted interviews and analyzed reports on C-17 deployment experience from representatives of the 621st Air Mobility Operations Group at Travis Air Force Base, California, who comprised and operated the Tanker Airlift Control Elements at Zagreb, Croatia, and Taszar, Hungary. The scope of our work did not include an assessment of the cost-effectiveness of using one airlift aircraft to provide intratheater airlift support versus another. However, we are currently assessing DOD’s intratheater airlift requirements and will address the cost-effectiveness issue in that report. To assess reported airlift activity by aircraft type during the deployment, we analyzed data contained in AMC’s Military Airlift Integrated Reporting System (MAIRS) and the AMC History System (AHS). AHS is a database of airlift sorties and is intended to replace MAIRS; however, AMC was using both systems at the time of the Joint Endeavor deployment. AMC representatives expressed concern about data accuracy and reliability of both databases. 
At the time of our review, AMC officials could not provide us with a statistical error rate or confidence level with which they, or we, could rely on data derived from these systems. However, AMC used this data to support some of its C-17 performance claims. Our assessment of those databases supports various AMC representatives’ concerns regarding data reliability and accuracy. Our review of data within these systems identified records containing questionable information. For example, 57 records indicated that aircraft took off but never landed, 11 records indicated sorties had negative flying hour lengths, and 438 records indicated that airlift aircraft flew missions into Bosnia and/or Hungary but carried no cargo or passengers. We presented our observations in a fact sheet to AMC officials who agreed that our analysis highlights some problems it needs to address. Further, they indicated these problems could be the result of data input errors, lack of proper review of data input in the theater, or a lack of system validation. AMC officials also said that some of our concerns may have resulted from problems with our analysis; however, AMC will need to perform a more detailed review of the data to make that determination. AMC officials are aware of inaccurate data and reliability problems associated with these systems and have had an outside contractor working to resolve them since March 1996. AMC said that the contractor underestimated the effort required and had revised its completion date to the end of October 1996. However, the contractor had not completed work by the time we prepared this report in December 1996. 
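The kinds of record-level anomalies described above (takeoffs with no recorded landing, negative flying hours, missions into theater carrying no cargo or passengers) can be expressed as simple filters over sortie records. The records and field names below are illustrative only, not the actual MAIRS or AHS schema:

```python
# Hypothetical sortie records; field names are illustrative, not the real schema
records = [
    {"id": 1, "took_off": True, "landed": True,  "flying_hours": 2.5,  "cargo_lbs": 8000, "passengers": 0},
    {"id": 2, "took_off": True, "landed": False, "flying_hours": 1.0,  "cargo_lbs": 500,  "passengers": 4},
    {"id": 3, "took_off": True, "landed": True,  "flying_hours": -3.0, "cargo_lbs": 200,  "passengers": 0},
    {"id": 4, "took_off": True, "landed": True,  "flying_hours": 4.0,  "cargo_lbs": 0,    "passengers": 0},
]

# Each check mirrors an anomaly type noted in the review
checks = {
    "took off, never landed": lambda r: r["took_off"] and not r["landed"],
    "negative flying hours":  lambda r: r["flying_hours"] < 0,
    "no cargo or passengers": lambda r: r["cargo_lbs"] == 0 and r["passengers"] == 0,
}

for name, check in checks.items():
    flagged_ids = [r["id"] for r in records if check(r)]
    print(f"{name}: {flagged_ids}")
```

Screens of this sort flag records for follow-up; as the report notes, a flagged record may reflect a data input error, a missing review step in the theater, or a lack of system validation rather than an actual anomalous flight.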
Although the accuracy of AMC’s data covering the activities of its airlift aircraft is questionable, we attempted to obtain an accurate picture of how the C-17 was used and how well it performed by contacting and interviewing Air Force operational, maintenance, and loadmaster personnel who were directly involved with operating the C-17 during the Joint Endeavor deployment. We also interviewed AMC’s customers in the European theater, including high-level Army and Air Force officials. In addition to working with AMC to resolve data issues, we have drafted a letter of inquiry for the Secretary of Defense regarding concerns we have about the potential effect of unreliable and/or inaccurate airlift performance and operational data. We are confident that, in general, we have a fairly accurate picture of how the C-17 was used and how it performed during the deployment, although AMC has not taken a formal position on the reliability and/or accuracy of the specific data in its databases. Since AMC had not performed a reliability assessment of these systems, and because it is not able to provide a statistical error rate or confidence level for data derived from these systems, all of this data must be qualified. We conducted our review from May to December 1996 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOD and incorporated their comments where appropriate. The department’s written comments are included in appendix I. We are providing copies of this report to the appropriate House and Senate Committees and the Secretaries of Defense, the Air Force, and the Army. We will also provide copies to other interested parties upon request. If you or your staff have any questions concerning this report, please call me on (202) 512-5140. The major contributors to this report were William C. Meredith, John G. Wiethop, and David J. Henry.
GAO reviewed how the C-17 aircraft was used during the North Atlantic Treaty Organization (NATO) peacekeeping force deployment to Bosnia, focusing on: (1) how well it performed during the deployment; and (2) whether deployment transportation requirements included the need for airlift aircraft to perform any of the C-17's expected operational capabilities. GAO found that: (1) during Operation Joint Endeavor, the C-17 accomplished the airlift tasks required of it, as did other airlifters such as the C-141, the C-5, and the C-130; (2) the C-17 was used to satisfy the Army's immediate need for a high-capacity, short distance air transport to move troops, equipment, and outsize cargo from central Europe into the Bosnia area of operations; (3) the C-17 performed about 26 percent of the deployment airlift missions and carried about 44 percent of the cargo moved during the deployment; (4) the C-17 also performed a limited number of strategic airlift missions in which it delivered cargo from the continental United States to final destinations in Germany, Hungary, and Bosnia; (5) according to contractor reports, the C-17 achieved a mission capable rate of 86.2 percent during the December 1995 through February 1996 time frame compared to a required rate of 81.2 percent; (6) transportation needs of the Bosnia deployment did not offer the opportunity for any airlift aircraft to perform or demonstrate several operational roles and missions; and (7) consequently, the C-17 was not required to perform many tasks which it had trouble doing, or did not do, during operational testing.
Several key sources provide information on the number and characteristics of working children. The Current Population Survey (CPS), compiled monthly by BLS, is the primary source of information on the United States labor force. CPS provides nationally representative information on the number and characteristics of working children age 15 and older, including data on where children work, the types of jobs they hold, and how many hours a week they work. In addition, it provides demographic information on children such as age, race, ethnicity, and family income. Because the data have been collected for over 50 years, they can be used to show how the number and characteristics of working children have changed over time. Labor also compiles two additional sources of nationally representative data on working children. The first is the National Longitudinal Survey of Youth (NLSY). The most recent NLSY began in 1997 and is referred to as “NLSY97”; it contains data on one group of approximately 9,000 children born from 1980 to 1984. NLSY provides detailed information on the work experiences of this group of children over time and captures data not collected in CPS, such as information on children younger than 15 and in-depth information on children’s work habits, education, and personal lives. NLSY, however, cannot be used to show how the number and characteristics of all working children have changed over time because it only includes information on one group of children born from 1980 to 1984. The second is the National Agricultural Workers Survey, in which data are collected several times each year from a sample of crop agricultural workers. This survey provides data on the number and characteristics of children who work in migrant agriculture, their educational attainment, and their mobility. The information can be used to supplement data on children in CPS and NLSY but it is limited.
Labor added questions to the survey in fiscal year 2000 to obtain additional data on children who work in agriculture but has not been able to obtain data on a sufficiently large number of children working in crop agriculture to provide information that is statistically reliable. The primary sources of data on children who are injured or killed as a result of work-related injuries are BLS and NIOSH. BLS reports all work-related fatalities, including those for children, in its Census of Fatal Occupational Injuries published each year. BLS identifies these fatalities through death certificates and reports from state workers’ compensation agencies, medical examiners, the Occupational Safety and Health Administration, and the news media. BLS collects data on the number of nonfatal work-related injuries and illnesses in its Survey of Occupational Injuries and Illnesses from a sample of the injury records that employers in private industry are required to maintain. NIOSH collects data on work-related injuries in its National Electronic Injury Surveillance System from a sample of emergency room records. The employment of working children is generally covered by FLSA and its implementing regulations, which limit the types of jobs, number of hours, and times of day that children younger than 16 years of age can work. Generally, most children younger than age 14 are prohibited from working in nonagricultural employment other than casual freelance jobs such as babysitting and delivering newspapers. Children who are 14 and 15 years old may work in many jobs in retail stores, restaurants, and gas stations. They may not, however, work in any job considered hazardous, including jobs in manufacturing, mining, construction, transportation, warehousing, communications, and public utilities. The provisions also prohibit 14- and 15-year-olds from working during school hours and limit the number of hours and times of day they can work. (See table 1.)
FLSA also authorizes the Secretary of Labor to designate certain types of jobs and equipment as too hazardous for children under the age of 18. Once children reach age 16, they are prohibited only from working in jobs or with equipment covered by these Hazardous Occupations Orders; they are not limited as to the number of hours or times of day they can work. These hazardous jobs and types of equipment are specified in 17 Hazardous Occupations Orders originally issued between 1939 and 1963. (See table 2 for a list of the occupations determined to be hazardous by the Secretary of Labor.) In 2002, NIOSH completed a review of the Hazardous Occupations Orders for Labor. Its report, issued in July 2002, made several recommendations for changes to the orders, including establishing new hazardous orders prohibiting all children younger than age 18 from working in the construction industry and from working at a height of 6 feet or higher on ladders, scaffolds, trees, and other structures. Labor is in the process of reviewing the report and deciding what actions it will take in response to the recommendations. For jobs in agriculture, the child labor provisions are much less restrictive. Children of any age may work an unlimited number of hours (outside of school hours) in nonhazardous jobs, either on a farm owned by their parents or on a noncommercial farm with the written consent of their parents. Children aged 14 and 15 are allowed to work an unlimited number of hours in nonhazardous jobs outside of school hours without parental consent and, once they reach age 16, they are allowed to work in agricultural jobs deemed hazardous. The child labor provisions of FLSA do not cover all children. Children who work for employers whose annual gross volume of sales is less than $500,000 and whose work cannot be linked to interstate commerce are not covered under FLSA, although they may be covered under state child labor laws.
In addition, children who are self-employed are not subject to the child labor provisions of FLSA. Furthermore, although children who work for their parents are prohibited from working in occupations and operating equipment listed in the Hazardous Occupations Orders, they are not subject to other restrictions of FLSA. When children reach age 18, they are no longer covered under the child labor provisions of FLSA. In 2001, several legislative proposals were introduced in the House of Representatives and the Senate that would strengthen the child labor provisions of FLSA. The proposals include the Children’s Act for Responsible Employment of 2001, which would, among other things, increase the maximum penalties for child labor violations and prohibit children aged 16 and 17 from working in hazardous occupations in agriculture. The Young American Workers’ Bill of Rights would amend the FLSA to require employers to obtain work permits for all children age 18 and under who are still in school; require Labor and the Census Bureau to compile data on child labor from the states, including data on injuries and illnesses; and impose additional restrictions on child labor, such as prohibiting children from making door-to-door sales for profit. (See app. II for a list of the legislative proposals introduced in the 107th Congress that would strengthen the child labor provisions of FLSA.) Labor’s Wage and Hour Division (WHD) is responsible for enforcing the child labor provisions of FLSA. WHD’s national office develops the goals and performance measures for Labor’s child labor compliance efforts and reports on the results of its efforts in annual performance plans. The national office is also responsible for providing guidance and training to WHD’s regional and district offices and for assessing the results of their child labor compliance efforts.
Much of the responsibility for planning and executing these efforts, however, is left to the discretion of WHD’s five regional offices and the 50 district offices that report directly to the regions. The Child Labor Team Leader in the headquarters office is responsible for coordinating WHD’s child labor compliance efforts, including disseminating information and guidance to the regional and district offices, maintaining the section of the WHD Web site with information on child labor, maintaining the Field Operations Handbook for investigators, and overseeing special projects. WHD’s child labor compliance efforts comprise several strategies: enforcement, partnerships, education and outreach, and public awareness. Its public awareness strategy is designed to inform the general public about the child labor provisions of FLSA through activities such as press releases. WHD’s education and outreach activities—which WHD also refers to as “compliance assistance”—are specifically targeted to groups that can have an impact on youth employment, such as teachers, parents, and employers. Education and outreach activities include publishing and distributing materials such as bookmarks with information on the rules governing the employment of children under age 18, and maintaining a Web site with information on the child labor provisions of FLSA and state child labor laws. WHD’s enforcement actions include on-site investigations of employers and other activities designed to bring employers into compliance with the law. When WHD finds violations of the child labor provisions of FLSA during its investigations, it may assess penalties. The penalties for child labor violations depend on the severity of the violations, the number of times the violations occurred, and aggravating factors such as falsification of records and whether the employer has a record of previous child labor violations. 
The penalties range from $275 for a record-keeping violation to $11,000 for a violation involving a serious injury or death. Information on WHD’s investigations, violations, and penalties assessed is tracked in its investigations database, the Wage and Hour Investigative Support and Reporting Database (WHISARD). Over the past decade, according to data tabulated from CPS, the number and characteristics of working children have not changed. In 2001, working children were as sizable a part of the United States labor force as they were in 1990. Most children worked in a variety of occupations concentrated in a few select industries, primarily retail trade and services. The percentage of children who worked illegally, either because they worked in occupations prohibited under the law or more hours than allowed, also did not change. In 2001, as in 1990, minority children and children from families with annual incomes below $25,000 were more likely than other children to work illegally. However, because of limitations of the data available, we could not determine the full extent and nature of children’s employment in the United States, such as the number and characteristics of children younger than age 15 who work and the percentage of children younger than 15 who are employed illegally. In 2001, as many as 3.7 million children between the ages of 15 and 17 worked, about 30 percent of all children in this age range. (See fig. 1.) Between 1990 and 2001, children as a proportion of the total United States labor force, as well as the percentage of children who worked, remained relatively stable. The percentage of children who worked in summer months fluctuated over the decade from a high of 36 percent in 1990 to a low of 30 percent in 2001. Although most children work for an employer, in 2001, about 52,000 (2 percent) were self-employed and about 10,000 (less than 1 percent) performed unpaid labor in a family business.
Throughout the decade, children primarily worked in retail trade, in businesses such as department stores, grocery stores, and restaurants. In 2001, as in 1990, about 60 percent of all working children were employed in this industry, mostly in eating and drinking places. (See fig. 2.) Children generally held jobs in sales occupations, such as running a cash register or clerking at a store, or in services, such as waiting tables or cleaning hotel rooms. Various data sets provide information about children who work in agriculture. According to CPS data tabulated by GAO, about 4 percent of all children who worked were employed in agriculture in 2001. Another data set, the National Agricultural Workers Survey, provides more detailed information on the characteristics of children who work in crop agriculture. According to the data in this survey, the characteristics of children in crop agriculture changed somewhat over the decade. Although most children working in crop agriculture throughout the decade were boys aged 16 and 17, the percentage of boys increased, as did the percentage of children who were foreign-born and newcomers to this country. Although most children working in agriculture are 16 years of age and older, the percentage of younger children (14- and 15-year-olds) who work in agriculture has increased, from 17 percent to 30 percent. In addition, both the percentage of children who entered the country illegally to work in agriculture and the percentage of those who were not accompanied by their parents or other family members rose over the decade. Children’s work has a decidedly seasonal nature. More children work in summer months when school is out of session than in school months. In 2001, 30 percent of all children aged 15 to 17 worked in summer months, compared to 23 percent who worked in school months. Not only do more children work in the summer, they also work more hours. 
In 2001, children worked an average of 21 hours a week in summer months, compared to 16 hours a week in school months. This is similar to the number of hours they worked in both summer and school months in 1990. Children also overwhelmingly worked part-time; 87 percent of all working children worked part-time in 2001. Although children worked about the same number of hours in 2001 as in 1990, their average hourly earnings increased by 10 percent. In 1990, children paid an hourly salary earned an average of $5.70 an hour; in 2001, the average hourly salary was $6.36. From 1990 to 2001, children’s average hourly earnings exceeded the minimum wage and the amount by which their earnings exceeded the minimum wage increased. By 2001, their average hourly earnings exceeded the minimum wage of $5.15 an hour by $1.21. The older a child is, the more likely he or she is to work and the more hours he or she is likely to work. These differences are even greater in the summer when more children work. For example, in summer months in 2001, 43 percent of all 17-year-olds and 33 percent of all 16-year-olds worked, while only 15 percent of all 15-year-olds worked. Moreover, in 2001, 17-year-olds worked an average of 23 hours a week in summer months, 5 more hours a week than 15-year-olds and 2 more hours a week than 16-year-olds. Children of different ages tend to work in different industries. Older children are more likely to work in the retail trade industry than younger children. For example, in summer months in 2001, nearly 60 percent of employed 17-year-olds worked in retail trade, whereas 38 percent of employed 15-year-olds worked in this industry. On the other hand, 15-year-olds are more likely than older children to work in agriculture, possibly because of the looser age restrictions that provide younger children with opportunities for employment in agriculture that they do not have in other industries. 
In addition to age, family income and race are also related to children’s employment. Children from families with lower incomes are less likely to work than those from higher income families and minority children are less likely to work than white children. About 17 percent of children in families with annual incomes below $25,000 a year worked in 2001, compared to 29 percent of children in families with incomes above $75,000 a year. In 2001, 15 percent of black children and 17 percent of Hispanic children worked, compared to about 30 percent of white children. Despite the fact that children from lower income families are less likely to work, when they do work, they tend to work more hours. Children in families with annual incomes below $25,000 worked an average of 21 hours a week in 2001, 5 more hours a week than children whose families had annual incomes of $75,000 or more. In addition, while all minority children were less likely to work than white children, Hispanic children who were employed worked more hours than other children. In 2001, Hispanic children worked an average of 24 hours a week, at least 5 more hours a week than other children. As in 1990, we estimated that as many as 4 percent of all 15- to 17-year-olds who worked in 2001 worked illegally, either because they worked more hours than allowed under the law or because they worked in prohibited hazardous occupations. Because the child labor provisions of FLSA for 15-year-olds are more restrictive, they are more likely to work illegally than 16- or 17-year-olds. For example, in 2001, over 21 percent of all employed 15-year-olds worked illegally in school months, compared to 1 percent of all 16- and 17-year-olds. Although the overall percentage of children who worked illegally in both summer months and school months remained constant from 1990 to 2001, the percentage of 15-year-olds who worked illegally in school months increased over the period. (See table 3.) 
Most of the children who worked illegally were 15-year-olds who worked more hours than allowed under the child labor provisions of FLSA. Of the 15-year-olds who worked illegally, nearly 80 percent worked an excessive number of hours—on average, 10 hours more than the maximum number of hours allowed. Because of the hours restrictions for 15-year-olds in school months, they were much more likely to work illegally in school months than in summer months. Children who worked illegally not only worked more than the allowed number of hours, they also worked in prohibited industries and occupations. Although most children worked in the retail trade industry during the past decade, a substantial percentage of the illegal employment was found in other industries (see fig. 3). For example, in 2001, although only 3 percent of all children worked in manufacturing, this industry accounted for 14 percent of all illegally employed children. Similarly, although only 3 percent of all working children worked in construction, the construction industry accounted for 16 percent of all illegally employed children. Moreover, although most children worked in sales and services occupations, children who were employed in prohibited occupations were most commonly employed illegally in transportation-related occupations. In 2001, according to our estimates, over 40 percent of all children who were employed illegally because they worked in prohibited occupations worked as truck drivers. (See fig. 4.) We determined that a child’s gender, race, and citizenship status were related to the likelihood that he or she worked in a prohibited occupation, but a child’s annual family income was not related. In 2001, boys were more likely than girls to work in prohibited occupations. In addition, black children were less likely than children of other races to work in prohibited occupations, and children who were not citizens were more likely than those who were citizens to work in prohibited occupations. 
There was no correlation, however, between family income and children who worked in hazardous occupations. In other words, children from families with lower annual incomes were neither more nor less likely to work in prohibited occupations than other children. Although children from families with lower incomes were not more likely to work in prohibited occupations, they were more likely to work more hours than allowed. In school months in 2001, 32 percent of all employed 15-year-olds with family incomes less than $25,000 a year worked more hours than allowed under the law. In contrast, only 9 percent of all employed 15-year-olds with family incomes over $75,000 a year worked more hours than allowed. In addition to income, race is also associated with the likelihood that a child will work more hours than allowed; Hispanic children are more likely than other children to work too many hours. In 2001, 42 percent of Hispanic 15-year-olds worked more hours than allowed during school months, compared to 16 percent of whites and 18 percent of blacks. The region of the country in which children live, as well as the type of area in which they live (metropolitan or nonmetropolitan), is also associated with illegal employment. A higher proportion of employed 15-year-olds who lived in the South and West worked more hours than allowed than did those who lived in the Northeast and Midwest. In school months in 2001, 15-year-olds in the South and West were almost 1.5 times more likely than their counterparts in the Northeast and Midwest to work more hours than allowed. Additionally, 15-year-olds who lived in metropolitan areas were 1.4 times more likely to work more hours than allowed under the law than those who lived in nonmetropolitan areas.
Although data on children who work have been collected in CPS for 50 years, and therefore provide useful information on trends in child labor, limitations of the data affect our ability to accurately describe all children in the United States who work. One limitation is that data on children younger than age 15 are not collected. In 1989, Labor stopped collecting data on 14-year-olds in CPS, although these children are allowed under FLSA and its implementing regulations to work in many jobs. Because CPS does not gather data on 14-year-olds, the estimates presented in this report paint an incomplete picture of the employment patterns for younger children. Another limitation of CPS is the method used to capture information on working children. While some data are collected directly from children who work, most of the information in CPS on working children is provided by an adult member of the household, in most cases a parent. Because the adult answering the questions may not be aware of the full extent of the child’s activities, information about the child’s employment may be underreported or omitted, or the adult may incorrectly identify the industry, occupation, or hours worked. Another BLS survey, NLSY, collects information on children differently. In NLSY, children are asked directly by interviewers to describe their jobs as well as their activities at work. Because children are asked directly about their work and are asked more detailed questions about employment throughout the year, including using a calendar to prompt the child to account for each period of employment or unemployment during the year, the percentage of children who work reported in NLSY is much higher than in CPS. For example, NLSY97 showed that about 24 percent of all 15-year-olds worked in a particular week during the summer of 1996, while CPS data showed that 18 percent of 15-year-olds worked in that same period. In addition, NLSY collects data on younger children who work.
For example, NLSY97 showed that 13 percent of all 14-year-olds worked for an employer in a particular week during the summer of 1996; CPS does not capture data on 14-year-olds who work. (See table 4.) NLSY97 also reported that as many as 44 percent of all 13-year-olds and 34 percent of all 12-year-olds received income from freelance jobs at some point in 1997. The number and characteristics of children who die each year as a result of a work-related injury have changed little over the past decade. From 1992 to 2000, the number of fatalities and the fatality rates for working children remained fairly constant. It is difficult, however, to determine whether the number of work-related injuries to children has changed, because the two primary sources of data on nonfatal injuries to working children—BLS and NIOSH—provide significantly different estimates of the number of children injured over the decade. Data from one of the two sources, however, indicate that the characteristics of children injured have changed little over the decade. The number and characteristics of work-related fatalities and fatality rates for working children remained relatively constant from 1992 to 2000. Each year, according to data collected by BLS on work-related fatal injuries, between 62 and 73 children died from injuries sustained while working, a total of 613 children over the 9-year period. In addition, the fatality rate for children aged 15 to 17—the number of deaths per 100,000 hours worked—was fairly constant, ranging from approximately 0.006 deaths per 100,000 hours worked in 1998 to about 0.010 deaths per 100,000 hours worked in 1992. (See fig. 5.) Demographic data for children who died as a result of a work-related injury show that most of the children killed were boys 16 years of age or older. Although girls were as likely as boys to work from 1992 to 2000, boys were almost eight times more likely to die as a result of a work-related injury than girls.
In addition, although most of the children killed (about 60 percent) were aged 16 and 17, a substantial number (20 percent) were 13 years of age or younger. (See table 5.) Most of the children younger than age 15 who died as the result of a work-related injury were employed in agriculture, and about 50 percent of the children who died as a result of injuries incurred while working in agriculture were 14 years old or younger. Moreover, even though most children who died from a work-related injury were white (75 percent), Hispanic children had a fatality rate that was twice as high as the rate for white children. (See table 6.) Fatality data also show that certain industries and types of businesses pose a greater danger to working children. During the decade, over 40 percent of the children killed as a result of work-related injuries worked in agriculture, primarily crop production. Retail trade and construction accounted for 20 percent and 14 percent of all fatalities, respectively. (See fig. 6.) In addition, although children who worked in family businesses accounted for less than 1 percent of all working children, they accounted for about one-third of all fatalities. Moreover, children who worked for small employers—those with 10 or fewer employees—accounted for nearly two-thirds of the fatalities for which employer size was reported. About 90 percent of the children killed who worked for an agricultural employer for which establishment size was reported worked for employers with 10 or fewer employees. Data on fatality rates, however, show that the number of hours children work in each industry needs to be considered as well as the number of deaths. For example, although many of the children killed worked in retail trade, the fatality rate was 0.003 per 100,000 hours worked, much lower than the rate in construction, 0.050, in which 14 percent of the fatalities occurred. (See table 7.)
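The fatality rates discussed above are expressed as deaths per 100,000 hours worked. As a minimal sketch of that calculation (the deaths and hours figures below are assumed, round illustrative numbers rather than values taken from the BLS data):

```python
def rate_per_100k_hours(deaths: int, hours_worked: float) -> float:
    """Express a fatality count as a rate per 100,000 hours worked."""
    return deaths / hours_worked * 100_000

# Assumed illustrative inputs: 70 deaths against 700 million hours worked
# reproduce the ~0.010 end of the 1992-2000 range cited above.
print(round(rate_per_100k_hours(70, 700_000_000), 3))  # 0.01
```

Normalizing by hours worked rather than by worker counts is what permits comparisons across industries in which children work very different schedules, as with the retail trade and construction rates above.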
For transportation and public utilities, the fatality rate was also relatively high, 0.027, although this industry accounted for only 3 percent of the fatalities. Children die as a result of many different types of work-related accidents. From 1992 to 2000, 44 percent of all children killed as a result of a work-related injury died in transportation-related accidents, including highway collisions and nonhighway incidents, such as a fall from a moving vehicle. Other common events included being caught or compressed by equipment or being struck by a falling or flying object. (See fig. 7.) For example, in 2000, a 16-year-old boy who worked in a supermarket was crushed to death by a cardboard box compactor, and another 16-year-old boy was crushed when the forklift he was operating flipped over and landed on his chest. Overall, the characteristics of children who died as a result of work-related injuries and the characteristics of their fatalities remained relatively constant from 1992 to 2000, although transportation-related fatalities appeared to increase. In both 1992 and 2000, 60 percent of the children who died as a result of work-related injuries were 16 and 17 years old, and 75 percent of children killed for whom race was available were white. In addition, throughout the decade, the percentage of children killed who worked in agriculture generally ranged from 33 percent to 44 percent, although it rose to 58 percent in 1998. While most characteristics changed little over the decade, the percentage of children who died as the result of a work-related transportation incident increased from 37 percent in 1992 to 52 percent in 2000. The two primary sources of nationwide data on work-related injuries, one collected by BLS from employer records and the other collected by NIOSH from emergency room records, differ substantially in their estimates of the number of working children injured each year, the types of injuries they sustained, and the trends over time.
For 1999, BLS reported that almost 13,000 children were injured on the job, while NIOSH estimated that over 80,000 children were injured on the job that year. BLS’s estimate came from records that employers are required to maintain for all work-related injuries serious enough to cause children to miss at least one day of work. NIOSH’s estimate was based on records of injuries treated in emergency rooms. In addition, different types of injuries were more prevalent among those reported by employers than those treated in emergency rooms. BLS data for 1992 to 2000 showed that sprains, strains, and tears were the most common injuries, while NIOSH data indicated that lacerations were the most common injury. The BLS and NIOSH data also, over time, indicate different trends in the numbers of work-related injuries to children. BLS data show that injuries have decreased by more than 40 percent, from about 22,000 in 1992 to about 13,000 in 1999. However, data from NIOSH suggest that injuries may have actually increased during the same period, from about 64,000 in 1992 to about 80,000 in 1999. (See fig. 8.) Due to limitations of both data sources, it is difficult to determine the true extent of injuries to children who work. Both sets of data underreport injuries to children. BLS captures only injuries serious enough to require at least one missed day of work; therefore, data on many injuries are not captured. Moreover, because most children work part-time, many children who are injured may not miss a full day of work because they were not scheduled to work after being injured. In addition, BLS does not collect data on children who are self-employed; those who work in federal, state, or local government; those who work for agricultural employers with fewer than 11 employees; and household workers. 
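The divergent BLS and NIOSH trends above can be checked with a simple percent-change calculation; this is a sketch using the rounded counts quoted in the text, not the underlying survey records:

```python
def percent_change(old: float, new: float) -> float:
    """Percent change from an earlier count to a later one (negative = decline)."""
    return (new - old) / old * 100

# Rounded counts quoted above: BLS-reported injuries fell from ~22,000 (1992)
# to ~13,000 (1999), while NIOSH estimates rose from ~64,000 to ~80,000.
print(round(percent_change(22_000, 13_000)))  # -41, a decline of more than 40 percent
print(round(percent_change(64_000, 80_000)))  # 25, an increase of 25 percent
```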
NIOSH data also underreport injuries because children who are injured may not inform hospital staff that their injuries are work-related and hospital staff may omit details of injuries that connect them to work. As a result, some injuries treated in emergency rooms may not be accurately counted as being work-related. Although BLS and NIOSH officials recognized that the data they collect underreport the number of work-related injuries to children, they emphasized that their data capture information on different types of injuries and can be used to complement each other. For example, certain types of injuries, such as lacerations, may be treated in emergency rooms but not necessitate one or more days away from work, while other types of injuries that may result in missed work days, such as sprains and strains, may be more commonly treated in physicians’ offices or outpatient clinics than in emergency rooms. Officials were not, however, able to explain why the trends indicated by the two sources differed so greatly. Although BLS data from employer records on work-related injuries to children are not complete, they provide information on the types of injuries sustained by children over the past decade. In general, the characteristics of occupational injuries to children and the characteristics of children who were injured as indicated by the BLS data changed little from 1992 to 2000. Demographic data for children injured show that 60 percent of the injuries occurred to boys, even though they accounted for only about half of all working children. Moreover, during this period, children aged 16 and over sustained the vast majority of all work-related injuries. BLS data also show that, throughout the decade, most children (84 percent) were injured while working in the two industries in which children are most likely to work—retail trade and services.
Within these industries, in 2000, children who worked in eating and drinking places, food stores, general merchandise stores, and health services had the largest numbers of injuries. As with fatalities, however, the likelihood of injury needs to be considered in addition to the actual number of injuries, and we found that the risk of injury was highest in some industries with the smallest numbers of injuries to children. For example, although many fewer children who worked in wholesale trade, transportation and public utilities, and manufacturing sustained a work-related injury than those who worked in retail trade and services, children working in those three industries had higher injury rates per hours worked than those working in retail trade or services. (See table 8.) Injuries result from a variety of causes, but most frequently from coming into contact with an object or equipment. (See fig. 10.) For example, in 1998 a 17-year-old worker fractured his hand when a piece of metal slipped off a power-driven machine he was operating and landed on his hand. Other common accidents were overexertion, falls, and contact with hot objects. The causes of work-related nonfatal injuries were fairly constant over this period. The nature of work-related injuries to children also did not change from 1992 to 2000. Children injured on the job most frequently sustained strains and tears, cuts and lacerations, and bruises and contusions. (See fig. 11.) In contrast to the decrease in the number of injuries indicated by the BLS data, the severity of injuries to children (as defined by median number of days away from work) reported by BLS has remained relatively constant throughout the decade.
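The rate-versus-count distinction discussed above, in which an industry with relatively few injuries can nonetheless pose the greatest risk, can be illustrated with a small sketch. The industry names come from the text, but every injury count and hours figure below is invented for illustration; the actual GAO calculations used BLS injury data and CPS hours estimates.

```python
# Hypothetical illustration: the industry with the most injuries is not
# necessarily the riskiest once hours worked are taken into account.
# All numbers below are invented.

# (injuries, total hours worked in thousands) by industry -- hypothetical
industry_data = {
    "retail trade":                    (5000, 900_000),
    "services":                        (4200, 800_000),
    "wholesale trade":                 (300,  30_000),
    "transportation/public utilities": (250,  20_000),
    "manufacturing":                   (400,  35_000),
}

def injury_rate(injuries, hours_thousands):
    """Injuries per 100 full-time-equivalent workers (2,000 hours each)."""
    fte = hours_thousands * 1000 / 2000
    return injuries / fte * 100

rates = {ind: injury_rate(*d) for ind, d in industry_data.items()}

# Relative risk expressed as a rate ratio against retail trade
baseline = rates["retail trade"]
for ind, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{ind:33s} rate={rate:5.2f}  relative risk={rate / baseline:4.1f}")
```

With these invented figures, wholesale trade, transportation and public utilities, and manufacturing each show higher rates than retail trade or services despite far fewer injuries, mirroring the pattern in table 8.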
About 65 percent of the injuries from 1992 to 2000 required children to miss 5 or fewer days of work, while 20 percent of the injuries required them to miss more than 10 days. (See fig. 12.) Labor has devoted substantial resources to ensuring compliance with the child labor provisions of FLSA over the past decade, and has continuously indicated that child labor is one of the agency’s highest priorities, but its efforts to improve employer compliance suffer from limitations that hamper its enforcement of the law. First, while Labor has recently begun to identify specific, measurable goals for the industries in which children are most likely to work, it continues to lack goals for those industries where children face significant risks. Second, it has not developed methods of measuring the success of many of its child labor compliance efforts. Third, Labor does not use all available data to plan its future efforts or measure its progress in improving employer compliance and ensuring that all working children are adequately protected under the law. Finally, Labor does not provide sufficient guidance and training to its regional and district offices on how to use their resources most effectively or help them consistently apply the child labor provisions of FLSA. Since 1990, Labor has developed better goals for increasing employer compliance with the child labor provisions of FLSA. Over the decade, its goals for improving compliance have moved from general statements about conducting investigations and education and outreach activities designed to increase child labor compliance to more specific goals that focus on improving compliance rates in certain low-wage industries such as agriculture and low-wage businesses such as nursing homes and residential care facilities—rates established through investigations of employers in those industries. Labor recently established specific goals for fiscal year 2003 for the industries in which children are most likely to work. 
WHD’s national goals in the early part of the decade focused on increasing the number of its child labor enforcement activities. For example, Labor’s goals for fiscal years 1990 and 1992 focused on its plans to dramatically increase the number of child labor investigations through nationwide efforts it called “Operation Child Watch.” These high-visibility efforts involved sending out hundreds of investigators for 1- and 2-day periods to conduct child labor investigations. As a result of these investigations, WHD found thousands of child labor violations and assessed millions of dollars in penalties. Operation Child Watch also gave WHD a sense of where it could expect to find child labor violations in the future. In fiscal year 2000, Labor considered setting specific goals for industries in which many children worked and were likely to be injured—the grocery and restaurant industries—but decided not to do so based on the results it received from a survey of employers. That year, WHD conducted a “survey” of employers designed to establish baseline compliance rates for employers in these industries. In the survey, WHD conducted investigations of statistically valid samples of supermarkets, full-service restaurants, and fast food restaurants that employ children and found that the overall compliance rates for these industries were 82 percent, 78 percent, and 70 percent, respectively. It also conducted investigations of employers with prior child labor violations and found that, although the compliance rates for two of the three industries were still lower than the overall compliance rates, the rates had improved from 0 percent previously to 72 percent for supermarkets with prior violations, 52 percent for full-service restaurants, and 72 percent for fast food restaurants.
Because WHD considered these rates to be sufficiently high, particularly compared to other industries, it decided not to set goals for these industries for improving compliance with the child labor provisions of FLSA. WHD made this decision despite the conclusions reached in its analysis of the results of the survey. The analysis stated that, while child labor compliance rates were higher in these three industries than overall FLSA compliance rates, it would be a “serious mistake” to compare the findings of the survey with compliance rates found in its surveys of other industries because all of the requirements of FLSA were not evaluated in the child labor compliance survey. In addition, the analysis stated that there were “still enormous child labor compliance issues that these industries need to address,” with at least 52,000 and as many as 220,000 children illegally employed in the three industries. The analysis also stated that the compliance rate in some segments of these industries was lower than the overall compliance rates. For example, it noted that although most children employed in full-service restaurants, 89 percent, were employed in compliance with the child labor provisions of FLSA, the compliance rate was much lower, 53 percent, for 14- and 15-year-olds working in these restaurants. More recently, WHD developed specific goals for improving overall employer compliance with FLSA in some low-wage industries, including compliance with the child labor provisions. For example, WHD’s fiscal year 2002 performance plan established a goal of 75 percent compliance with the minimum wage, overtime, and child labor provisions of FLSA for employers in the long-term health care industry (nursing homes and residential care facilities). In September 2002, in its comments on a draft of this report, Labor reported that WHD had developed specific, measurable goals in its draft fiscal year 2003 annual performance plan.
The draft plan includes goals for grocery stores, fast food restaurants, and full-service restaurants, industries in which children are most likely to work. WHD did not, however, establish goals for industries in which we found that children were likely to be employed illegally, such as manufacturing and construction, or for industries in which children have the highest rates of fatalities and nonfatal injuries, such as construction, wholesale trade, and transportation and public utilities. WHD has not developed adequate methods of measuring the success of all of its child labor compliance efforts. Over the decade, WHD’s measures of success have generally consisted of counting the number of its child labor compliance activities. For example, WHD measured the success of Operation Child Watch in the early 1990s by citing the large number of violations and penalties produced. More recently, WHD headquarters officials told us they look at trends in the number of child labor violations found each year, and noted that the numbers have dropped dramatically since the early 1990s. (See table 9.) It is not clear, however, what factors led to the decrease in the number of violations. It could have resulted from an increased rate of compliance among employers, a decrease in the number of child labor investigations conducted by WHD, or other factors. For example, since Operation Child Watch, the percentage of investigator hours devoted to child labor investigations has declined from a high of 11 percent to 7 percent in 2001, with a low of 5 percent in 1998. (See table 10.) While WHD headquarters officials use various factors to measure the success of their enforcement efforts, they told us that they do not know how to develop methods of measuring the success of their education and outreach activities.
Officials told us they could tell their activities generate attention if WHD receives a lot of media inquiries and that they count the number of times individuals access information from WHD’s Web site. However, WHD does not link its education and outreach activities with changes in employer behavior. While we recognize the inherent difficulties in assessing the impact on employer behavior of WHD’s education and outreach activities, particularly the difficulty of establishing a cause and effect relationship between changes in compliance and WHD’s activities, such measurement is important in distributing WHD’s limited resources. Moreover, we found that some district offices were trying to develop such methods. For example, officials at one district office told us that they felt it was important to obtain at least an indication of success by conducting investigations of employers after their education and outreach activities. They also said they were working with researchers at the University of Tennessee to develop methods of measuring the success of all of their child labor compliance activities. Despite having difficulty in developing methods of measuring the effectiveness of its education and outreach activities, WHD headquarters officials told us that they devote a lot of their resources—both at the national and local level—to education and outreach and other compliance assistance activities. Since Operation Child Watch, most of Labor’s national child labor campaigns have focused on providing education and outreach and other compliance assistance to employers and on increasing public awareness of the child labor provisions of FLSA. For example, from 1996 to 2001, WHD conducted its “Work Safe This Summer” initiative designed to reach children beginning summer jobs.
As part of this national initiative, WHD provided pamphlets and posters with information on the child labor provisions of FLSA, a Web site with safety tips and information about the law, and a public service announcement displayed on movie screens in 22 states and printed on 37 million shopping bags distributed by K-Mart. Labor’s newest national child labor initiative, “YouthRules!” announced by the Secretary in May 2002, is aimed at “educating young workers, parents and employers about workplace hazards, proper hours of work and ensuring that students’ education should be everyone’s first priority.” The initiative includes partnerships that WHD has formed with all levels of government as well as businesses, unions, and advocacy groups to provide training and educational materials, such as bookmarks that list some of the rules regarding jobs in which children ages 14 through 17 can work. (See fig. 13.) WHD does not use all of the data available to plan its future child labor compliance efforts or assess their success. WHD does not routinely use data from BLS to identify the industries and occupations in which children work, or data from BLS and NIOSH to determine the industries or occupations in which children are most likely to sustain work-related injuries. WHD also does not use information from its investigations database or reports on local child labor initiatives to assess the results of the regional and district offices’ child labor compliance efforts or hold them accountable for ensuring that the child labor provisions of FLSA are enforced adequately nationwide. WHD does not routinely obtain information from BLS on the industries and occupations in which children work—legally or illegally—or those in which they are most likely to be injured or killed to use in targeting its compliance efforts. As a result, WHD may not target sufficient resources to these areas. 
For example, we estimated that a significant proportion of illegally employed children worked in construction and manufacturing—industries in which WHD targets few of its child labor compliance efforts. We also found that children were more likely to be injured or killed while working in several industries in which WHD conducts few child labor investigations—construction, wholesale trade, and transportation and public utilities. WHD officials told us that they do not use data from BLS on the industries and occupations in which children work because the information is not easily accessible and because the data do not provide information at the local level that could be used to help plan local compliance efforts. However, CPS data are available at the regional level, and we developed state-level information by combining several years of data. For example, by combining CPS data for 4 years (1998 to 2001) and comparing it to the provisions of FLSA, we estimated the number of children illegally employed in each state. (See app. I for detailed information on these estimates.) WHD also does not use information from its investigations database to determine what types of strategies are most successful in detecting child labor violations in its investigations of employers. WHD conducts child labor investigations either as a result of receiving a complaint about a possible child labor violation or from a decision by a WHD office to conduct investigations of employers that focus specifically on children they employ (“self-initiated” investigations). WHD officials told us that, although they review every complaint they receive about a possible child labor violation, they do not receive many complaints about child labor. They also said that WHD’s national office does not require all district offices to conduct self-initiated child labor investigations.
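The multiyear pooling approach described above, combining the 1998 to 2001 CPS files so that state-level estimates become feasible where a single year’s sample is too small, can be sketched as follows. Every record, weight, and resulting count below is invented; actual CPS microdata carry far more elaborate survey weights, and this sketch shows only the basic idea of dividing weights by the number of pooled years so totals remain annual averages.

```python
# Hypothetical sketch of pooling several annual survey samples.
# Dividing each record's weight by the number of years pooled keeps the
# weighted totals interpretable as average annual counts.

from collections import defaultdict

# (year, state, age, weight) -- invented records of working children
records = [
    (1998, "TX", 15, 1200.0), (1999, "TX", 15, 1100.0),
    (2000, "TX", 15, 1300.0), (2001, "TX", 15, 1250.0),
    (1998, "NY", 16, 900.0),  (2001, "NY", 16, 950.0),
]

n_years = 4  # 1998 through 2001

def pooled_state_estimate(records, n_years):
    """Average annual weighted count by state across the pooled years."""
    totals = defaultdict(float)
    for year, state, age, weight in records:
        totals[state] += weight / n_years
    return dict(totals)

estimates = pooled_state_estimate(records, n_years)
# e.g., TX: (1200 + 1100 + 1300 + 1250) / 4 = 1212.5
```

In practice, each pooled record would also be screened against the FLSA provisions (age, occupation, hours) before being counted as illegally employed.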
In fact, since Operation Child Watch, the only national initiative that required all district offices to conduct child labor investigations was the child labor compliance survey WHD conducted of the grocery and restaurant industries in fiscal year 2000. WHD officials also told us that it is not necessary for the national office to require all of its district offices to conduct self-initiated child labor investigations because investigators look for child labor violations during all investigations, not just those that focus on child labor. However, in reviewing data we requested from WHD’s investigations database, we found that, although self-initiated child labor investigations accounted for only 5 percent of all investigations completed in fiscal year 2001, these investigations accounted for almost 40 percent of all cases in which child labor violations were found. (See table 11.) In other words, WHD would have missed a large proportion of the cases with child labor violations if some district offices had not conducted self-initiated child labor investigations in the absence of a national directive to do so. WHD headquarters officials also do not use data from its investigations database to ensure that district offices nationwide provide an adequate level of protection to all children who work by conducting at least some minimum number of child labor investigations. From the data we obtained for fiscal year 2001, we found that the number of investigations conducted by each district office and by WHD’s five regions varied widely and that a few district offices accounted for a large number of the child labor investigations and cases in which child labor violations were found.
For example, in fiscal year 2001, we found that the total number of child labor investigations (both those initiated by complaints and self-initiated investigations) conducted by the district offices in each region varied widely, from a total of 942 child labor investigations in the northeast region to a total of 227 investigations in the southwest region. We also found that the number of investigations conducted by each of the district offices varied widely; 8 of the 50 district offices accounted for 43 percent of all child labor investigations completed in fiscal year 2001, and 9 of the 50 district offices completed fewer than 10 self-initiated child labor investigations in fiscal year 2001. WHD headquarters officials were not able to explain these regional and district variances. Although WHD’s regional and district offices conduct many local child labor initiatives, WHD headquarters officials do not use the quarterly reports prepared by these offices on their local initiatives to ensure that, nationwide, local child labor initiatives provide adequate protection for all children who work. WHD headquarters officials told us that all regional and district offices should have some local child labor initiatives, unless they have a reasonable justification not to, such as other higher-priority compliance efforts, and that, in recent years, every district office has had some local child labor initiatives. However, the reports we reviewed for fiscal years 1999 through 2002 showed that, in some years, many districts had no child labor initiatives and the number of child labor initiatives varied significantly from one region to another. For example, in fiscal year 2000, almost all of the district offices in the northeast region had at least one child labor initiative and two regions had regionwide initiatives that required all of their district offices to conduct child labor investigations. 
Some of the district offices in the southwest region, however, had no local child labor initiatives planned for fiscal years 1999 through 2002, although our estimates of illegally employed children showed greater numbers of children in the South to be illegally employed than in the Northeast. While WHD leaves many of the decisions about how to deploy enforcement resources to its regional and district offices, it does not ensure that these offices receive sufficient guidance and training to properly target or carry out their child labor compliance efforts. In addition, some regional and district offices do not use data on previous child labor violations to target their compliance efforts because WHD has not provided all staff with adequate training on how to obtain reports from its investigations database. WHD’s national office does not provide regional and district offices with adequate guidance, such as criteria to use in targeting their child labor compliance efforts. Instead, district offices generally rely on the anecdotal knowledge they have gained from previous investigations to plan their future compliance efforts. WHD’s national office also does not provide regional or district offices with data from BLS on the industries and occupations in which children work, legally or illegally, and are most likely to be injured. Officials in one district office we visited told us that, because of their past enforcement efforts, illegal child labor was not a problem in their area and thus, they did not devote a lot of their enforcement resources to targeting child labor violations. However, our analysis showed that a large percentage of children working in this state worked in those industries where children were most likely to be injured.
In addition, our estimates of the number of illegally employed children in each state showed that a larger proportion of 15-year-olds in that state were likely to be employed illegally than those in any other state we visited. While these estimates do not prove that illegal child labor was a problem in the area of the state covered by that district office, they are an indication that potential violations exist undetected. WHD’s national office also does not provide specific guidance to regional and district offices on when to assess penalties for child labor violations. As a result, district office practices for penalty assessment and collection vary significantly. In reviewing investigations with child labor violations completed in fiscal years 2000 and 2001 at the district offices we visited, we found that district officials sometimes reduced the penalties for child labor violations or did not assess any penalties for child labor violations while in other, similar cases, they assessed penalties. For example, one district office we visited generally did not waive penalties for child labor violations it found, but another district office waived many of the penalties for the child labor violations it found, including a violation of a Hazardous Occupations Order involving a child who was working with dangerous equipment. We also found that the average percentage of the penalties collected varied significantly among the district offices we visited—from a low of 49 percent of the original amount assessed to a high of 82 percent for investigations completed in fiscal years 2000 and 2001. Finally, WHD does not provide adequate training to all of its regional and district offices on how to obtain reports from the investigations database to use in targeting their child labor compliance efforts.
Two of the five regional child labor coordinators and officials at two of the district offices we visited indicated that they had not been trained on how to obtain reports from WHD’s database. Officials at the district offices we visited told us they had staff members with the skills needed to obtain reports from the investigations database that enabled them to identify where they had found child labor violations in the past to use in targeting their future compliance efforts. Officials at two of these offices, however, said that they had only one person with these skills and that, if they lost this person, they would not be able to obtain the reports they need from the investigations database. By allowing and encouraging children to work, the nation acknowledges that children can derive many benefits from working, such as independence, confidence, and responsibility. However, work may also have negative consequences for children’s physical, emotional, and educational development. In recognition of this, Labor devotes significant attention to enforcing the child labor provisions of FLSA and ensuring safe and productive work experiences for children. Over the last decade, its efforts have ranged from conducting investigations to carrying out nationwide education and outreach campaigns. Despite the importance that WHD places on ensuring the safety of children who work, it faces difficulty in showing that its efforts have made a difference in improving the working conditions of children. WHD’s ability to effectively enforce the child labor provisions of FLSA is dependent, in part, upon the establishment of a sound performance management system that provides program managers with the type of information they need to evaluate the effectiveness of their child labor compliance efforts and ensure that their limited resources are used in the most effective manner. 
WHD has taken some steps towards developing such a system, as reflected in the program goals it has established and refined in recent years. However, WHD continues to lack goals for industries that have high rates of injuries and fatalities. Further, WHD’s performance management system does not allow managers to fully assess the impact of its child labor compliance efforts. As a result, WHD lacks a sound basis for determining, among other things, the extent to which it should devote resources to child labor investigations versus its education and outreach and other compliance assistance activities. Its resource allocation decisions are also hindered because WHD does not routinely use all of the data available from BLS and NIOSH to target its child labor compliance efforts to the places that are most dangerous for children and in which they are most likely to work illegally. Similarly, by not fully utilizing information from its own investigations database to evaluate the child labor compliance efforts of its regional and district offices, WHD is not able to target its local efforts most effectively or hold local offices accountable at the national level for enforcing the child labor provisions of FLSA and ensuring that all children who work are adequately protected. Finally, even if WHD were to use all existing data as effectively as possible, it would still not be able to use the data to better target its child labor compliance efforts for children younger than 15 because BLS’s CPS data do not include information on 14-year-olds who work and NLSY97 does not have information on children born after 1984. We recognize, however, that collecting these data will require additional resources. Regional and district offices also need sufficient guidance and training to effectively carry out their mission. 
Because WHD’s national office leaves many decisions about the appropriate amount of resources to devote to child labor compliance efforts to its regional and district offices, but does not provide them with criteria for targeting their efforts or data on industries and occupations in which children work and are most likely to be injured, these offices may not be using their limited resources as efficiently as possible. Additionally, because district offices do not receive specific guidance from WHD’s national office about when to assess penalties for child labor violations, nationwide, they do not consistently assess penalties for these violations. Finally, because staff in WHD’s district offices do not receive adequate training on how to use WHD’s investigations database, they are not able to effectively use all available information to help them assess their past child labor compliance efforts and better target their future efforts. To strengthen WHD’s ability to evaluate the effectiveness of its child labor compliance efforts and ensure that limited resources are used in the most effective manner, the Secretary of Labor should

- establish additional specific, measurable goals for WHD’s child labor compliance efforts for industries in which children are most likely to be injured or killed;
- develop methods of measuring the success of WHD’s child labor compliance efforts, including its education and outreach activities;
- routinely obtain data from BLS and NIOSH on the industries, occupations, and locations in which children work—both legally and illegally—and sustain work-related injuries and use them to target WHD’s child labor compliance efforts;
- routinely obtain and review data from its investigations database on the number and types of investigations conducted by WHD’s district offices and child labor violations found and use these data to (1) ensure that WHD’s resources are deployed in the most effective manner and (2) hold regional and local offices accountable at the national level for ensuring that all children nationwide are protected under the child labor provisions of FLSA; and
- consider enhancing the data collected on children who work by expanding CPS to include 14-year-olds or beginning additional cohorts of NLSY at regular intervals, such as every five years.

To provide WHD’s regional and district offices with the information they need to properly plan and implement their child labor compliance efforts, the Secretary of Labor should

- provide better guidance to WHD’s regional and district offices on how to improve employer compliance and specific guidance on when to assess penalties for child labor violations; and
- provide training to all WHD staff on how to obtain information from the investigations database.

We provided a draft of this report to Labor for comment. Overall, Labor disagreed with many of the conclusions and recommendations in the draft. Labor’s comments and our specific responses are included in app. IV. Labor disagreed with our recommendation to establish specific, measurable goals for industries in which most children work and in which they are most likely to be injured or killed. The agency noted that WHD’s draft fiscal year 2003 performance plan includes specific goals for the industries that employ the most children. One goal, for example, is to decrease the percentage of grocery stores and full-service restaurants with repeat child labor violations by 2 percent. We think these are positive steps and have revised our report to reflect these efforts. However, the draft performance plan does not include goals for other industries because Labor does not believe there are sufficient numbers of children working in these industries to justify setting such goals. We believe that this approach reduces attention to some of the industries that pose significant risks to children. For example, there are no specific goals for construction, which has high fatality and injury rates.
Labor also disagreed with the conclusions supporting our recommendation to develop methods of measuring the success of its child labor compliance efforts. In this regard, Labor noted that it had recently decided to repeat its compliance survey of grocery stores, full-service restaurants, and fast food restaurants in fiscal year 2004. While we are encouraged by this step, Labor has not developed methods of measuring the success of other child labor compliance efforts, particularly its education and outreach activities, which, admittedly, are difficult to evaluate. In addition, it has not conducted child labor compliance surveys of other industries. Finally, Labor took issue with our recommendation to routinely obtain data from BLS and NIOSH on the industries, occupations, and locations in which children work and sustain injuries and use them to target WHD’s child labor compliance efforts. Labor maintains that it does routinely use such data. However, we found that WHD has no procedures for routinely collecting data from BLS and NIOSH to plan its child labor enforcement efforts. Moreover, the district offices we visited confirmed that they do not receive this information to use in planning their local child labor efforts. Labor agreed with the remainder of our recommendations or agreed to take action on them. For example, it agreed to obtain and review data from the WHISARD database to better hold local offices accountable, explore the possibilities of collecting additional data through CPS or NLSY, provide better guidance to WHD’s regional and district offices concerning penalty assessment, and provide training to WHD staff on how to obtain information from the investigations database. Labor also provided technical comments and clarifications, which we incorporated in the report as appropriate. 
We are sending copies of this report to the Secretary of Labor, the Wage and Hour Administrator, the Commissioner of the Bureau of Labor Statistics, and other interested parties. Copies will be made available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-7215 or Lori Rectanus at (202) 512-9847. Other major contributors are listed in appendix V. This appendix discusses in more detail our scope and methodology for (1) estimating the number of working children in the United States who are employed illegally, (2) using data from the National Longitudinal Survey of Youth (NLSY) to estimate the number of 12- to 14-year-olds in the United States who work, (3) computing fatality and injury rates for working children, and (4) using data on occupational injuries to calculate relative risk. Most of the estimates in the report were calculated using data obtained from samples and, therefore, have sampling errors associated with them. All differences included in the report were tested for statistical significance at the 0.05 level. To estimate the number of children who are employed illegally in the United States—children employed in violation of the child labor provisions of the Fair Labor Standards Act (FLSA)—we compared data obtained from the Bureau of Labor Statistics’ (BLS) Current Population Survey (CPS) on children in the United States who work to the child labor provisions of FLSA. Because CPS does not capture data on children in the United States who work in occupations that are illegal under criminal statutes, such as drug dealing and prostitution, we could not include them in our estimates. 
In developing our estimates, we worked with Douglas Kruse, Ph.D., a professor at Rutgers University’s School of Management and Labor Relations and a research associate with the National Bureau of Economic Research, who updated our initial estimate of illegal child labor in the 1991 GAO report on child labor using CPS data from January 1995 to September 1997. We compared CPS data for 1990, 1996, and 2001 to the child labor provisions of FLSA by defining the base population and identifying illegal employment by occupation and hours. We further refined this analysis through the use of logistic regression for illegal occupations and cross-tabulations for hours. We used data from 36 monthly surveys from CPS for the following years: 1990, 1996, and 2001. For each year, we created data sets with children aged 15 to 17. Each year’s data set contained roughly 70,000 observations. We defined the base population by using the child labor provisions of FLSA as a guide. Because FLSA does not apply to children who are self-employed and the provisions are much less stringent for children who work in agriculture than for those who work in nonagricultural jobs, we eliminated children who were self-employed and those who worked in agriculture from the data set. Key questions concerning child employment in a family business were not added to CPS until 1994. Therefore, in order to compare results for 2001 with results for 1990, we included children who worked in a family business in the 2001 base population, even though these children are not covered by FLSA. (See app. III for information on the number of children in the United States who worked in a family business in 2001.) 
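The base-population screen described above amounts to a simple record filter. The sketch below is purely illustrative, not the actual analysis code; the field names (age, self_employed, industry) are hypothetical stand-ins for the corresponding CPS variables.

```python
def in_base_population(record):
    """Return True if a CPS record falls in the analysis base population:
    children aged 15 to 17 who are neither self-employed nor working in
    agriculture, both of which FLSA treats differently and which were
    therefore excluded from the data set. Field names are hypothetical
    stand-ins for CPS variables."""
    return (
        15 <= record["age"] <= 17
        and not record["self_employed"]
        and record["industry"] != "agriculture"
    )

# A 16-year-old retail worker stays in; a self-employed child drops out.
print(in_base_population({"age": 16, "self_employed": False, "industry": "retail"}))  # True
print(in_base_population({"age": 16, "self_employed": True, "industry": "retail"}))   # False
```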
To obtain numbers of children who worked illegally, we averaged the results of the nine school months to obtain the number of children who worked illegally in school months (defined in our report as September through May) and the three summer months to obtain the number of children who worked illegally in summer months (defined as June through August). To simplify the discussion on racial differences, we combined race and ethnicity categories. CPS treats ethnicity separately from race. As a result, a child can be identified in CPS as both “white” and “Hispanic,” or as both “black” and “Hispanic.” To separately determine the effect of illegal employment on whites, blacks, and Hispanics, we coded children whose race was identified as “white” or “black” and whose ethnicity was identified as “Hispanic” as “Hispanic” only. For children whose race was identified as “black” or “white” and whose ethnicity was identified as “non-Hispanic,” we included them in the category of race identified (i.e., as “black” or “white” only). This resulted in four racial categories: white, black, Hispanic, and other. For 1996 and 2001, “Other” included Asians and Native Americans. For 1990, “Other” included Asians, Native Americans, and “Other,” which contained anyone who identified a race that did not fall within the other categories listed.

Hazardous Occupations, 15- to 17-year-olds

Using methods developed in our previous report and refined by Douglas Kruse, we defined a list of 100 occupational codes that correspond to activities prohibited by the Hazardous Occupations Orders. Any child in the base population with one of these codes was included in our estimates of children employed illegally. We used a similar method to identify occupations specifically prohibited for 15-year-olds. In this instance, 216 occupational codes matched the descriptions of activities prohibited for children under 16. 
Any child identified in the base population with one of these codes was included in our estimates of children employed illegally. Due to the small sample sizes of children identified in these groups, cross-tabulations were not statistically significant at the 0.05 level. To identify the children most likely to work in illegal occupations, we conducted a logistic regression. To determine demographic differences in the likelihood of working illegally, the following variables were included in the model: sex, race, urbanicity (metropolitan or non-metropolitan area), income, citizenship, and region. Differences that were found to be statistically significant are indicated in table 12 below. We used the CPS variable that describes the number of hours actually worked each week to determine whether 15-year-olds worked more hours each week than allowed under FLSA. Children who worked over 18 hours in a week in school months, from September to May, were included in our estimates of illegal employment. Children who worked over 40 hours a week in summer months, June through August, were also included in our estimates. We further refined this analysis of the number of 15-year-olds who worked more hours than allowed under the law by conducting cross-tabulations for race, industry, occupation, sex, geographic region, metropolitan area, and income. To estimate levels of illegal employment in each region, we used data for the four regions defined in CPS: Midwest, Northeast, South, and West. 
The states included in each region are Midwest: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin; Northeast: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont; South: Alabama, Arkansas, Delaware, District of Columbia, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, and West Virginia; and West: Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming. To estimate illegal employment in each state, we combined CPS data for 4 years (48 monthly surveys) from 1998 to 2001. Because observations at the state level were relatively small, combining data for 4 years increased the sample size and allowed us to make estimates about the state population. The specific nature of child labor laws makes it difficult to reconcile them with generalized data sources. FLSA and its implementing regulations and guidance describe activities that children are prohibited from performing. Although CPS data on occupations are fairly specific, they do not list the actual activities children perform on the job. Therefore, it is not possible to determine whether each child who works in a prohibited occupation category is actually performing prohibited activities. As a result, our estimates of illegal employment are both under- and overstated. The estimates are understated, probably by a significant amount, because many prohibited activities, particularly the use of the many types of equipment that children are prohibited from operating under the Hazardous Occupations Orders, cannot be identified by reviewing the occupational categories in CPS. For example, all children younger than 18 are prohibited from using meat slicers. 
However, although CPS shows that many children work in “food services,” it does not indicate how many of them work with meat slicers. The estimates are also overstated because they include some activities that are allowed in occupations that are prohibited. For example, because work as a public messenger is expressly prohibited for 15-year-olds, we included children who worked in the occupation category “mail carrier” in our estimates of illegal employment. However, these children could have been delivering mail within an office or sorting mail in a mailroom, activities that are allowed. It was not, however, possible for us to determine the extent to which our estimates of illegal employment are under- or overstated as a result of these limitations. In addition, our estimates of the number of 15-year-olds are understated because CPS does not capture the information necessary to determine the number of 15-year-olds who work more hours than allowed on a daily basis or at prohibited times of day. CPS does not collect information on either the length of the workday or its starting and ending times. As with prohibited occupations, however, it was not possible to determine the extent to which our estimates are understated as a result of these limitations. However, we believe that the amount of this understatement is large because many of the child labor violations found by WHD in its investigations of employers are violations of the restrictions on the number of hours and times of day that children are allowed to work. Finally, our estimates are overstated to the extent that they include children whose employment is not covered under FLSA, either because their employers do not meet the threshold for enterprise coverage under FLSA or because the children do not work in interstate commerce. 
CPS does not collect information on the characteristics of the employers for which children work, such as sales volume or other proxies for annual dollar volume of sales, or information that could be used to determine whether children work in interstate commerce. As with the other limitations, however, the extent to which our estimates are overstated because not all children work in employment covered under FLSA cannot be determined. We used data from NLSY97, the survey of children born from 1980 to 1984, to estimate the number of 12- to 14-year-olds who work. In NLSY, children are asked whether they worked in each of the 52 weeks in a given year. We coded children noted in NLSY97 as being born in 1982 as 14 years old in 1996 and children born in 1983 as 14 years old in 1997. To determine employment for these children in 1996 and 1997, we identified the weeks that contained the 12th day of the month, since the reference period for NLSY is the week containing the 12th day of a particular month. We determined the frequency of employed children in each of the 16 weeks and averaged the result over the total number of months in each period. We performed these calculations for three periods in 1996 and one period in 1997, as shown in table 4 in this report. Most employment questions in NLSY deal with events that take place after the child’s 14th birthday. As a result, we used a different method to estimate employment for children younger than 14 years of age. One of the variables in NLSY97 identified children who indicated that they had received income from a job in the previous year (1996). We coded children born in 1983 and 1984 as being 13 years of age and 12 years of age, respectively. We coded those children who said they had received income from a job as having been employed at some point in the previous year. 
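The reference-week lookup described above can be illustrated with a short sketch. One assumption here is not stated in the report: that the 52 survey weeks are numbered consecutively starting from January 1, with each week spanning 7 days.

```python
from datetime import date

def reference_weeks(year):
    """For each month of `year`, return the index (1-52) of the week
    containing the 12th day of that month. Assumes week 1 begins on
    January 1 and each week spans 7 days, a simplifying assumption;
    the report does not state NLSY's exact week-numbering convention."""
    jan1 = date(year, 1, 1)
    return {
        month: (date(year, month, 12) - jan1).days // 7 + 1
        for month in range(1, 13)
    }

# The 12th of January always falls in week 2 under this convention.
print(reference_weeks(1996)[1])  # 2
```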
To calculate fatality and injury rates for children, we used data from three BLS sources: (1) data on fatal work-related injuries from the Census of Fatal Occupational Injuries, (2) data on nonfatal injuries from the Survey of Occupational Injuries and Illnesses (SOII), and (3) data on the hours children work from CPS. We used the Census of Fatal Occupational Injuries and SOII to identify the number and characteristics of 15- to 17-year-olds who died as a result of a work-related injury or sustained a nonfatal injury from 1992 to 2000. Because the number of fatalities in each year was relatively low, we aggregated the data over the decade to provide statistically reliable information. We did not aggregate the data on nonfatal injuries over the decade because the number of injuries reported was large. We calculated fatality and injury rates by using the total number of hours children worked as indicated in CPS. We, and some other researchers, consider this method more appropriate than using the number of children who work to compute fatality and injury rates because many children work part-time. Using the number of hours children work provides a better measure of exposure to injury and leads to a more accurate assessment of risk than using the number of children. We used data from 108 monthly surveys to obtain the total number of hours worked by 15- to 17-year-olds between 1992 and 2000. In computing the rates for nonfatal injuries, we excluded children who worked for the government or were self-employed because they are excluded from the data for nonfatal injuries. We totaled the hours worked each week by all children for each year, and we also totaled hours by sex, race, and industry. Because the data sets for fatal and nonfatal injuries use different industry groupings than CPS, we combined industries so that they would reflect the same industry groupings. 
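The rate computation described above reduces to a one-line formula. A minimal sketch follows; the numbers in the example are purely illustrative, not figures from this report.

```python
def rate_per_100k_hours(events, total_hours):
    """Fatalities or injuries per 100,000 hours worked. Hours worked,
    rather than a count of working children, is the exposure measure
    because most working children work part time."""
    return events / total_hours * 100_000

# Illustrative only: 200 injuries over 10 million hours worked
# yields a rate of 2.0 injuries per 100,000 hours.
print(rate_per_100k_hours(200, 10_000_000))  # 2.0
```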
Because we were not able to obtain SOII data that included only 15- to 17-year-olds by industry, we calculated industry rates by dividing the total number of injuries to children under 18 by the total hours worked by 15- to 17-year-olds in each industry. Since CPS data do not include hours worked by children younger than 15, we were unable to include the hours worked by these children in our calculations. However, since children under 15 account for less than 1 percent of all injuries to children, this discrepancy did not affect our estimates. To calculate the rate for each category, we divided the total number of fatalities or injuries by the total number of hours children worked and multiplied the result by 100,000. This figure represents the number of fatalities or injuries per 100,000 hours worked. Several legislative proposals were introduced in the 107th Congress to strengthen the child labor provisions of FLSA. In July 2002, the Senate passed the Traveling Sales Crew Protection Act, legislation designed to ensure that child employees of traveling sales crews are protected under FLSA. The bill includes provisions that prohibit children under age 18 from working in traveling sales work in which they remain away from home for more than 24 hours. Other legislative proposals introduced in the 107th Congress are listed in table 13 below. The number and characteristics of children who worked from 1990 to 2001, as tabulated from the Current Population Survey (CPS) data we obtained from the Bureau of Labor Statistics (BLS), are detailed in the tables below. 1. Our draft report acknowledged the difficulty of estimating the number of children employed in violation of the child labor provisions of FLSA and the limitations of using CPS data to provide such estimates. While recognizing these limitations, we continue to believe that our analysis provides the best estimates that can be made with available data. 
In fact, we used the same methods as those used to develop estimates cited by BLS and WHD in several of their publications. For example, in BLS’s November 2000 Report on the Youth Labor Force and WHD’s May 2000 materials prepared for its “Spring Into Safety” child labor campaign, they cited estimates of the number of children illegally employed prepared by Douglas Kruse, the researcher with whom we worked in developing our estimates. WHD stated that his 1997 study provided “the best currently available estimate of the number of children employed in violation of Federal or State child labor laws.” 2. Labor cites the drop in workplace injuries indicated by the BLS data from the Survey of Occupational Injuries and Illnesses as evidence that “the workplace is becoming safer for young workers,” but it ignores NIOSH data from emergency room records that indicate that workplace injuries to children are on the rise. 3. Although we highlighted many of Labor’s efforts to ensure compliance with the child labor provisions of FLSA, our report was not intended to describe all of the agency’s child labor enforcement efforts over the past ten years but rather, as requested, to determine how well Labor enforces these provisions and to make recommendations for improvement. 4. WHD is correct in its statement that the number of investigator hours devoted to child labor enforcement in fiscal year 2001, 74,000 hours, represents the largest investment of investigator time by WHD in the last 5 years. However, our report, which focuses on WHD’s efforts since 1990, shows that this is lower than the resources dedicated to child labor enforcement in the first half of the decade. For example, in fiscal years 1990, 1991, and 1992, WHD investigators devoted 143,000 hours, 83,000 hours, and 105,000 hours, respectively, to child labor enforcement. 5. 
We did not provide information on the number of hours spent by WHD investigators on activities other than investigations because, although the investigations database, WHISARD, can track this information, we found, and WHD officials confirmed, that it did not contain complete information on the time investigators spend on non-enforcement activities such as education and outreach. 6. We continue to believe that the trends in the number of child labor violations found and the amount of penalties assessed—including the fact that the numbers have declined significantly since 1990—are not valid indicators of WHD’s commitment to child labor compliance, nor are they evidence of the success of its efforts to ensure compliance with the child labor provisions of FLSA. As noted in our report, because we do not know what factors led to the changes in the number of violations, it is unclear whether the increase in the number of violations found by WHD in fiscal year 2002 compared to fiscal year 2001 indicates a growing problem with child labor, improvements in WHD’s efforts to identify violations, or other factors. 7. This statement is misleading. Our draft report recognized that, in 2002, the maximum penalty for a child labor violation was raised from $10,000 to $11,000. We did not recognize WHD’s efforts to publish a final rule on the 1999 regulatory proposal to update some of the hazardous occupations orders or issue new regulations because these actions have not been completed. 8. This statement is also misleading. Our draft report acknowledged that NIOSH completed a review of the hazardous occupations orders for Labor, which made several recommendations for changes to the orders. We also noted that Labor was in the process of reviewing the report and deciding the actions it would take in response to NIOSH’s recommendations. 9. 
We believe that our report provides a comprehensive and balanced picture of the effectiveness of Labor’s efforts to ensure compliance with the child labor provisions of FLSA and contains important information on how these efforts could be improved. 10. The information in our draft report was based on our review of WHD’s fiscal year 2001 and 2002 annual performance plans. Neither of these plans contained specific, measurable goals for the industries in which most children work or in which children are most likely to sustain a serious injury. However, WHD’s draft performance plan for fiscal year 2003, which we recently obtained, appears to be a step in the right direction in terms of setting more specific, measurable goals, although it does not contain goals for some of the industries with high injury and fatality rates. Accordingly, we have revised the information in our report and the associated recommendations. 11. While the referenced employer compliance survey is an important step in setting goals for improving employer compliance, it is only one of the steps required. The baseline data become the starting point for measuring progress in improving compliance. However, after completing the survey in fiscal year 2000, WHD chose not to set specific goals for improving compliance in the grocery and restaurant industries in its fiscal year 2001 and 2002 performance plans. It made this decision even though the rate of non-compliance for fast food restaurants was 30 percent, and the non-compliance rate for full service restaurants with previous child labor violations was almost 50 percent. 12. We do not believe that the most important indicator of compliance is the percentage of children who are employed illegally. Although we agree it is an indicator that should be reviewed, a few companies that employ large numbers of children can have a disproportionate effect on the numbers. 
Therefore, we believe the most important indicator of compliance is the percentage of employers who are in compliance with the child labor provisions of FLSA, particularly because employers are responsible for maintaining compliance with the provisions of the law. 13. As noted previously, prior to fiscal year 2003, WHD’s annual performance plans did not contain specific, measurable goals for improving compliance with the child labor provisions of FLSA for many of the industries in which most children work or are likely to sustain work-related injuries. See GAO comment 10. 14. As Labor’s quotation indicates, WHD’s fiscal year 2002 performance plan did not establish specific goals for the industries in which most children work or are likely to sustain work-related injuries. See GAO comment 10. 15. As noted previously, we have revised the report to reflect the goals contained in WHD’s draft performance plan for fiscal year 2003. See GAO comment 10. 16. The fact that WHD conducts investigations in these industries does not negate the need for WHD to set goals for increasing compliance with the child labor provisions of FLSA for these industries. Because WHD conducts many of its investigations in response to complaints about possible violations, the resource allocations indicated in Labor’s charts may reflect WHD’s responses to complaints as much as the results of its efforts to target child labor violations in these industries. 17. We disagree that our analysis of fatality rates is a “flawed interpretation of the BLS and NIOSH data” and that it is more appropriate to use the actual numbers of children employed, injured, and killed to determine risk. While we understand the importance of reviewing the number of children who work in each industry and the numbers of children injured or killed, we believe that injury and fatality rates are important indicators that WHD should use in allocating its resources. 
As noted in an article by a BLS researcher about fatality rates, “although counts are informative in identifying worker groups that experience large numbers of fatalities, they do not by themselves measure risk. To quantify risk, the data on workplace fatalities must be associated with a measure of worker exposure to risk, such as employment or hours worked. The number of hours worked is preferable because different workers spend variable hours on the job in a given time period (e.g. year), and therefore have different lengths of exposure to workplace hazards.” We believe it is particularly important to use the number of hours worked, rather than simply the number of children employed, to calculate injury and fatality rates because so many children (87 percent) work part time. 18. For the transportation and public utilities industry, Labor mentions only fatality rates and the number of fatalities. In addition to high fatality rates for this industry, we found that the injury rate was high compared to other industries. Therefore, we believe that it is important for WHD to focus some of its resources on reviewing child labor in this industry. 19. It is not clear that establishing a goal for improving compliance in the transportation and public utilities industry would be unrealistic and counterproductive. Given the limitations of the estimates of illegal employment, we question the agency’s estimate that only 4,400 youth are employed illegally in this industry. The statistics on the number of violations found, which Labor cites as evidence that violations are not prevalent in this industry, are also questionable because it is unclear whether WHD would have found more child labor violations if it had targeted employers in this industry. 20. Labor cites only the fatality rates and number of deaths in the wholesale trade industry, although we found that the injury rates for children working in this industry were also relatively high. 
While the number of children working in this industry may be small, we believe the injury rates to be sufficiently high to warrant further WHD attention. WHD should look to the extensive experience of its staff cited on page 10 of Labor’s comments to identify the areas of the country in which children are most likely to work in these industries. 21. As with the transportation and public utilities and the wholesale trade industries, Labor only cites the fatality rates for the construction industry. Again, we noted that the injury rate for children who work in this industry was relatively high. In addition, we question how WHD determined that “many of these specialty trade contractors are so small that they do not meet the FLSA coverage criteria” because this information is not tracked in WHD’s investigations database and we found no data on employer coverage under FLSA. We also found, in our discussions with WHD officials, that establishing coverage for employers in the construction industry was not a problem for some of the district offices we visited, particularly those on the east coast. 22. CPS data on occupations show that many children are employed as truck drivers. However, our draft report incorrectly indicated that 43 percent of all children who are employed illegally are employed as truck drivers. We clarified the report to show that the percentage of children who are employed illegally as truck drivers represents 42 percent of children who are employed in prohibited occupations. 23. We continue to believe that the injury and fatality rates for children who work in the construction industry warrant WHD attention. We also commend WHD’s efforts to fund research by NIOSH on the health and safety of young workers in the industry. 24. As noted in our report, manufacturing has one of the highest injury rates for children. 
WHD’s opinion that targeting additional resources on this industry would not be a wise resource investment and would have no impact on occupational injuries and fatalities for children working in this industry does not factor in these high injury and fatality rates. In fact, some of the most serious child labor violations WHD found occurred in sawmills and companies that produce wood pallets. 25. We commend WHD’s recent decision to conduct a second compliance survey of the grocery, full-service restaurant, and fast-food restaurant industries in fiscal year 2004. However, as stated in our report, WHD has not developed methods of measuring the success of all of its child labor compliance efforts, particularly its education and outreach and other compliance assistance activities. In addition, WHD has not conducted child labor compliance surveys of other industries. 26. In our conversations with WHD headquarters officials and in WHD publications, the number of investigations, violations found, and civil monetary penalties were frequently cited as indicators of the success of WHD’s child labor enforcement efforts. In fact, on page 3 of the agency’s comments on our report, Labor cites the increase from fiscal years 2000 to 2001 in the number of child labor violations and the total amount of penalties assessed as evidence of “WHD’s continued strong commitment of resources to child labor compliance.” 27. We disagree with Labor’s statement that WHD routinely uses BLS and NIOSH data to plan the allocation of its child labor resources. We found that WHD has no procedures for routinely obtaining data from BLS and NIOSH to plan its child labor enforcement efforts, and the district offices we visited confirmed that they do not receive this information for use in planning their local child labor efforts. 
Second, Labor’s statement that WHD routinely uses state workers’ compensation data to plan its enforcement efforts must be qualified to indicate that many states refuse to release this information to WHD because of privacy concerns. Third, WHD has, in the past, used only limited data from its WHISARD investigations database to plan its enforcement efforts. In some cases, the reports we requested from the database, such as reports showing the source of investigations completed in fiscal years 2000 and 2001, were the first such reports WHD had ever run. Finally, we believe the anecdotal information on injuries and fatalities maintained by WHD’s national office does not provide enough information to be useful in making WHD’s resource allocation decisions. The information on fatalities and serious injuries to children tracked by WHD’s national office and provided to us in May 2002 contained data on only 7 fatalities and 36 injuries reviewed by WHD in fiscal year 2001 and 7 fatalities and 32 injuries reviewed in fiscal year 2002. 28. We agree with Labor’s statement that our recommendation to consider enhancing the data collected on children who work by expanding CPS to include 14-year-olds or beginning additional cohorts of NLSY cannot be implemented without additional study and resources, which is why we recommended that the Secretary “consider” these actions instead of recommending that they be taken. We clarified our conclusions to acknowledge that collecting these data may require additional resources. 29. We continue to believe that WHD’s regional and district offices cannot target their resources in the most effective manner without better criteria from WHD’s national office on how to target their enforcement efforts, including information from WHD’s investigations database on previous child labor investigations and from BLS and NIOSH on the industries in which children work and those in which they are most likely to be injured or killed. 
Other major contributors to this report are Wendy Ahmed, Amy E. Buck, Beverly A. Crawford, Charla R. Gilbert, Julian P. Klazkin, Ellen L. Soltow, and Corinna A. Nicolaou.
In 2001, almost 40 percent of all 16- and 17-year-olds in the United States and many 14- and 15-year-olds worked at some time in the year. Children in the United States are often encouraged to work, and many people believe that children benefit from early work experiences by developing independence, confidence, and responsibility. However, the public also wants to ensure that the work experiences of young people enhance, rather than harm, their future opportunities. The number and characteristics of working children have changed little over the past decade. According to Bureau of Labor Statistics data, as in 1990, as many as 3.7 million children aged 15 to 17 worked in 2001. The number of children who die each year from work-related injuries has changed little since 1992, but the number of children who incurred nonfatal injuries while working is more difficult to determine because data from different sources provide different estimates of the number of injuries and trends over time. The Department of Labor devotes many resources to ensuring compliance with the child labor provisions of the Fair Labor Standards Act, including conducting nationwide campaigns designed to increase public awareness of the provisions, but its compliance efforts suffer from limitations that may prevent adequate enforcement of the law.
In preparation for the 2010 Census, the address canvassing operation was tested as part of the 2008 Dress Rehearsal. From May 7 to June 25, 2007, the Bureau conducted its address canvassing operation for its 2008 Dress Rehearsal in selected localities in California (see fig. 1) and North Carolina (see fig. 2). The 2008 Census Dress Rehearsal took place in San Joaquin County, California, and nine counties in the Fayetteville, North Carolina, area. According to the Bureau, the dress rehearsal sites provided a comprehensive environment for demonstrating and refining planned 2010 Census operations and activities, such as the use of HHCs equipped with Global Positioning System (GPS). Prior to Census Day, Bureau listers perform the address canvassing operation, during which they verify the addresses of all housing units. Address canvassing is a field operation to help build a complete and accurate address list. The Bureau’s Master Address File (MAF) is intended to be a complete and current list of all addresses and locations where people live or potentially live. The Topographically Integrated Geographic Encoding and Referencing (TIGER®) database is a mapping system that identifies all visible geographic features, such as type and location of streets, housing units, rivers, and railroads. Consequently, MAF/TIGER® provides a complete and accurate address list (the cornerstone of a successful census) because it identifies all living quarters that are to receive a census questionnaire and serves as the control mechanism for following up with households that do not respond. If the address list is inaccurate, people can be missed, counted more than once, or included in the wrong location(s). Generally, during address canvassing, census listers go door to door verifying and correcting addresses for all households and street features contained on decennial maps. 
The address listers add to the 2010 Census address list any additional addresses they find and make other needed corrections to the 2010 Census address list and maps using GPS-equipped HHCs. Listers are instructed to compare what they discover on the ground to what is displayed on their HHC. As part of the 2004 and 2006 Census Tests, the Bureau produced a prototype of the HHC that would allow the Bureau to automate operations and eliminate the need to print millions of paper questionnaires, address registers, and maps used by temporary listers to conduct address canvassing and non-response follow-up, as well as to allow listers to electronically submit their time and expense information. The HHCs for these tests were off-the-shelf computers purchased and programmed by the Bureau. While the Bureau was largely testing the feasibility of using HHCs for collecting data, it encountered a number of technical problems. The following are some of the problems we observed during the 2004 and 2006 tests: slowness and frequent lock-ups, slow or unsuccessful transmissions, and difficulty in linking a mapspot to addresses for multi-unit structures. For the 2008 Dress Rehearsal and the 2010 Census, the Bureau awarded the development of the hardware and software for an HHC to a contractor. In March 2006, the Bureau awarded a 5-year, $595,667,000 contract to support the FDCA project. The FDCA project includes the development of HHCs, and Bureau officials stated that the HHCs would ultimately increase efficiency and reduce costs for the 2010 Census. According to the Director of the Census Bureau, the FDCA program was designed to supply the information technology infrastructure, support services, hardware, and software to support a network for almost 500 local offices and for HHCs that will be used across the country. 
He also indicated that FDCA can be thought of as being made up of three fundamental components: (1) automated data collection using handheld devices to conduct address canvassing, and to collect data during the non-response follow-up of those households that do not return the census form; (2) the Operations Control System (OCS) that tracks and manages decennial census workflow in the field; and (3) census operations infrastructure, which provides office automation and support for regional and local census offices. The 2008 Dress Rehearsal Address Canvassing operation marked the first time the contractor-built HHCs and the operations control system were used in the field. In 2006, we reported that not using the contractor-built HHCs until 2008 Dress Rehearsal Address Canvassing would leave little time to develop, test, and incorporate refinements to the HHCs in preparation for the 2010 Census. We also reported that because the Bureau-developed HHC had performance problems, the introduction of a new HHC added another level of risk to the success of the 2010 Census. For the 2008 Dress Rehearsal, the FDCA contractor developed the hardware and software used in census offices and on the HHCs. See figure 3 for more details. The HHC included several applications that varied depending on the role of the user: software enabling listers to complete their time and expense electronically; text messaging software enabling listers to communicate via text message; software enabling staff to review all work assigned to them and enabling crew leaders to make assignments; software enabling staff to perform address canvassing; and an instrument enabling quality control listers to perform quality assurance tasks. The dress rehearsal address canvassing started May 7, 2007, and ended June 25, 2007, as planned. 
The Bureau reported in its 2008 Census Dress Rehearsal Address Canvassing Assessment Report that it was able to use the HHC to collect address information for 98.7 percent of the housing units visited and map information for 97.4 percent of the housing units visited. There were 630,334 records extracted from the Bureau’s address and mapping database and sent to the Bureau’s address canvassing operation and 574,606 valid records following the operation. Mapspots (mapping coordinates) were collected for each structure that the Bureau defined as a Housing Unit, Other Living Quarters, or Uninhabitable. Each single-family structure received its own mapspot, while multi-unit structures shared a single mapspot for all the living quarters within that structure. According to the Bureau’s 2008 Dress Rehearsal Address Canvassing Assessment Report, the address canvassing operation successfully collected GPS mapspot coordinates in the appropriate block for approximately 92 percent of valid structures; most of the remaining 8 percent of cases had a manual coordinate that was used as the mapspot. It is not clear whether this represents acceptable performance because the Bureau did not set thresholds for what it expected during the address canvassing dress rehearsal. Listers experienced multiple problems using the HHCs. For example, we observed, and the listers told us, that they experienced slow and inconsistent data transmissions from the HHCs to the central data processing center. The listers reported that the device was slow to process addresses that were part of a large assignment area. Bureau staff reported similar problems with the HHCs in observation reports, help desk calls, and debriefing reports. In addition, our analysis of Bureau documentation revealed problems with the HHCs consistent with those we observed in the field: Bureau observation reports revealed that listers most frequently had problems with slow processing of addresses, large assignment areas, and transmission. 
The help desk call log revealed that listers most frequently reported issues with transmission, the device freezing, mapspotting, and large assignment areas. The Bureau’s debriefing reports illustrated the impact of the HHC problems on address canvassing. For example, one participant commented that listers struggled to find solutions to problems and wasted time replacing the devices. Collectively, the observation reports, help desk calls, debriefing reports, and Motion and Time Study raised serious questions about the performance of the HHCs during the address canvassing operation. The Bureau’s 2008 Dress Rehearsal Address Canvassing Assessment Report cited several problems with the HHCs. For example, the Bureau observed the following problems: substantial software delays for assignment areas with over 700 housing units, substantial software delays when linking mapspots at multi-unit structures, unacceptable help desk response times and insufficient answers, which “severely” affected productivity in the field, and inconsistencies with the operations control system that made management of the operation less efficient and effective. The assessment reported that 5,429 address records with completed field work were overwritten during the course of the dress rehearsal address canvassing operation, eliminating the information that had been entered in the field. The Bureau reported that this occurred because of an administrative error that assigned several HHCs the same identification number. Upon discovering the mistake, the FDCA contractor took steps during the dress rehearsal address canvassing operation to ensure that all of the HHC devices deployed for the operation had unique identification numbers. Left uncorrected, this error could have had a greater effect on the accuracy of the Bureau’s master address list during the dress rehearsal. 
The HHCs are used in a mobile computing environment where they upload and download data from the data processing centers using a commercial mobile broadband network. The data processing centers housed telecommunications equipment and the central databases, which were used to communicate with the HHCs and manage the address canvassing operation. The HHCs download data, such as address files, from the data processing centers, and upload data, such as completed work and time and expense forms, to the data processing centers. The communications protocols used by the HHCs were similar to those used on cellular phones to browse Web pages on the Internet or to access electronic mail. For HHCs that were out of the coverage area of the commercial mobile broadband network or otherwise unable to connect to the network, a dial-up capability was available to transfer data to the data processing centers. FDCA contract officials attributed HHC transmission performance problems to this mobile computing environment, specifically: telecommunication and database problems that prevented the HHC from communicating with the data center, extraneous data being transmitted (such as column and row headings), and an unnecessary step in the data transmission process. When problems with the HHC were identified during address canvassing, the contractor downloaded corrected software in five different instances over the 7-week period of the dress rehearsal address canvassing operation. After address canvassing, the Bureau established a review board and worked with its contractor to create task teams to address FDCA performance issues such as (1) transmission problems relating to the mobile computing environment, (2) the amount of data transmitted for large assignment areas, and (3) options for improving HHC performance. One factor that may have contributed to these performance problems was a compressed schedule that did not allow for thorough testing before the dress rehearsal. 
Given the tighter time frames going forward, testing and quickly remedying issues identified in these tests becomes even more important. Productivity results were mixed when Census listers used the HHC for address canvassing activities. A comparison of planned versus reported productivity reveals lister productivity exceeded the Bureau’s target by almost two housing units per hour in rural areas, but missed the target by almost two housing units per hour in urban/suburban areas. Further, the reported productivity for urban/suburban areas was more than 10 percent lower than the target, and this difference will have cost implications for the address canvassing operation. Table 1 shows planned and reported productivity data for urban/suburban and rural areas. While productivity results were mixed, the lower than expected productivity in urban/suburban areas represents a larger problem as urban/suburban areas contain more housing units—and therefore a larger workload. According to the Bureau’s dress rehearsal address canvassing assessment report, HHC problems appear to have negatively affected listers’ productivity. The Bureau’s assessment report concluded that “productivity of listers decreased because of the software problems.” However, the extent of the impact is difficult to measure, as are other factors that may have affected productivity. The effect of decreases in productivity can mean greater costs. The Bureau, in earlier cost estimates, assumed a productivity rate of 25.6 housing units per hour, exceeding both the expected and reported rates for the dress rehearsal. We previously reported that substituting the actual address canvassing productivity for the previously assumed 25.6 units per hour resulted in a $270 million increase in the existing life-cycle cost estimate. The Bureau has made some adjustments to its cost estimates to reflect its experience with the address canvassing dress rehearsal, but could do more to update its cost assumptions. 
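The cost sensitivity of the estimate to lister productivity can be illustrated with a rough back-of-the-envelope sketch. Apart from the 25.6 housing units per hour assumption cited above, every figure below is a hypothetical placeholder, not a Bureau number:

```python
# Illustrative sketch of how lister productivity feeds into an address
# canvassing labor cost estimate. Only the 25.6 housing units per hour
# figure comes from the report; the workload and wage are hypothetical.

def canvassing_labor_cost(housing_units, units_per_hour, hourly_rate):
    """Estimated labor cost: total lister hours times the hourly rate."""
    hours_needed = housing_units / units_per_hour
    return hours_needed * hourly_rate

WORKLOAD = 145_000_000   # hypothetical nationwide housing-unit workload
WAGE = 15.0              # hypothetical average hourly lister wage

assumed = canvassing_labor_cost(WORKLOAD, 25.6, WAGE)   # earlier assumption
observed = canvassing_labor_cost(WORKLOAD, 20.0, WAGE)  # hypothetical lower rate

# Lower productivity means more lister hours, and therefore higher cost.
print(f"Cost increase: ${observed - assumed:,.0f}")
```

Because cost scales inversely with productivity, even a few housing units per hour below the assumed rate compounds into a large increase across a national workload.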
We recommended in our prior report that the Bureau update these cost assumptions. The Bureau took some steps to collect data, but did not fully evaluate the performance of the HHCs. For instance, the contractor provided the Bureau with data such as average transmission times collected from transmission logs on the HHC, as required in the contract. But the Bureau has not used these data to analyze the full range of transmission times, nor how transmission performance may have changed over the course of the operation. Without this information, the magnitude of the handheld computers’ performance issues throughout the dress rehearsal was not clear. Also, the Bureau had few benchmarks (the levels of performance it expected to attain) to help evaluate the performance of HHCs throughout the operation. For example, the Bureau has not developed an acceptable level of performance for the total number of failed transmissions or the average connection speed. Additionally, the contractor and the Bureau did not use the dashboard specified in the contract for dress rehearsal activities. Since the dress rehearsal, the Bureau has specified certain performance requirements that should be reported on a daily, weekly, or monthly basis, or on an exception basis. In assessing an “in-house built” model of the HHC, we recommended in 2005 that the Bureau establish specific quantifiable measures in such areas as productivity that would allow it to determine whether the HHCs were operating at a level sufficient to help the Bureau achieve cost savings and productivity increases. Further, our work in the area of managing for results has found that federal agencies can use performance information, such as that described above, to make various types of management decisions to improve programs and results. For example, performance information can be used to identify problems in existing programs, identify the causes of problems, develop corrective actions, plan, identify priorities, and make resource allocation decisions. 
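Performance metrics of the kind discussed above (totals of successful and failed transmissions, and the average and full range of transmission times) could be derived from an HHC transmission log along these lines. This is a minimal sketch: the record layout and field names are assumed for illustration, not taken from the FDCA contract.

```python
# Minimal sketch of summarizing HHC transmission-log records.
# The record format is assumed; the actual FDCA log fields (date, time,
# user, destination, content/data type, outcome status) may differ.

from statistics import mean

log = [  # hypothetical transmission-log entries
    {"user": "lister01", "duration_sec": 42.0, "outcome": "success"},
    {"user": "lister01", "duration_sec": 310.0, "outcome": "failed"},
    {"user": "lister02", "duration_sec": 58.5, "outcome": "success"},
    {"user": "lister03", "duration_sec": 12.0, "outcome": "success"},
]

successes = [r for r in log if r["outcome"] == "success"]
failures = [r for r in log if r["outcome"] == "failed"]
durations = [r["duration_sec"] for r in successes]

# An average alone can hide outliers; the minimum-to-maximum range
# shows the full spread of transmission times, as the report suggests.
summary = {
    "total_success": len(successes),
    "total_failed": len(failures),
    "avg_duration_sec": round(mean(durations), 1),
    "range_sec": (min(durations), max(durations)),
}
print(summary)
```

A daily rollup of such a summary is the kind of information a dashboard could display to flag transmission problems while an operation is still under way.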
Managers can also use performance information to identify more effective approaches to program implementation. The Bureau had planned to collect certain information on operational aspects of HHC use, but did not specify how it would measure HHC performance. Specifically, sections of the FDCA contract require the HHCs to have a transmission log with what was transmitted, the date, time, user, destination, content/data type, and outcome status. In the weeks leading up to the January 16, 2008, requirements delivery, Bureau officials drafted a document titled “FDCA Performance Reporting Requirements,” which included an array of indicators such as average HHC transmission duration, total number of successful HHC transmissions, total number of failed HHC transmissions, and average HHC connection speed. Such measures may be helpful to the Bureau in evaluating its address canvassing operations. While these measures provide certain useful information, they only cover a few dimensions of performance. For example, to better understand transmission time performance, it is important to include analyses that provide information on the range of transmission times. The original FDCA contract also requires that the contractor provide near real-time reporting and monitoring of performance metrics on a “control panel/dashboard” application to visually report those metrics from any Internet-enabled PC. Such real-time reporting would help the Bureau and contractor identify problems during the operation, giving them the opportunity to quickly make corrections. However, the “control panel/dashboard” application was not used during the dress rehearsal. The Bureau explained that it needed to use the dress rehearsal to identify what data or analysis would be most useful to include on the dashboard it expects to use for address canvassing in 2009. In January and February 2008, the Bureau began to make progress in identifying the metrics that will be used in the dashboard. 
According to Bureau officials, the dashboard will include a subset of measures from the “FDCA Performance Reporting Requirements,” such as average HHC transmission time and the total numbers of successful and failed HHC transmissions, which would be reported on a daily basis. Between April 28, 2008, and May 1, 2008, the Bureau and its contractor outlined the proposed reporting requirements for the dashboard. The Bureau indicated that the dashboard will be tested during the systems testing phase, which is currently scheduled for November and December 2008. Bureau officials did not specify whether the dashboard will be used in the operational field test of address canvassing, which is the last chance for the Bureau to exercise the software applications under Census-like conditions. The dress rehearsal address canvassing study assessment plan outlines the data the Bureau planned to use in evaluating the use of the HHC, but these data do not allow the Bureau to completely evaluate the magnitude of performance problems. The plan calls for using data such as the number of HHCs shipped to local census offices, the number of defective HHCs, the number of HHCs broken during the dress rehearsal address canvassing operation, the number checked in at the end of the operation, whether deployment affected the ability of staff to complete assignments, software/hardware problems reported through the help desk, the amount of time listers lost due to hardware or software malfunctions, and problems with transmissions. The plan also called for the collection of functional performance data on the HHCs, such as the ability to collect mapspots. Despite reporting on the data outlined in the study plan, the Bureau’s evaluation does not appear to cover all relevant circumstances associated with the use of the HHC. For example, the Bureau does not measure instances in which listers attempt transmissions that the mobile computing environment does not recognize. 
Additionally, the Bureau’s evaluation does not provide conclusive information about the total amount of downtime listers experienced when using the HHC. For example, in the Bureau’s final 2008 Census Dress Rehearsal Address Canvassing Assessment Report, the Bureau cites its Motion and Time Study as reporting observed lister time lost due to hardware or software malfunctions as 2.5 percent in the Fayetteville and 1.8 percent in the San Joaquin County dress rehearsal locations. The report also notes that the basis for these figures includes neither the downtime between the onset of an HHC error and the last or successful resolution attempt nor the amount of time a lister spent unable to work due to an HHC error. These times were excluded because they were not within the scope of the Motion and Time Study of address canvassing tasks. However, evaluating the full effect of HHC problems should entail accounting for the amount of time listers spend resolving HHC errors or are not engaged in address canvassing tasks due to HHC errors. Because of the performance problems observed with HHCs during the 2008 Dress Rehearsal, and because of the Bureau’s subsequent redesign decision to use the HHCs for the actual address canvassing operation, HHC use will have significant implications for the 2010 Address Canvassing operation. In his April 9, 2008, congressional testimony, the Bureau’s Director outlined next steps that included developing an integrated schedule for address canvassing and testing. On May 22, 2008, the Bureau issued this integrated schedule, which identifies activities that need to be accomplished for the decennial census and milestones for completing tasks. However, the milestones for preparing for address canvassing are very tight and in one case overlap the onset of address canvassing. 
Specifically, the schedule indicates that the testing and integrating of HHCs will begin in December 2008 and be completed in late March 2009; however, the deployment of the HHCs for address canvassing will actually start in February 2009, before the completion of testing and integration. It is uncertain whether the testing and integration milestones will permit modification to technology or operations prior to the onset of operations. Separately, on June 6, 2008, the Bureau produced a testing plan for the address canvassing operation. This testing plan includes a limited operational field test of address canvassing; however, the plan does not specify that the dashboard described earlier will be used in this test. The address canvassing testing plan is a high-level plan that describes a partial redo of the dress rehearsal to validate certain functionality and represents a reasonable approach. However, it does not specify the basis for readiness of the FDCA solution for address canvassing and when and how this determination will occur—when the Bureau would say that the contractor’s solution meets its operational needs. Field staff reported problems with HHCs when working in large assignment areas during address canvassing. According to Bureau officials, the devices could not accommodate more than 720 addresses, and 3 percent of dress rehearsal assignment areas were larger than that. The amount of data transmitted and used slowed down the HHCs significantly. In a June 2008 congressional briefing, Bureau officials indicated that, once other HHC technology issues are resolved, the number of addresses the HHCs can accommodate may increase or decrease from the current 720. Identification of these problems caused the contractor to create a task team to examine the issues, and this team recommended improving the end-to-end performance of the mobile solution by controlling the size of assignment area data delivered to the HHC for address canvassing. 
One specific recommendation was limiting the size of assignment areas to 200 total addresses. However, the redesign effort took another approach and decided that the Bureau will use laptops and software used in other demographic surveys to collect information in large blocks (assignment areas comprise one or more blocks). Specifically, the collection of information in large blocks (those with over 700 housing units) will be accomplished using existing systems and software known as the Demographic Area Address Listing (DAAL) and the Automated Listing and Mapping Instrument (ALMI). Prior to the start of the address canvassing operation, blocks known to have more than 700 housing units will be removed from the scope of the FDCA solution. These blocks will be flagged in the data delivered to the contractor and will not be included in the address canvassing operation. Because this plan creates dual-track operations, Bureau officials stated that differences exist in the content of the extracts and that they are currently working to identify those differences and determine how to handle them. Additionally, they said that testing of the large block solution is expected to occur throughout various phases of the testing for address canvassing and will include performance testing, interface testing, and field testing. The costs for a help desk that can support listers during address canvassing were underestimated during planning and have increased greatly. Originally, the costs for the help desk were estimated at approximately $36 million, but current estimates have the cost rising as high as $217 million. The increased costs are meant to increase the efficiency and responsiveness of the help desk so that listers do not experience the kind of delays in getting help that they did during the address canvassing dress rehearsal. 
For example, the Bureau’s final assessment of dress rehearsal address canvassing indicated that unacceptable help desk response times and insufficient answers severely affected productivity in the field. Field staff told us that help desk resources were unavailable on the weekends and that they had difficulty getting help. The increased costs cited above are due in part to improvements to the help desk, such as expanded availability and increased staffing. Lower than expected productivity has cost implications. In fact, the Bureau is beginning to recognize part of this expected cost increase. Specifically, the Bureau expects to update its assumptions for the number of hours listers may work in a given week. The model assumes 27.5 hours per week, but the Bureau now expects this to be 18. This will make it necessary to hire more listers and, therefore, procure more HHCs. The Bureau adjusted its assumptions based on its experience in the dress rehearsal. Our related report recommends updating assumptions and cost estimates. The dress rehearsal represents a critical stage in preparing for the 2010 Census. This is the time when Congress and others should have the information they need to know how well the design for 2010 is likely to work, what risks remain, and how those risks will be mitigated. We have highlighted some of the risks facing the Bureau in preparing for its first major field operation of the 2010 Census—address canvassing. Going forward, it will be important for the Bureau to specify how it will ensure that this operation will be successfully carried out. If the solutions do not resolve the HHC technology issues, the Bureau will not achieve its productivity targets, and decennial costs will continue to rise. Without specifying the basis and time frame for determining the readiness of the FDCA address canvassing solution, the Bureau will not have the needed assurance that the HHCs will meet its operational needs. 
Such testing is especially critical for changes to operations that were not part of the address canvassing dress rehearsal. For example, because data collection in large blocks will be conducted in parallel with the address canvassing operation, and the Bureau is currently working to identify the differences in the content of the resulting extracts, it is important that this dual-track be tested to ensure it will function as planned. Furthermore, without benchmarks defining successful performance of the technology, the Bureau and stakeholders will be less able to reliably assess how well the technology worked during address canvassing. Although the Bureau field tested the HHCs in its dress rehearsal last year, it did not then have in place a dashboard for monitoring field operations. The Bureau’s proposal for a limited field operations test this fall provides the last opportunity to use such a dashboard in census-like conditions. To be most effective, test results, assessments, and new plans need to be completed in a timely fashion, and they must be shared with those with oversight authority as soon as they are completed. To ensure that the Bureau addresses key challenges facing its implementation of the address canvassing operation for the 2010 Census, we recommend that the Secretary of Commerce direct the Bureau to take the following four actions: Specify the basis for determining the readiness of the FDCA solution for address canvassing and when and how this determination will occur— when the Bureau would say that the contractor’s solution meets its operational needs. Specify how data collection in large blocks will be conducted in parallel with the address canvassing operation, and how this dual-track will be tested in order to ensure it will function as planned. Specify the benchmarks for measures used to evaluate the HHC performance during address canvassing. 
Use the dashboard to monitor performance of the HHCs in the operational field test of address canvassing. The Secretary of Commerce provided written comments on a draft of this report on July 25, 2008. The comments are reprinted in appendix II. Commerce had no substantive disagreements with our conclusions and recommendations and cited actions it is taking to address the challenges GAO identified. Commerce offered revised language for one recommendation, which we have accepted. Commerce also provided technical corrections, which we incorporated. Specifically, we revised our recommendation that the Bureau “Specify the basis for acceptance of the FDCA solution for address canvassing and when that acceptance will occur—when the Bureau would say it meets its operational needs and accepts it from the contractor” to “Specify the basis for determining the readiness of the FDCA solution for address canvassing and when and how this determination will occur—when the Bureau would say that the contractor’s solution meets its operational needs.” Also, after further discussion with Bureau officials, we provided more specific measures of address and map information successfully collected. We revised our discussion of the 2004 and 2006 census tests to make clear that the HHC prototype was used only for non-response follow-up in the 2004 test. Finally, we revised our language on the Bureau’s decision to contract out the development of HHC hardware and software to address the Bureau’s concerns about how we characterized the timing of its decision. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to other interested congressional committees, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others upon request. 
This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact Mathew J. Scirè at (202) 512-6806 or [email protected], or David A. Powner at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives for this report were to analyze U.S. Census Bureau (Bureau) and contractor data showing how handheld computers (HHC) operated and their implications for operations, and examine implications the redesign may have on plans for address canvassing in the 2010 Census. To determine how well the HHC worked in collecting and transmitting address and mapping data, and what data the Bureau and contractor used in assessing HHC performance during address canvassing, we examined Bureau documents, observed HHCs in use, and interviewed Bureau and contractor officials. For example, we reviewed Census Bureau memos that outline the data on HHC performance the Bureau planned to collect. We reviewed the Field Data Collection Automation (FDCA) contract, focusing specifically on what performance specifications and requirements were included in the contract. We observed HHC use during dress rehearsal address canvassing, and interviewed Bureau officials and contractor officials about HHC use and performance during the dress rehearsal of address canvassing. Specifically, we observed five different listers over the course of 2 days in the Fayetteville, North Carolina, dress rehearsal site and six different listers over 3 days in the San Joaquin County, California, dress rehearsal site.
We also analyzed data on HHC use, including data on HHC functionality/usability, HHC log data, the Bureau’s Motion and Time Study, the Bureau’s 2008 Dress Rehearsal assessments, observational and debriefing reports, a log of help desk tickets, and lessons-learned documents. Additionally, we interviewed knowledgeable Bureau and contractor officials. We did not independently verify the accuracy and completeness of the data either input into or produced by the operation of the HHCs. To better understand how HHC performance affected worker productivity, we attended the dress rehearsal address canvassing training for listers, interviewed Bureau officials about HHC performance, and examined data provided in the Bureau’s Motion and Time Study and other sources related to predicted and reported productivity. In addition, we identified and analyzed how factors related to HHC performance affected address canvassing productivity. We examined the Bureau’s Motion and Time Study results, conducted checks for internal consistency within the reported results, and met with Bureau officials to obtain additional information about the methodology used. The results reported in the study are estimates based on a non-random sample of field staff observed over the course of the address canvassing operation. Within the context of developing estimates for the time it takes address listers to perform address canvassing tasks and successfully resolve certain HHC problems, we determined that these data were sufficiently reliable for the purposes of our analysis. However, the study’s methodology did not encompass a full accounting of the time field staff spent on the job, nor did the report explain how some results attributed to the Motion and Time Study were derived.
We also compared the Bureau’s expected productivity rates to productivity rates reported to us by the Bureau in response to our request for actual productivity data from the 2008 Dress Rehearsal Address Canvassing operation. After analyzing the Bureau’s productivity data, we requested information about how the productivity data figures were calculated in order to assess their reliability. In reviewing documentation on the methodology and data, we identified issues that raise concerns. The Bureau acknowledged that data for all address field staff were not included in its analysis. Even though the productivity figures reported to us and presented in this report are generally in line with the range of productivity figures shown in the Bureau’s Motion and Time Study, the missing data, along with the Bureau’s lack of response to some of our questions about calculations of productivity figures, limit the reliability of these data. We determined that they are adequate for purposes of this report in that they provide a rough estimate of field worker productivity, but are not sufficiently reliable to be characterized as a definitive representation of the actual productivity experienced in the 2008 Dress Rehearsal Address Canvassing operation. To ascertain the implications the redesign may have on plans for address canvassing in the 2010 Census, we observed meetings with officials of the Bureau, Commerce, Office of Management and Budget, and the contractor who were working on the FDCA redesign at Bureau headquarters. We also met with the Director of the Census Bureau and analyzed key Department of Commerce, Bureau, and contractor documents, including the 2010 Census Risk Reduction Task Force Report and a program update provided by the contractor (as well as new and clarified requirements). The Bureau is in the process of revising some of its plans for conducting address canvassing and had not finalized those plans prior to the completion of this audit.
We conducted this performance audit from April 2007 to July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact names above, Assistant Director Signora May, Stephen Ander, Thomas Beall, Jeffrey DeMarco, Richard Hung, Barbara Lancaster, Andrea Levine, Amanda Miller, Niti Tandon, Lisa Pearson, Cynthia Scott, Timothy Wexler, and Katherine Wulff made key contributions to this report.
The U.S. Census Bureau (Bureau) had planned to rely heavily on automation in conducting the 2010 Census, including using handheld computers (HHC) to verify addresses. Citing concerns about escalating costs, in March 2008 the Secretary of Commerce announced a redesign of the key automation effort. GAO was asked to (1) analyze Bureau and contractor data showing how HHCs operated and their impact on operations, and (2) examine implications the redesign may have on plans for address canvassing in the 2010 Census. GAO reviewed Bureau and contractor data, evaluations, and other documents on HHC performance and staff productivity; interviewed Bureau and contractor officials; and visited the two dress rehearsal sites to observe and document the use of the HHCs in the field. Census and contractor data highlight problems field staff (listers) experienced using HHCs during the address canvassing dress rehearsal operation in 2007. Help desk logs, for example, revealed that listers most frequently reported issues with transmission, the device freezing, mapspotting (collecting mapping coordinates), and difficulties working with large blocks. When problems were identified, the contractor downloaded corrected software to the HHCs. Nonetheless, help desk resources were inadequate. The Bureau acknowledged that issues with the use of technology affected field staff productivity. After address canvassing, the Bureau established a review board and worked with its contractor to create task teams to analyze and address Field Data Collection Automation (FDCA) performance issues. Although the Bureau recognized that technology issues affected operations, and the contractor produced data on average transmission times, the Bureau and its contractor did not fully assess the magnitude of key measures of HHC performance. GAO previously recommended the Bureau establish specific quantifiable measures in such areas as productivity and performance. 
Also, the FDCA contract calls for the contractor to provide near real-time monitoring of performance metrics through a "dashboard" application. This application was not used during the census dress rehearsal. The Bureau has developed a preliminary list of metrics to be included in the dashboard such as daily measures on average transmission duration and number of failed transmissions, but has few benchmarks for expected performance. For example, the Bureau has not developed an acceptable level of performance on total number of failed transmissions or average connection speed. Technology issues and the Bureau's efforts to redesign FDCA have significant implications for address canvassing. Among these are ensuring that FDCA solutions for technical issues identified in the dress rehearsal are tested, the help desk adequately supports field staff, and a solution for conducting address canvassing in large blocks is tested. In June 2008, the Bureau developed a testing plan that includes a limited operational field test, but the plan does not specify the basis for determining the readiness of the FDCA solution for address canvassing and when and how this determination will occur.
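The benchmark checks the Bureau lacks can be illustrated with a short sketch. The metric names below mirror the daily measures named in the report (number of failed transmissions, average transmission duration), but the threshold values are invented assumptions for this illustration, not figures from the Bureau or its contractor.

```python
# Hypothetical dashboard benchmark check; threshold values are assumptions
# made up for this sketch, not Bureau benchmarks.
BENCHMARKS = {
    "failed_transmissions": 50,    # assumed maximum acceptable per day
    "avg_transmission_secs": 120,  # assumed maximum acceptable average
}

def flag_metrics(daily_metrics):
    """Return each metric whose value exceeds its expected-performance benchmark."""
    return {name: value
            for name, value in daily_metrics.items()
            if name in BENCHMARKS and value > BENCHMARKS[name]}

flags = flag_metrics({"failed_transmissions": 75, "avg_transmission_secs": 90})
```

Without benchmark values of this kind, a dashboard can display a metric but cannot tell managers whether the observed performance is acceptable.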
From fiscal years 2002 through 2007, several changes occurred that affected refuge management, including changes in funding and staffing levels, refuge system policy initiatives, and the influence of external factors, such as extreme weather and human development. Fluctuations in refuge funding. Inflation-adjusted funding (in 2002 dollars) for core refuge system activities—measured as obligations for refuge operations, maintenance, and fire management—peaked in fiscal year 2003, the year of the refuge system’s centennial celebration, at about $391 million, 6.8 percent above fiscal year 2002 levels. It then declined quickly to 4.7 percent below peak levels by fiscal year 2005 before recovering to 2.3 percent below peak levels in fiscal year 2007, ending the period 4.3 percent above fiscal year 2002 levels. In nominal dollars, core funding increased each year over the time period, from about $366 million in fiscal year 2002 to about $468 million in fiscal year 2007. Inflation-adjusted core funding at individual refuges varied considerably during the time period, with about as many losing funding as gaining it since fiscal year 2002. Specifically, from fiscal year 2002 through fiscal year 2007, core inflation-adjusted funding decreased for 96 of 222 complexes and stand-alone refuges and increased for 92, with funding remaining about the same for 34. The magnitude of the changes in core funding at the refuge level was also more pronounced than for the trend overall. Specifically, core funding for 39 complexes and stand-alone refuges decreased by more than 25 percent during this time period. Fluctuations in staffing levels. Staffing levels for core refuge activities (core staffing), as measured by full-time equivalents (FTE) the refuge system actually used, peaked one year later than core inflation-adjusted funding and then declined more slowly.
Specifically, core staffing, which includes operations, maintenance, and fire management, peaked in fiscal year 2004 at a level 10.0 percent higher than in fiscal year 2002, but declined after that to 4.0 percent below peak staffing levels in fiscal year 2007. This level, however, was still 5.5 percent higher than the staffing level in fiscal year 2002. While operations and maintenance FTEs increased 3.6 percent overall during our study period, they ended the period down 6.9 percent from their 2004 peak. Fire management FTEs, on the other hand, increased 14.3 percent over fiscal year 2002 levels. Similar to FTEs, the number of employees on board in refuge system positions also declined after peaking in fiscal year 2004. Through fiscal year 2007, nearly 375 employees were lost from the refuge system’s peak staffing levels, a reduction of 8.4 percent over this period. About three-quarters of this loss came through a reduction in permanent employees (a 7.5 percent reduction), which refuge managers and regional and headquarters officials told us are a key measure of the effective strength of the workforce available to conduct core refuge activities because they represent employees on board indefinitely. Though 38 complexes and stand-alone refuges increased their permanent staff by more than 5 percent since 2004, more than three times as many lost at least 5 percent. Figure 1 compares the trends in the refuge system’s core funding, staffing, and permanent employee levels during our study period. New policy initiatives.
Several new refuge system policy initiatives were implemented during this period: Recognizing that funding declines after 2003 were exacerbating an already high proportion of staff costs in refuge budgets, regional offices began to (1) reduce staff positions through attrition and by further consolidating some stand-alone refuges into complexes, and (2) categorize refuges into three tiers for the purpose of prioritizing funding and staffing allocations among refuges. These measures are primarily responsible for the decline in FTEs and permanent employees from fiscal year 2004 peak levels and the shifts in staffing among complexes and stand-alone refuges. Recognizing that the refuge system was not on pace to meet a mandate in the National Wildlife Refuge System Improvement Act of 1997 to complete conservation plans for each refuge by 2012, refuge system officials created a completion schedule and, beginning in 2004, began requiring staff at refuges to turn their attention to completing the plans. While refuge officials believe that they can meet the deadline, current information shows that some plans are behind schedule. To help spread visitor service funds across as many refuges as possible, refuge officials began placing a greater emphasis on constructing smaller visitor facility structures, such as informational kiosks and restrooms, at a larger number of refuges rather than constructing a smaller number of traditional visitor centers. To improve safety and address other concerns, refuge system management began an initiative to increase the number of full-time law enforcement officers and their associated training and experience requirements. However, refuge officials told us that they need to hire about 200 additional officers in order to reach the minimum number needed to provide adequate protection to refuge resources and visitors. 
Various refuge system, FWS, and Interior policies increased requirements on nonadministrative staff to enter additional data into certain systems and respond to numerous data calls. Refuge system officials are beginning to implement changes to reduce some of these administrative burdens. Increasing external factors. The influence of external factors—those outside the control of the refuge system that complicate refuges’ abilities to protect and restore habitat quality, including extreme weather and development on adjacent lands—increased over this period. For example, refuge managers reported that between fiscal years 2002 and 2007, the influence of development—such as the expansion of urban areas and the conversion of off-refuge land near refuges to agriculture or industrial use—increased around refuges and contributed to refuge habitat problems for almost one-half of the refuges. Such development can pollute refuge lands and waters and make it more difficult to maintain viable, interconnected habitat in and around a refuge’s borders. From fiscal years 2002 through 2007, several changes occurred in refuges’ habitat management and visitor services, creating concerns about the refuges’ abilities to maintain high quality habitat and visitor services in the future. Habitat management. Habitats on refuges for five types of key species—waterfowl, other migratory birds, threatened and endangered species, candidate threatened and endangered species, and state species of concern—improved about twice as often as they worsened between fiscal years 2002 and 2007 (see table 1). Refuge managers reported habitats for several types of key species to be of high quality two to nearly seven times as often as of low quality in 2007 (see table 2). Habitat quality is determined by the availability of several key components, including fresh water, food sources, and nesting cover, among other things, and the absence of habitat problems, such as invasive species.
High quality habitat generally provides adequate amounts of each of these main habitat components and is not significantly affected by habitat problems, while low quality habitat generally lacks these components and may have significant problems; moderate quality habitat has a mixture of these attributes. Complicating habitat management is growing pressure from increasing habitat problems occurring on refuges and the influence of external factors. Our survey found that invasive plant species and habitat fragmentation—the disruption of natural habitat corridors, often caused by human development activities—were the leading problems, affecting 55 percent and 44 percent of refuges, respectively, and both were increasing on more than half of refuges. Managers at refuges close to urban centers showed us busy roads adjacent to their refuges that have cut off natural habitat corridors, forcing animals either to risk crossing the roads or to remain isolated from other members of their species, which can lead to genetic homogeneity and inbreeding. Managers of more rural refuges talked about increasing pressures to convert lands to agricultural uses, citing factors such as the increasing price of corn, or to industrial uses, such as oil and gas development. At the same time, refuge managers reported increasing the time spent on a number of key habitat management activities on many refuges between fiscal years 2002 and 2007 (see table 3). Importantly, time spent on developing comprehensive conservation plans, which are required by the Improvement Act, increased for 59 percent of refuges during our study period. In addition, refuges that increased the time spent on habitat management activities were about three times more likely to report that habitat quality for waterfowl and other migratory birds improved rather than worsened.
In light of increasing problems and threats affecting refuge conditions, as well as recent funding and staffing constraints, refuge managers and regional and headquarters officials expressed concern about refuges’ abilities to sustain or improve current habitat conditions for wildlife into the future. Even though our survey showed that a large number of refuges increased staff time on habitat management activities, some refuge managers we interviewed explained that staff were simply working longer hours to get the work done. Several refuge managers repeatedly indicated that despite growing habitat problems, an increasing administrative workload, and reduced staffing, they are still trying to do everything possible to maintain adequate habitat, especially habitats for key species, such as waterfowl, other migratory birds, and threatened and endangered species. Several managers said that attention to key habitats is the last thing that will stop receiving management attention in the event of declining funding. Several managers even said that they have to limit the amount of time staff spend at the refuge, as these employees are working overtime without extra pay. Visitor services. Our survey found that the quality of all six wildlife- dependent visitor services was stable or improving between fiscal years 2002 and 2007, according to the vast majority of refuge managers responding to our survey. Most notably, environmental education and interpretation programs showed the largest percentage of refuges reporting improvement, although these programs also showed the largest percentage reporting declines as well, as compared to other visitor services (see table 4). Our survey found that four of the six key visitor services provided to the public were of moderate or better quality at most refuges in 2007, but environmental education and interpretation were reported to be low quality at about one-third of refuges (see table 5). 
Managers told us that education and interpretation are among the most resource-intensive visitor service programs and, for this reason, are often among the first areas to be cut when a refuge faces competing demands. A major factor influencing the quality of visitor services—beyond the abundance of fish and wildlife populations—is the amount and quality of refuge infrastructure and the availability of supplies. For example, the availability of trails and tour routes is essential to providing the public with access to what refuges have to offer and is generally important for supporting any type of visitor service activity. Hunting and fishing depend largely on physical structures such as duck blinds, boat launches, and fishing platforms. Providing wildlife observation and photography opportunities simply requires adequate access to the refuge, but can be enhanced through observation platforms and photography blinds. Environmental education depends on physical infrastructure, such as classrooms, and supplies, such as workbooks, handouts, and microscopes. Environmental interpretation also depends on physical infrastructure such as informational kiosks and interpretive signs along trails. Some refuges reported that they expanded their visitor services infrastructure between fiscal years 2002 and 2007, for example, by adding informational kiosks and trails and tour routes, yet more than one-half of refuges reported no change (see table 6). Most refuges also reported that the quality of their visitor services infrastructure stayed about the same or increased since 2002. Time spent by refuges on visitor services varied considerably throughout the system. Overall, at least one in five refuges reported a decrease in staff time for each visitor service area (see table 7).
Refuge managers indicated that staffing changes and a lack of resources for increasing and maintaining infrastructure raise concerns about their ability to provide quality visitor services into the future. Our survey results showed that the time spent by permanent staff on visitor services had been reduced at more than one-third of refuges, and more than half of refuge managers reported increasing their reliance on volunteers to help manage visitor centers and deliver education programs, for example. Refuge managers are also concerned about the impact that the increasing administrative workload incurred by non-administrative refuge staff is having on the refuges’ ability to deliver visitor services. Refuge managers and regional and headquarters officials expressed concern about the long-term implications of declining and low quality visitor services. Many refuge managers cited the importance of ensuring that the public has positive outdoor experiences on refuges and providing them with meaningful educational and interpretive services. Managers said that the availability of visitor services is a way to get young people interested in future careers with the refuge system and instill in children an appreciation for wildlife and the outdoors as well as an interest in maintaining these resources. In addition, visitor services are important for developing and maintaining community relationships, as the refuge system is increasingly turning toward partnerships with private landowners and other agencies and organizations to maintain and improve ecosystems both on and around wildlife refuges. In conclusion, maintaining the refuge system as envisioned in law—where the biological integrity, diversity, and environmental health of the refuge system are maintained; priority visitor services are provided; and the strategic growth of the system is continued—may be difficult in light of continuing federal fiscal constraints and an ever-expanding list of challenges facing refuges.
While some refuges have high quality habitat and visitor service programs and others have seen improvements since 2002, refuge managers are concerned about their ability to sustain high quality refuge conditions and continue to improve conditions where needed because of expected continuing increases in external threats and habitat problems affecting refuges. Already, FWS has had to make trade-offs among refuges with regard to which habitats will be monitored and maintained, which visitor services will be offered, and which refuges will receive adequate law enforcement coverage. FWS’s efforts to prioritize its use of funding and staff through workforce planning have restored some balance between refuge budgets and their associated staff costs. However, if threats and problems afflicting refuges continue to grow as expected, it will be important for the refuge system to monitor how these shifts in resources are affecting refuge conditions. Madam Chair, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Trish McClure, Assistant Director; Mark Braza; David Brown; Stephen Cleary; Timothy J. Guinane; Carol Henn; Richard Johnson; Michael Krafve; Alison O’Neill; George Quinn, Jr.; and Stephanie Toby made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Wildlife Refuge System, which is administered by the Fish and Wildlife Service in the Department of the Interior, comprises 585 refuges on more than 96 million acres of land and water that preserve habitat for waterfowl and other migratory birds, threatened and endangered species, and other wildlife. Refuges also provide wildlife-related activities such as hunting and fishing to about 40 million visitors every year. GAO was asked to testify on a report that is being released today, Wildlife Refuges: Changes in Funding, Staffing, and Other Factors Create Concerns about Future Sustainability (GAO-08-797), which (1) describes changing factors that the refuge system experienced from fiscal years 2002 through 2007, including funding and staffing changes, and (2) examines how habitat management and visitor services changed during this period. For this report, GAO surveyed all refuges, visited 19 refuges in four regions, and interviewed refuge, regional, and national officials. In its September 2008 report, GAO reports that for fiscal years 2002 through 2007, the refuge system experienced funding and staffing fluctuations, the introduction of several new policy initiatives, and the increased influence of external factors such as extreme weather that threaten wildlife habitat and visitor infrastructure. Although core funding--measured as obligations for refuge operations, maintenance, and fire management--increased each year, inflation-adjusted core funding peaked in fiscal year 2003 at about $391 million--6.8 percent above fiscal year 2002 funding. By fiscal year 2007, inflation-adjusted core funding was 2.3 percent below peak levels but 4.3 percent above fiscal year 2002 levels. Core refuge staffing levels peaked in fiscal year 2004 at 3,610 full-time equivalents--10.0 percent above the fiscal year 2002 level--and then declined more slowly than funding.
By fiscal year 2007, staffing levels fell to 4.0 percent below peak levels, but 5.5 percent above fiscal year 2002 levels. Through fiscal year 2007, the number of permanent employees utilized by the refuge system declined to 7.5 percent below peak levels. During this period, refuge system officials initiated new policies that: (1) reduced staff positions and reconsidered how they allocate funds and staff among refuges in order to better align staff levels with funding, (2) required refuge staff to focus on a legislative mandate to complete refuge conservation plans by 2012, (3) shifted to constructing a larger number of smaller visitor structures, such as informational kiosks, and fewer large visitor centers to spread visitor service funds across more refuges, (4) increased the number of full-time law enforcement officers and their associated training requirements, and (5) resulted in additional administrative work. During this period, external factors, such as severe storms, that complicate refuge staffs' ability to protect and restore habitat quality also increased. GAO's survey of refuge managers showed that changes in habitat management and visitor service programs varied across refuges during the study period. Habitat conditions for key types of species improved about two times more often than they worsened, but between 7 and 20 percent of habitats were of poor quality in 2007. Certain habitat problems increased at more than half of refuges during this period, and managers reported that they increased the time spent on certain habitat management activities, such as addressing invasive plants, despite declining staffing levels. However, several managers GAO interviewed said that staff were working longer hours without extra pay to get work done, and managers expressed concern about their ability to sustain habitat conditions.
While the quality of four key visitor service programs was reported to be stable or improving between fiscal years 2002 and 2007 at the vast majority of refuges, the other two key programs--environmental education and interpretation--were considered poor quality at one-third of refuges in 2007. Changes in the time spent on visitor services varied considerably across refuges, and managers noted that visitor services are generally cut before habitat management activities when resources are limited. Managers are concerned about their ability to provide high quality visitor services in the future given staffing and funding constraints.
The amount of insurance coverage available to homeowners under the NFIP is limited by requirements set forth in statute and regulation. As a result of these limitations, insurance payments to claimants for flood damage may not cover all the costs of repairing or replacing flood-damaged property. For example, there is a $250,000 statutory ceiling on the amount of flood insurance homeowners can purchase for the building structure and a $100,000 ceiling on the amount they can purchase for certain personal property. Thus, homes that might sustain more than $250,000 in damage cannot be insured to their full replacement cost. In addition to the statutory limitations on coverage amounts, Congress also gave FEMA broad authority to issue regulations establishing “the general terms and conditions of insurability,” including the classes, types, and locations of properties that are eligible for flood insurance; the nature and limits of loss that may be covered; the classification, limitation, and rejection of any risks that FEMA considers advisable; and the amount of appropriate loss deductibles. Pursuant to this delegation of authority, FEMA has issued regulations, including a “Standard Flood Insurance Policy” (SFIP), that further delineate the scope of coverage. All flood insurance made available under the NFIP is subject to the express terms and conditions of the statute and regulations, including the standard policy. The Federal Insurance Administrator within FEMA is charged with interpreting the scope of coverage under the standard policy. In addition, NFIP policies cover only direct physical loss by or from flood. Therefore, losses resulting primarily from a preexisting structural weakness in a home or prior water damage, and losses resulting from events other than flood, such as windstorms or earth movements, are not covered by the NFIP.
Personal property is covered, with certain limitations, only if the homeowner has purchased separate NFIP personal property insurance in addition to coverage for the building. In addition, the method of settling losses affects the amount recovered. For example, homes that qualify only for an actual cash value settlement—which represents the cost to replace damaged property, less the value of physical depreciation—would presumably receive payments that are less than homes that qualify for a replacement cost settlement, which does not deduct for depreciation. Finally, the amount recoverable under the SFIP is limited to the amount that exceeds the applicable deductible. Our report discusses the limitations on coverage and recoverable losses in greater detail. About 40 FEMA employees, assisted by about 170 contractor employees, are responsible for managing the NFIP. Management responsibilities include establishing and updating NFIP regulations, administering the National Flood Insurance Fund, analyzing data to actuarially determine flood insurance rates and premiums, and offering training to insurance agents and adjusters. In addition, FEMA and its program contractor are responsible for monitoring and overseeing the quality of the performance of the write-your-own companies to assure that the NFIP is administered properly. To meet its monitoring and oversight responsibilities, FEMA is to conduct periodic operational reviews of the 95 private insurance companies that participate in the NFIP. In addition, FEMA’s program contractor is to check the accuracy of claims settlements by doing quality assurance reinspections of a sample of claims adjustments for every flood event. 
For operational reviews, FEMA examiners are to do a thorough review of the companies’ NFIP underwriting and claims settlement processes and internal controls, including checking a sample of claims and underwriting files to determine, for example, whether a violation of policy has occurred, whether an incorrect payment has been made, and whether files contain all required documentation. Separately, FEMA’s program contractor is responsible for conducting quality assurance reinspections of a sample of claims adjustments for specific flood events in order to identify, for example, whether an insurer allowed an uncovered expense or missed a covered expense in the original adjustment. Operational reviews of flood insurance companies participating in the NFIP that are conducted by FEMA staff are FEMA’s primary internal control mechanism for monitoring, identifying, and resolving problems related to how insurers sell and renew NFIP policies and adjust claims. For all aspects of operational reviews, the examiners are to determine whether files are maintained in good order, whether current forms are used, and whether staff has a proficient knowledge of requirements and procedures to properly underwrite and process flood claims. Examiners are also to look at internal controls in place at each company. When problems are identified, examiners are to classify the severity of the errors. Each file reviewed is to be classified as satisfactory or unsatisfactory. Unsatisfactory files contain either a critical error (e.g., a violation of policy or an incorrect payment) or three non-critical errors (e.g., violations of procedures that did not delay actions or claims). Write-your-own companies whose unsatisfactory files make up 20 percent or more of the total number of files reviewed in a specific underwriting or claims operational review always receive an unsatisfactory designation. 
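The file-classification and company-designation rules described above amount to a simple decision procedure. The following sketch illustrates that logic; the function and variable names are ours, not FEMA's, and the thresholds are taken directly from the rules as stated:

```python
def classify_file(critical_errors, noncritical_errors):
    """Classify one reviewed file: a file is unsatisfactory if it
    contains any critical error (e.g., a violation of policy or an
    incorrect payment) or three or more non-critical errors."""
    if critical_errors >= 1 or noncritical_errors >= 3:
        return "unsatisfactory"
    return "satisfactory"


def company_designation(reviewed_files):
    """A write-your-own company whose unsatisfactory files reach
    20 percent or more of the files reviewed receives an
    unsatisfactory designation.  Each file is a (critical,
    non-critical) error-count pair."""
    unsat = sum(
        1 for crit, noncrit in reviewed_files
        if classify_file(crit, noncrit) == "unsatisfactory"
    )
    rate = unsat / len(reviewed_files)
    return "unsatisfactory" if rate >= 0.20 else "satisfactory"
```

For example, a review of ten files in which two contain critical errors yields exactly the 20 percent error rate that triggers an unsatisfactory designation.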
In such cases, FEMA requires that the company develop an action plan to correct the problems identified and is to schedule a follow-up review in 6 months to determine whether progress has been made. The operational reviews and follow-up visits to insurance companies that we analyzed during 2005 followed FEMA’s internal control procedures for identifying and resolving specific problems that may occur in individual insurance companies’ processes for selling and renewing NFIP policies and adjusting claims. According to information provided by FEMA, operational reviews completed between 2000 and August 2005 were done at a pace that allows for a review of each participating insurance company at least once every 3 years, as FEMA procedures require. In addition, the processes FEMA had in place for operational reviews and quality assurance reinspections of claims adjustments met our internal control standard for monitoring federal programs. In addition to operational reviews done by FEMA staff, FEMA’s program contractor conducts quality assurance reinspections of claims for specific flood events. The program contractor employs nine general adjusters who conduct quality assurance reinspections of a sample of open claims for each flood event. Procedures for the general adjusters to follow are outlined in FEMA’s Write Your Own Financial Control Plan. According to the general adjusters we interviewed, in addition to preparing written reports of each reinspection, general adjusters discuss the results of the reinspections they perform with officials of write-your-own companies that process the claims. If a general adjuster determines that the insurance company allowed an expense that should not have been covered, the company is to reimburse the NFIP. Conversely, if a general adjuster finds that the private-sector adjuster missed a covered expense in the original adjustment, the general adjuster is to take steps to provide additional payment to the policyholder. 
An instructor at an adjuster refresher training session, while observing that adjusters had performed very well overall during the 2004 hurricane season, cited several errors that he had identified in reinspections of claims, including improper room dimension measurements and improper allocation of costs caused by wind damage (covered by homeowners’ policies) versus costs caused by flood damage. In addition, the instructor identified as a problem poor communication with homeowners on the processes followed to inspect the homeowner’s property and settle the claim. Overall error rates for write-your-own companies are monitored. Procedures require additional monitoring, training, or other action if error rates exceed 3 percent. According to the general adjusters we interviewed and FEMA’s program contractor, quality assurance reinspections are forwarded from general adjusters to the program contractor, where results of reinspections are to be aggregated in a reinspection database as a method of providing for broad-based oversight of the NFIP as its services are delivered by the write-your-own companies, adjusting firms, and independent flood adjusters. The samples of claims files that FEMA selected for operational reviews and the samples of adjustments that its program contractor selected for reinspections were not randomly chosen or statistically representative of all claims. We found that the selection processes used were, instead, based upon judgmental criteria including, among other items, the size and location of loss and the complexity of claims. 
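The distinction between the judgmental selection described above and a statistically valid approach can be illustrated with a minimal random-sampling sketch. This is our own illustration, not FEMA's procedure: with a simple random sample, every claim has an equal chance of selection, which is what allows reinspection results to be projected to the full population of claims for a flood event.

```python
import random


def draw_random_sample(claim_ids, sample_size, seed=None):
    """Draw a simple random sample of claim identifiers.  Unlike
    judgmental selection (by size, location, or complexity of loss),
    each claim has an equal probability of being chosen."""
    rng = random.Random(seed)
    return rng.sample(claim_ids, sample_size)


def projected_error_rate(sample_results):
    """Project the population error rate from sampled files,
    where True means an error was found on reinspection."""
    return sum(sample_results) / len(sample_results)
```

A statistically valid design would also attach a margin of error to the projected rate, something a judgmental sample cannot support.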
As a result of limitations in the sampling processes, FEMA cannot project the results of these monitoring and oversight activities to determine the overall accuracy of claims settled for specific flood events or assess the overall performance of insurance companies and their adjusters in fulfilling their responsibilities for the NFIP—actions necessary for FEMA to meet our internal control standard that it have reasonable assurance that program objectives are being achieved and that its operations are effective and efficient. To strengthen and improve FEMA’s monitoring and oversight of the NFIP, we are recommending in today’s report that FEMA use a methodologically valid approach for sampling files selected for operational reviews and quality assurance claims reinspections. As of September 2005, FEMA had not yet fully implemented provisions of the Flood Insurance Reform Act of 2004. Among other things, the act requires FEMA to provide policyholders a flood insurance claims handbook; to establish a regulatory appeals process for claimants; and to establish minimum education and training requirements for insurance agents who sell NFIP policies. The 6-month statutory deadline for implementing these changes was December 30, 2004. In September 2005, FEMA posted a flood insurance claims handbook on its Web site. The handbook contains information on anticipating, filing and appealing a claim through an informal appeals process, which FEMA intends to use pending the establishment of a regulatory appeals process. However, because the handbook does not contain information regarding the appeals process that FEMA is statutorily required to establish through regulation, it does not yet meet statutory requirements. With respect to this appeals process, FEMA has not stated how long rulemaking might take to establish the process by regulation, or how the process might work, such as filing requirements, time frames for considering appeals, and the composition of an appeals board. 
Therefore, it remains unclear how or when FEMA will establish the statutorily required appeals process. With respect to minimum training and education requirements for insurance agents who sell NFIP policies, FEMA published a Federal Register notice on September 1, 2005, which included an outline of training course materials. In the notice, FEMA stated that, rather than establish separate and perhaps duplicative requirements from those that may already be in place in the states, it had chosen to work with the states to implement the NFIP requirements through already established state licensing schemes for insurance agents. The notice did not specify how or when states were to begin implementing the NFIP training and education requirements. Thus, it is too early to tell the extent to which insurance agents will meet FEMA’s minimum standards. FEMA officials said that, because changes to the program could have far-reaching and significant effects on policyholders and the private-sector stakeholders upon whom FEMA relies to implement the program, the agency is taking a measured approach to addressing the changes mandated by Congress. Nonetheless, without plans with milestones for completing its efforts to address the provisions of the act, FEMA cannot hold responsible officials accountable or ensure that statutorily required improvements are in place to assist victims of future flood events. We are recommending in today’s report that FEMA develop documented plans with milestones for implementing requirements of the Flood Insurance Reform Act of 2004 to provide policyholders a flood insurance claims handbook that meets statutory requirements, to establish a regulatory appeals process, and to ensure that flood insurance agents meet minimum NFIP education and training requirements. FEMA did not agree with our recommendations on either its sampling methodology or its implementation of the requirements of the Flood Insurance Reform Act of 2004. 
It noted that its current sampling methodology of selecting a sample based on knowledge of the population to be sampled was more appropriate for identifying problems than the statistically random probability sample we recommended. Although FEMA’s current nonprobability sampling strategy may provide an opportunity to focus on particular areas of risk, it does not provide management with the information needed to assess the overall performance of private insurance companies and adjusters participating in the program—information that FEMA needs to have reasonable assurance that program objectives are being achieved. FEMA also disagreed with our characterization of the extent to which FEMA has met provisions of the Flood Insurance Reform Act of 2004. We believe that our description of those efforts and our recommendations with regard to implementing the Act’s provisions are valid. For example, although FEMA commented that it was offering claimants an informal appeals process in its flood insurance claims handbook, it must establish regulations for this process, and those are not yet complete. To the extent possible, the NFIP is designed to pay operating expenses and flood insurance claims with premiums collected on flood insurance policies rather than with tax dollars. However, as we have reported, the program, by design, is not actuarially sound because Congress authorized subsidized insurance rates to be made available for policies covering some properties to encourage communities to join the program. As a result, the program does not collect sufficient premium income to build reserves to meet the long-term future expected flood losses. FEMA has statutory authority to borrow funds from the Treasury to keep the NFIP solvent. Until the 2004 hurricane season, FEMA had been generally successful in keeping the NFIP on sound financial footing. 
It had exercised its authority to borrow from the Treasury three times in the last decade when losses were heavy and repaid all funds with interest. As of August 2005, the program had borrowed $300 million to cover more than $1.8 billion in claims from the major disasters of 2004, including Hurricanes Charley, Frances, Ivan, and Jeanne, which hit Florida and other East and Gulf Coast states. The large number of claims arising from Hurricanes Katrina and Rita will require FEMA to borrow heavily from the Treasury, because the NFIP does not have the financial reserves necessary to offset heavy losses in the short term. Following Hurricane Katrina in August 2005, legislation was enacted that increased FEMA’s borrowing authority from $1.5 billion to $3.5 billion through fiscal year 2008. Additional borrowing authority may be needed to pay claims arising from Hurricanes Katrina and Rita. In reauthorizing the NFIP in 2004, Congress noted that “repetitive-loss properties”—those that had resulted in two or more flood insurance claims payments of $1,000 or more over 10 years—constituted a significant drain on the resources of the NFIP. These repetitive loss properties are problematic not only because of their vulnerability to flooding but also because of the costs of repeatedly repairing flood damage. While these properties make up only about 1 percent of the properties insured under the NFIP, they account for 25 to 30 percent of all claims losses. At the time of our March 2004 report on repetitive loss properties, nearly half of all nationwide repetitive loss property insurance payments had been made in Louisiana, Texas, and Florida. According to a recent Congressional Research Service report, as of December 31, 2004, FEMA had identified 11,706 “severe repetitive loss” properties, defined as those with four or more claims or two or three losses that exceeded the insured value of the property. 
Of these 11,706 properties, almost half (49 percent) were in three states—3,208 (27 percent) in Louisiana, 1,573 (13 percent) in Texas, and 1,034 (9 percent) in New Jersey. Just as the destruction caused by the horrendous 2004 and 2005 hurricanes is a driving force for improving the NFIP today, devastating natural disasters in the 1960s were a primary reason for the national interest in creating a federal flood insurance program. In the mid-1960s, Hurricane Betsy and other hurricanes caused extensive damage in the South, and, in 1965, heavy flooding occurred on the upper Mississippi River. In studying insurance alternatives to disaster assistance for people suffering property losses in floods, a flood insurance feasibility study found that premium rates in certain flood-prone areas could be extremely high. As a result, the National Flood Insurance Act of 1968, which created the NFIP, mandated that existing buildings in flood-risk areas would receive subsidies on premiums because these structures were built before the flood risk was known and identified on flood insurance rate maps. Owners of structures built in flood-prone areas on or after the effective date of the first flood insurance rate maps in their areas or after December 31, 1974, would have to pay full actuarial rates. Because many repetitive loss properties were built before either December 31, 1974, or the effective date of the first flood insurance rate maps in their areas, they were eligible for subsidized premium rates under provisions of the National Flood Insurance Act of 1968. The provision of subsidized premiums encouraged communities to participate in the NFIP by adopting and agreeing to enforce state and community floodplain management regulations to reduce future flood damage. In April 2005, FEMA estimated that floodplain management regulations enforced by communities participating in the NFIP have prevented over $1.1 billion annually in flood damage. 
However, some of the properties that had received the initial rate subsidy are still in existence and subject to repetitive flood losses, thus placing a financial strain on the NFIP. For over a decade, FEMA has pursued a variety of strategies to reduce the number of repetitive loss properties in the NFIP. In a 2004 testimony, we noted that congressional proposals had been made to phase out coverage or begin charging full, actuarially based rates for repetitive loss property owners who refuse to accept FEMA’s offer to purchase or mitigate the effect of floods on these buildings. The 2004 Flood Insurance Reform Act created a 5-year pilot program to deal with repetitive-loss properties in the NFIP. In particular, the act authorized FEMA to provide financial assistance to participating states and communities to carry out mitigation activities or to purchase “severe repetitive loss properties.” During the pilot program, policyholders who refuse a mitigation or purchase offer that meets program requirements will be required to pay increased premium rates. In particular, the premium rates for these policyholders would increase by 150 percent following their refusal and by another 150 percent following future claims of more than $1,500. However, the rates charged cannot exceed the applicable actuarial rate. It will be important in future studies of the NFIP to continue to analyze data on progress being made to reduce the inventory of subsidized NFIP repetitive loss properties, how the reduction of this inventory contributes to the financial stability of the program, and whether additional FEMA regulatory steps or congressional actions could contribute to the financial solvency of the NFIP, while meeting commitments made by the authorizing legislation. In 1973 and 1994, Congress enacted requirements for the mandatory purchase of NFIP policies by some property owners in high-risk areas. 
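The pilot program's premium escalation described above is straightforward arithmetic: an increase of 150 percent means the premium becomes 2.5 times its prior amount, capped at the premium implied by the applicable actuarial rate. The sketch below is our own illustration, and the dollar figures in the usage note are hypothetical:

```python
def escalated_premium(current_premium, actuarial_rate_premium, increases=1):
    """Apply one or more 150 percent premium increases (each increase
    makes the premium 2.5 times its prior amount), never exceeding
    the premium implied by the applicable actuarial rate."""
    premium = current_premium
    for _ in range(increases):
        premium = min(premium * 2.5, actuarial_rate_premium)
    return premium
```

For instance, a hypothetical $400 subsidized premium would rise to $1,000 after a refused offer and to $2,500 after a subsequent claim of more than $1,500, unless the actuarial-rate premium is lower, in which case that cap applies.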
From 1968 until the adoption of the Flood Disaster Protection Act of 1973, the purchase of flood insurance was voluntary. However, because voluntary participation in the NFIP was low and many flood victims did not have insurance to repair damage from floods in the early 1970s, the 1973 act required the mandatory purchase of flood insurance to cover some structures in special flood hazard areas of communities participating in the program. Homeowners with mortgages from federally regulated lenders on property in communities identified as being in special flood hazard areas are required to purchase flood insurance on their dwellings for the amount of their outstanding mortgage balance, up to a maximum of $250,000 in coverage for single-family homes. The owners of properties with no mortgages or properties with mortgages held by lenders who are not federally regulated were not, and still are not, required to buy flood insurance, even if the properties are in special flood hazard areas—the areas NFIP flood maps identify as having the highest risk of flooding. FEMA determines flood risk and actuarial ratings on properties through flood insurance rate mapping and other considerations, including the elevation of the lowest floor of the building, the type of building, the number of floors, and whether or not the building has a basement, among other factors. FEMA flood maps designate areas for risk of flooding by zones. For example, areas subject to damage by waves and storm surge are in zones with the highest expectation for flood loss. Between 1973 and 1994, many policyholders continued to find it easy to drop policies, even if the policies were required by lenders. Federal agency lenders and regulators did not appear to strongly enforce the mandatory flood insurance purchase requirements. 
According to a recent Congressional Research Service study, the Midwest flood of 1993 highlighted this problem and reinforced the idea that reforms were needed to compel lender compliance with the requirements of the 1973 Act. In response, Congress passed the National Flood Insurance Reform Act of 1994. Under the 1994 law, if the property owner failed to get the required coverage, lenders were required to purchase flood insurance on their behalf and then bill the property owners. Lenders became subject to civil monetary penalties for not enforcing the mandatory purchase requirement. In June 2002, we reported that the extent to which lenders were enforcing the mandatory purchase requirement was unknown. Officials involved with the flood insurance program developed contrasting viewpoints about whether lenders were complying with the flood insurance purchase requirements primarily because the officials used differing types of data to reach their conclusions. Federal bank regulators and lenders based their belief that lenders were generally complying with the NFIP’s purchase requirements on regulators’ examinations and reviews conducted to monitor and verify lender compliance. In contrast, FEMA officials believed that many lenders frequently were not complying with the requirements, which was an opinion based largely on noncompliance estimates computed from data on mortgages, flood zones, and insurance policies; limited studies on compliance; and anecdotal evidence indicating that insurance was not always in place where required. Neither side, however, was able to substantiate its differing claims with statistically sound data that provide a nationwide perspective on lender compliance. Accurate flood maps that identify the areas at greatest risk of flooding are the foundation of the NFIP. 
Flood maps must be periodically updated to assess and map changes in the boundaries of floodplains that result from community growth, development, erosion, and other factors that affect the boundaries of areas at risk of flooding. FEMA has embarked on a multi-year effort to update the nation’s flood maps at a cost in excess of $1 billion. The maps are principally used by (1) the approximately 20,000 communities participating in the NFIP to adopt and enforce the program’s minimum building standards for new construction within the maps’ identified flood plains; (2) FEMA to develop accurate flood insurance policy rates based on flood risk; and (3) federally regulated mortgage lenders to identify those property owners who are statutorily required to purchase federal flood insurance. Under the NFIP, property owners whose properties are within the designated “100-year floodplain” and have a mortgage from a federally regulated financial institution are required to purchase flood insurance in an amount equal to their outstanding mortgage balance (up to the statutory ceiling of $250,000). FEMA expects that by producing more accurate and accessible digital flood maps, the NFIP and the nation will benefit in three ways. First, communities can use more accurate digital maps to reduce flood risk within floodplains by more effectively regulating development through zoning and building standards. Second, accurate digital maps available on the Internet will facilitate the identification of property owners who are statutorily required to obtain or who would be best served by obtaining flood insurance. Third, accurate and precise data will help national, state, and local officials to accurately locate infrastructure and transportation systems (e.g., power plants, sewage plants, railroads, bridges, and ports) to help mitigate and manage risk for multiple hazards, both natural and man-made. 
Success in updating the nation’s flood maps requires clear standards for map development; the coordinated efforts and shared resources of federal, state, and local governments; and the involvement of key stakeholders who will be expected to use the maps. In developing the new data system to update flood maps across the nation, FEMA’s intent is to develop and incorporate flood risk data that are of a level of specificity and accuracy commensurate with communities’ relative flood risks. Not every community may need the same level of specificity and detail in its new flood maps. However, it is important that FEMA establish standards for the appropriate data and level of analysis required to develop maps for all communities of a similar risk level. In its November 2004 Multi-Year Flood Hazard Identification Plan, FEMA discussed the varying types of data collection and analysis techniques the agency plans to use to develop flood hazard data in order to relate the level of study to the level of risk for each of 3,146 counties. FEMA has developed targets for resource contributions (in-kind as well as dollars) by its state and local partners in updating the nation’s flood maps. At the same time, it has developed plans for reaching out to and including the input of communities and key stakeholders in the development of the new maps. These expanded outreach efforts reflect FEMA’s understanding that it is dependent upon others to achieve the benefits of map modernization. The most immediate challenge for the NFIP is processing the flood insurance claims resulting from Hurricanes Katrina and Rita. FEMA reported, as of October 13, 2005, that it had received 192,809 flood insurance claims and had paid nearly $1.3 billion to settle 7,664 of these claims. The number of claims is more than twice as many as were filed in all of 2004, itself a record year. 
The need for effective communication and consistent and appropriate application of policy provisions will be particularly important in working with anxious policyholders, many of whom have been displaced from their homes. In the longer term, Congress and the NFIP face a complex challenge in assessing potential changes to the program that would improve its financial stability, increase participation in the program by property owners in areas at risk of flooding, reduce the number of repetitive loss properties in the program, and maintain current and accurate flood plain maps. These issues are complex, interrelated, and are likely to involve trade-offs. For example, increasing premiums to better reflect risk may reduce voluntary participation in the program or encourage those who are required to purchase flood insurance to limit their coverage to the minimum required amount (i.e., the amount of their outstanding mortgage balance). This in turn can increase taxpayer exposure for disaster assistance resulting from flooding. There is no “silver bullet” for improving the current structure and operations of the NFIP. It will require sound data and analysis and the cooperation and participation of many stakeholders. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions you and the Committee Members may have. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Norman Rabkin at (202) 512-8777 or at [email protected], or William O. Jenkins, Jr. at (202) 512-8757 or at [email protected]. This statement was prepared under the direction of Christopher Keisling. Key contributors were Amy Bernstein, Christine Davis, Deborah Knorr, Denise McCabe, and Margaret Vo. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The disastrous hurricanes that have struck the Gulf Coast and Eastern seaboard in recent years--including Katrina, Rita, Ivan, and Isabel--have focused attention on federal flood management efforts. The National Flood Insurance Program (NFIP), established in 1968, provides property owners with some insurance coverage for flood damage. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security is responsible for managing the NFIP. GAO issued a report earlier this week that was mandated by the Flood Insurance Reform Act of 2004. This testimony discusses findings and recommendations from that report and information from past GAO work. Specifically, the testimony discusses (1) the statutory and regulatory limitations on coverage for homeowners under the NFIP; (2) FEMA's role in monitoring and overseeing the NFIP; and (3) the status of FEMA's implementation of provisions of the Flood Insurance Reform Act of 2004. It also offers observations on broader issues facing the NFIP, including its financial structure and updating flood maps. The amount of insurance coverage available to homeowners under the NFIP is limited by requirements set forth in statute and FEMA's implementing regulations, which include FEMA's standard flood insurance policy. As a result of these limitations, insurance payments to claimants for flood damage may not cover all of the costs of repairing or replacing flood-damaged property. For example, homes that could sustain more than $250,000 in damage cannot be insured to their full replacement cost, thus limiting claims to this statutory ceiling. In addition, NFIP policies cover only direct physical loss by or from flood. Therefore, losses resulting primarily from a preexisting structural weakness in a home, or losses resulting from events other than flood such as windstorms, are not covered by NFIP policies. 
To meet its monitoring and oversight responsibilities, FEMA is to conduct periodic operational reviews of the 95 private insurance companies that participate in the NFIP, and FEMA's program contractor is to check the accuracy of claims settlements by doing quality assurance reinspections of a sample of claims adjustments for every flood event. FEMA did not use a statistically valid method for sampling files to be reviewed in these monitoring and oversight activities. As a result, FEMA cannot project the results of these reviews to determine the overall accuracy of claims settled for specific flood events or assess the overall performance of insurance companies and their adjusters in fulfilling responsibilities for the NFIP--actions necessary for FEMA to have reasonable assurance that program objectives are being achieved. FEMA has not yet fully implemented provisions of the Flood Insurance Reform Act of 2004 requiring the agency to provide policyholders with a flood insurance claims handbook that meets statutory requirements, to establish a regulatory appeals process, and to ensure that insurance agents meet minimum education and training requirements. The statutory deadline for implementing these changes was December 30, 2004. Efforts to implement the provisions are under way, but have not yet been completed. FEMA has not developed plans with milestones for assigning accountability and projecting when program improvements will be made, so that improvements are in place to assist victims of future flood events. As GAO has previously reported, the NFIP, by design, is not actuarially sound. The program does not collect sufficient premium income to build reserves to meet long-term future expected flood losses, in part because Congress authorized subsidized insurance rates to be made available for some properties. 
FEMA has generally been successful in keeping the NFIP on a sound financial footing, but the catastrophic flooding events of 2004 (involving 4 major hurricanes) required FEMA, as of August 2005, to borrow $300 million from the U.S. Treasury to help pay an estimated $1.8 billion in flood insurance claims. Following Hurricane Katrina in August 2005, legislation was enacted to increase FEMA's borrowing authority from $1.5 billion to $3.5 billion through fiscal year 2008.
The Internal Revenue Manual (IRM) describes the desired outcome of an income tax audit as the determination of the correct taxable income and tax liability of the person or entity under audit. In making these determinations, the auditor has a responsibility to both the audited taxpayer and all other taxpayers to conduct a quality audit. IRS uses nine audit standards, which have evolved since the 1960s, to define audit quality. These standards address several issues, such as the scope, techniques, technical conclusions, reports, and time management of an audit, as well as workpaper preparation. Each standard has one or more key elements. (See table I.1 in app. I for a list of these standards and their associated key elements.) Workpapers provide documentation on the scope of the audit and the diligence with which it was completed. According to the IRM, audit workpapers (1) assist in planning the audit; (2) record the procedures applied, tests performed, and evidence gathered; (3) provide support for technical conclusions; and (4) provide the basis for review by management. Audit workpapers also provide the principal support for the auditor’s report, which is to be provided to the audited taxpayer, on findings and conclusions about the taxpayer’s correct tax liability. The primary tool used by IRS to control quality under the nine standards is the review of ongoing audit work. This review is the responsibility of IRS’ first-line supervisors, called group managers, who are responsible for the quality of audits done by the auditors they manage. By reviewing audit workpapers during the audit, group managers attempt to identify problems with audit quality and ensure that the problems are corrected. After an audit closes, IRS uses its Examination Quality Measurement System (EQMS) to collect information about the audit process, changes to the process, level of audit quality, and success of any efforts to improve the process and quality. 
EQMS staff are to review audit workpapers and assess the degree to which the auditor complied with the audit standards. To pass a standard, the audit must pass all of the key elements. Our observations about the adequacy of the audit workpapers and supervisory review during audits are based on our work during 1996 and 1997 on IRS’ use of financial status audit techniques. Among other things, this work relied on a random sample of individual tax returns that IRS had audited. This sample excluded audits that were unlikely to use financial status audit techniques because the audit did not look at individual taxpayers’ books and records. Such excluded audits involved those done at service centers and those that only passed through various types of tax adjustments from other activities (e.g., partnership audits and refund claims). This random sample included 354 audits from a population of about 421,000 audits that were opened from October 1994 through October 1995 and closed in fiscal years 1995 or 1996. Each audit covered one or more individual income tax returns. The sample of audits from our previous work focused on the frequency with which IRS auditors used financial status audit techniques, rather than on the adequacy of audit workpapers. Consequently, we did not do the work necessary to estimate the extent to which workpapers met IRS’ workpaper standard for the general population of audits. However, our work did identify several cases in which audit workpapers in our sample did not meet IRS’ workpaper standard. We held follow-up discussions about the workpaper and supervisory review requirements, as well as about our observations, with IRS Examination Division officials. 
On the basis of these discussions, we agreed to check for documentation of group manager involvement by examining employee performance files for nine of our sample audits conducted out of IRS’ Northern California District Office to get a better idea of how the group managers handle their audit inventories and ensure quality. According to IRS officials, these files may contain documentation on case reviews by group managers even though such documentation may not be in the workpapers. We requested comments on a draft of this report from the Commissioner of Internal Revenue. On March 27, 1998, we received written comments from IRS, which are summarized at the end of this letter and are reproduced in appendix II. These comments have been incorporated into the report where appropriate. We did our work at IRS headquarters in Washington, D.C., and at district offices and service centers in Fresno and Oakland, CA; Baltimore, MD; Philadelphia, PA; and Richmond, VA. Our work was done between January and March, 1998, in accordance with generally accepted government auditing standards. One of IRS’ audit standards covers audit workpapers. In general, IRS requires the audit workpapers to support the auditor’s conclusions that were reached during an audit. On the basis of our review of IRS’ audit workpapers, we found that IRS auditors did not always meet the requirements laid out under this workpaper standard. IRS’ workpaper standard requires that workpapers provide the principal support for the auditor’s report and document the procedures applied, tests performed, information obtained, and conclusions reached. 
The five key elements for this workpaper standard involve (1) fully disclosing the audit trail and techniques used; (2) being clear, concise, legible, and organized and ensuring that workpaper documents have been initialed, labeled, dated, and indexed; (3) ensuring that tax adjustments recorded in the workpapers agree with IRS Forms 4318 or 4700 and the audit report; (4) adequately documenting the audit activity records; and (5) appropriately protecting taxpayers’ rights to privacy and confidentiality. The following are examples of some of the problems we found during our review of IRS audit workpapers: Tax adjustments shown in the workpapers, summaries, and reports did not agree. For example, in one audit, the report sent to the taxpayer showed adjustments for dependent exemptions and Schedule A deductions. However, neither the workpaper summary nor the workpapers included these adjustments. In another audit, the workpaper summary showed adjustments of about $25,000 in unreported wages, but the report sent to the taxpayer showed adjustments of only about $9,000 to Schedule C expenses. Required documents or summaries were not always in the workpaper bundle. For example, we found instances of missing or incomplete activity records and missing workpaper summaries. Workpapers that were in the bundle were not always legible or complete. The required information that was missing included the workpaper number, tax year being audited, date of the workpaper, and auditor’s name or initials. Although we were unable to develop estimates of the overall quality of audit workpapers, IRS has historically found problems with the quality of its workpapers. This observation is supported by evaluations conducted as part of IRS’ EQMS, which during the past 6 years (1992-97) indicated that IRS auditors met all of the key elements of the workpaper standard in no more than 72 percent of the audits. 
Table 1 shows the percentage of audits reviewed under EQMS that met all the key elements of the workpaper standard. The success rate, as depicted in table 1, indicates whether all of the key elements within the standard were met. That is, if any one element is not met, the standard is not met. Another indicator of the quality of the audit workpapers is how often each element within a standard meets the criteria of that element. Table I.2 in appendix I shows this rate, which IRS calls the pass rate, for the key elements of the workpaper standard. Workpapers are an important part of the audit effort. They are a tool to use in formulating and documenting the auditor’s findings, conclusions, and recommended adjustments, if any. Workpapers are also used by third-party reviewers as quality control and measurement instruments. Documentation of the auditor’s methodology and support for the recommended tax adjustments are especially important when the taxpayer does not agree with the recommendations. In these cases, the workpapers are to be used to make decisions about how much additional tax is owed by the taxpayer. Inadequate workpapers may result in having the auditor do more work or even in having the recommended adjustment overturned. IRS’ primary quality control mechanism is supervisory review of the audit workpapers to ensure adherence to the audit standards. However, our review of the workpapers in the sampled audits uncovered limited documentation of supervisory review. As a result, the files lacked documentation that IRS group managers reviewed workpapers during the audits to help ensure that the recommended tax adjustments were supported and verified, and that the audits did not unnecessarily burden the audited taxpayers. The IRM requires that group managers review the audit work to assess quality and ensure that audit standards are being met, but it does not indicate how or when such reviews should be conducted. 
However, the IRM does not require that documentation of this review be maintained in the audit files. We found little documentation in the workpapers that group managers reviewed workpapers before sharing the audit results with the taxpayer. In analyzing the sampled audits, we recorded whether the workpapers contained documentation that a supervisor had reviewed the workpapers during the audit. We counted an audit as having documentation of being reviewed if the group manager made notations in the workpapers on the audit findings or results; we also counted audits in which the workpapers made some reference to a discussion with the group manager about the audit findings. On the basis of our analysis of the sampled audits closed during fiscal years 1995 and 1996, we estimated that about 6 percent of the workpapers in the sample population contained documentation of group manager review during the audits. In discussions about our estimate with IRS Examination Division officials, they noted that all unagreed audits (i.e., those audits in which the taxpayers do not agree with the tax adjustments) are to be reviewed by the group managers, and they pointed to the manager’s initials on the notice of deficiency as documentation of this review. We did not count reviews of these notices in our analysis because they occurred after IRS sent the original audit report to the taxpayer. If we assume that workpapers for all unagreed audits were reviewed, our estimate on the percentage of workpapers with documentation of being reviewed increases from 6 percent to about 26 percent. Further, we analyzed all unagreed audits in our sample to see how many had documentation of group manager review during the audit, rather than after the audit results were sent to the taxpayer; this would be the point at which the taxpayer either would agree or disagree with the results. We found documentation of such a review in 12 percent of the unagreed audits. 
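The adjustment from the 6 percent baseline to the roughly 26 percent upper-bound figure is simple sample-proportion arithmetic. A minimal sketch follows; only the sample size of 354 audits comes from the text, while the raw counts below are hypothetical, chosen solely so the resulting proportions round to the 6 and 26 percent figures reported:

```python
# Sample-proportion arithmetic sketched from the discussion above.
# Hypothetical counts: only the sample size (354 audits) appears in the
# report; the two raw counts were picked to match the rounded percentages.

sample_size = 354            # audits in GAO's random sample
documented_review = 21       # hypothetical: audits with documented manager review
unagreed_without_doc = 71    # hypothetical: unagreed audits lacking documentation

# Baseline estimate: share of sampled audits with documented review (~6%)
baseline = documented_review / sample_size

# Adjusted estimate: assume every unagreed audit was in fact reviewed (~26%)
adjusted = (documented_review + unagreed_without_doc) / sample_size

print(f"baseline: {baseline:.0%}, adjusted: {adjusted:.0%}")
```

The point of the adjustment is that it treats every unagreed audit as reviewed, so it is a most-favorable-case bound rather than an observed rate.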
The Examination Division officials also said that a group manager may review the workpapers without documentation of that review being recorded in the workpapers. Further, they said that group managers had limited time to review workpapers due to many other responsibilities. The officials also told us that group managers can be involved with audits through means other than review of the workpapers. They explained that these managers monitor their caseload through various processes, such as evaluations of auditors’ performance during or after an audit closes, monthly discussions with auditors about their inventory of audits, reviews of auditors’ time charges, reviews of audits that have been open the longest, and visits to auditors located outside of the district office. The Examination Division officials also noted that any time the audit is expanded, such as by selecting another of the taxpayer’s returns or adding a related taxpayer or return, this action must be approved by the group manager. According to these officials, these other processes may involve a review of audit workpapers, but not necessarily during the audit. We agreed that we would check for documentation of these other processes in our nine sample audits from IRS’ District Office located in Oakland. We found documentation of workload reviews for one of these nine sample audits. In these monthly workload reviews, supervisors are to monitor time charges to an audit. In one other audit, documentation showed that a special unit within the Examination Division reviewed and made changes to the form used to record data for input into IRS’ closed audits database. However, none of this documentation showed supervisory review of the audit workpapers. 
If any other forms of supervisory involvement with these audits had occurred, the documentation either had been removed from the employee performance file as part of IRS’ standard procedure or was not maintained in a way that we could relate it back to a specific taxpayer. As a result, we do not know how frequently these other processes for supervisory involvement occurred and whether substantive reviews of the audits were part of these processes. IRS is currently drafting changes to the IRM relating to workpapers. In the draft instructions, managers are required to document managerial involvement. This documentation may include signatures, notations in the activity record, or summaries of discussions in the workpapers. When completed, this section is to become part of the IRM’s section on examination of returns. According to an IRS official, comments from IRS’ field offices on the draft changes are not due to headquarters until May 1998. IRS audits tax returns to ensure that taxpayers pay the correct amount of tax. If auditors do quality work, IRS is more likely to meet this goal while minimizing the burden on taxpayers. Quality audits should also encourage taxpayers to comply voluntarily. Supervisory review during the audits is a primary tool in IRS’ efforts to control quality. IRS requires group managers to ensure the quality of the audits, leaving much discretion on the frequency and nature of their reviews during an audit. IRS officials noted that group managers are to review workpapers if taxpayers disagree with the auditor’s report on any recommended taxes. The IRM does not specifically require that all of these supervisory reviews be documented in the workpapers, even though generally accepted government auditing standards do require such documentation. However, recent draft changes to the IRM may address this issue by requiring such documentation. 
We found little documentation of such supervisory reviews, even though these reviews can help to avoid various problems. For example, supervisory review could identify areas that contribute to IRS’ continuing problems in creating audit workpapers that meet its standard for quality. Since fiscal year 1992, the quality of workpapers has been found wanting by IRS’ EQMS. Inadequately documented workpapers raise questions about whether supervisory review is controlling audit quality as intended. These questions cannot be answered conclusively, however, because the amount of supervisory review cannot be determined. The lack of documentation on workpaper review raises questions about the extent of supervisory involvement with the audits. Proposed changes to the IRM’s sections on examination of returns require documentation of management involvement in the audit process. We recommend that the IRS Commissioner require audit supervisors to document their review of audit workpapers as a control over the quality of audits and the associated workpapers. On March 25, 1998, we met with IRS officials to obtain comments on a draft of this report. These officials included the Acting Deputy Chief Compliance Officer, the Assistant Commissioner for Examination and members of his staff, and a representative from IRS’ Office of Legislative Affairs. IRS documented its comments in a March 27, 1998, letter from the IRS Commissioner, which we have reprinted in appendix II. In this letter, IRS agreed to make revisions to the IRM instructions for the purpose of implementing our recommendation by October 1998. The letter included an appendix outlining adoption plans. The IRS letter also expressed two concerns with our draft report. First, IRS said our conclusion about the lack of evidence of supervisory review of audit workpapers was somewhat misleading and pointed to examples of other managerial practices, such as on-the-job visitations, to provide oversight and involvement in cases. 
We do not believe our draft report was misleading. As IRS acknowledges in its letter, when discussing the lack of documentation of supervisory review, we also described these other managerial practices. Second, IRS was concerned that our draft report appeared to consider these other managerial practices insufficient. Our draft report did not discuss the sufficiency of these practices but focused on the lack of documentation of supervisory review, including these other managerial practices. We continue to believe that documentation of supervisory review of workpapers is needed to help ensure quality control over the workpapers and audits. At the March 25, 1998, meeting, IRS provided technical comments to clarify specific sections of the draft report that described IRS processes. IRS officials also discussed the distinction between supervisory review and documentation of that review. We have incorporated these comments into this report where appropriate. We are sending copies of this report to the Subcommittee’s Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Ways and Means Committee and the Senate Committee on Finance, various other congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, the IRS Commissioner, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix III. If you have any questions concerning this report, please contact me at (202) 512-9110. The Office of Compliance Specialization, within the Internal Revenue Service’s (IRS) Examination Division, has responsibility for Quality Measurement Staff operations and the Examination Quality Measurement System (EQMS). Among other uses, IRS uses EQMS to measure the quality of closed audits against nine IRS audit standards. 
The standards address the scope, audit techniques, technical conclusions, workpaper preparation, reports, and time management of an audit. Each standard includes additional key elements describing specific components of a quality audit. Table I.1 summarizes the standards and the associated key elements.

Table I.1: Summary of IRS’ Examination Quality Measurement System Auditing Standards (as of Oct. 1996)

- Measures whether consideration was given to the large, unusual, or questionable items in both the precontact stage and during the course of the examination. This standard encompasses, but is not limited to, the following fundamental considerations: absolute dollar value, relative dollar value, multiyear comparisons, intent to mislead, industry/business practices, compliance impact, and so forth.
- Measures whether the steps taken verified that the proper amount of income was reported. Gross receipts were probed during the course of examination, regardless of whether the taxpayer maintained a double entry set of books. Consideration was given to responses to interview questions, the financial status analysis, tax return information, and the books and records in probing for unreported income.
- Measures whether consideration was given to filing and examination potential of all returns required by the taxpayer, including those entities in the taxpayer’s sphere of influence/responsibility. Required filing checks consist of the analysis of return information and, when warranted, the pick-up of related, prior, and subsequent year returns. In accordance with Internal Revenue Manual 4034, examinations should include checks for filing information returns.
- Measures whether the issues examined were completed to the extent necessary to provide sufficient information to determine substantially correct tax. The depth of the examination was determined through inspection, inquiry, interviews, observation, and analysis of appropriate documents, ledgers, journals, oral testimony, third-party records, etc., to ensure full development of relevant facts concerning the issues of merit. Interviews provided information not available from documents to obtain an understanding of the taxpayer’s financial history, business operations, and accounting records in order to evaluate the accuracy of books/records. Specialists provided expertise to ensure proper development of unique or complex issues.
- Measures whether the conclusions reached were based on a correct application of tax law. This standard includes consideration of applicable law, regulations, court cases, revenue rulings, etc., to support technical/factual conclusions.
- Measures whether applicable penalties were considered and applied correctly. Consideration of the application of appropriate penalties during all examinations is required.
- Measures the documentation of the examination’s audit trail and techniques used. Workpapers provided the principal support for the examiner’s report and documented the procedures applied, tests performed, information obtained, and the conclusions reached in the examination.
- Measures the presentation of the audit findings in terms of content, format, and accuracy. All necessary information is contained in the report, so that there is a clear understanding of the adjustments made and the reasons for those adjustments.
- Measures the utilization of time as it relates to the complete audit process. Time is an essential element of the Auditing Standards and is a proper consideration in analyses of the examination process. The process is considered as a whole and at examination initiation, examination activities, and case closing stages.
IRS uses the key element pass rate as one measure of audit quality. This measure computes the percentage of audits demonstrating the characteristics defined by the key element. According to IRS, the key element pass rate is the most sensitive measurement and is useful when describing how an audit is flawed, establishing a baseline for improvement, and identifying systemic changes. Table I.2 shows the pass rates for the key elements of the workpaper standard for fiscal years 1992 through 1997 for office and field audits. Table I.2: Key Element Pass Rate for EQMS Workpaper Standard for District Audits From Fiscal Years 1992-97 (key element pass rate by fiscal year; fiscal year 1995 is reported separately for October 1994-March 1995 and April 1995-September 1995). Legend: n/a = not applicable. The key element “Disclosure” was added in the middle of fiscal year 1995. Major contributors to this report: Kathleen E. Seymour, Evaluator-in-Charge; Louis G. Roberts, Senior Evaluator; Samuel H. Scrutchins, Senior Data Analyst.
GAO reviewed the condition of the Internal Revenue Service's (IRS) audit workpapers, including the documentation of supervisory review. GAO noted that: (1) during its review of IRS' financial status audits, the workpapers did not always meet the requirements under IRS' workpaper standards; (2) standards not met in some audit workpapers included the expectation that: (a) the amount of tax adjustments recorded in the workpapers would be the same as the adjustment amounts shown in the auditor's workpaper summary and on the report sent to the taxpayer; and (b) the workpaper files would contain all required documents to support conclusions about tax liability that an auditor reached and reported to the taxpayer; (3) these shortcomings with the workpapers are not new; (4) GAO found documentation on supervisory review of workpapers prepared during the audits in an estimated 6 percent of the audits in GAO's sample; (5) in the remaining audits, GAO found no documentation that the group managers reviewed either the support for the tax adjustments or the report communicating such adjustments to the taxpayer; (6) IRS officials indicated that all audits in which the taxpayer does not agree with the recommended adjustments are to be reviewed by the group managers; (7) if done, this review would occur after the report on audit results was sent to the taxpayer; (8) even when GAO counts all such unagreed audits, those with documentation of supervisory review would be an estimated 26 percent of the audits in GAO's sample population; (9) GAO believes that supervisory reviews and documentation of such reviews are important because they are IRS' primary quality control process; (10) proper reviews done during the audit can help ensure that audits minimize burden on taxpayers and that any adjustments to taxpayers' liabilities are supported; (11) although Examination Division officials recognized the need for proper reviews, they said IRS group managers cannot review workpapers for all 
audits because of competing priorities; (12) these officials also said that group managers get involved in the audit process in ways that may not be documented in the workpapers; (13) they stated that these group managers monitor auditors' activities through other processes, such as by reviewing the time that auditors spent on an audit, conducting on-the-job visits, and talking to auditors about their cases and audit inventory; and (14) in these processes, however, the officials said that group managers usually were not reviewing workpapers or validating the calculations used to recommend adjustments before sending the audit results to the taxpayer.
During congressional testimony in early 1996, the Army Chief of Staff requested funds from Congress to speed up the fielding of urgently needed new technologies to the soldier. The Chief of Staff stressed that Congress and the Army could accelerate the development of new technologies by making funds available more quickly than is normally required in the budget process for new programs. The Army proposed WRAP as a tool that would help jump-start technologies that were still under development but nearing the production phase. These new technologies were being tested in Army experiments designed to support a new warfighting concept called Force XXI. Force XXI embodies the Army’s vision of how military operations will be carried out in the 21st century and relies heavily on the fielding in the year 2000 of the 4th Infantry Division, the Army’s first digitized division. The Army selected technologies slated for WRAP funding from those tested in the Task Force XXI Advanced Warfighting Experiment (AWE), completed in March 1997 and carried out to support the first digitized division. Congress added $50 million to the Army’s fiscal year 1997 budget. The money eventually funded the first 11 WRAP initiatives. However, the House Committee on Appropriations, in its report on the fiscal year 1997 defense appropriations bill, expressed concern that WRAP funds might be used for limited fielding of unbudgeted items that had not competed for funds and would not be affordable in future budgets. Therefore, it required notification to the defense committees prior to the obligation of WRAP funds and stipulated that these funds could not be used to field interim Land Warrior prototypes. When it established the program in early 1996, the Army planned to request $100 million per year from fiscal year 1998 to 2003. 
In its guidance for the program, the Army established the condition that these funds could not be used for technologies requiring “indefinite experimentation” and that WRAP candidates must be a compelling experimental success, urgently needed, ready for production within 2 years, and sufficiently funded in the out-years. Technologies requiring “continued experimentation” were to be allowed to receive WRAP funding. According to the Army, these differ from technologies needing indefinite experimentation in that they are not mature but are expected to start production within 2 years. Selected initiatives are funded from the Force XXI Initiatives (WRAP) budget, which is a holding account created expressly for WRAP initiatives. In fiscal year 1998, Congress appropriated $99.9 million for WRAP: $61 million for the second year of the first 11 initiatives and $38.9 million for the first year of new 1998-99 initiatives. However, recent actions taken by the Department of Defense (DOD) and the congressional appropriations committees will affect funding for new initiatives. For example, DOD reduced fiscal year 1998 WRAP funding for 1998-99 initiatives to $8.6 million (see app. I), and the appropriations conference committee reduced fiscal year 1999 WRAP funding by $35 million to $64.5 million. On July 16, 1998, the Army submitted 6 new candidates for funding in fiscal years 1998-99 and 4 new ones for funding in fiscal years 1999-2000 (detailed descriptions of the 21 initiatives and candidates are in app. I). On September 25, 1998, the appropriations conference committee denied funding for two of the four fiscal year 1999-2000 candidates. The Army plans to submit additional fiscal year 1999-2000 candidates by December 1998. The Army is also required by the Senate Armed Services Committee to submit quarterly reports on the status of obligated funds. WRAP has experienced growing pains in its first 2 years. 
While evolving, the program has lacked focus in the selection of initiatives. The assumptions and expectations that drove WRAP at its inception have not been clearly stated. As a result, we were unable to determine whether the results are consistent with congressional intent. However, we found that (1) some initiatives do not support the first digitized division, although the Army initially justified WRAP funding on the basis of the need to urgently field technologies associated with the first digitized division; (2) funds have been used both for production items and development work; and (3) future initiatives may not have sufficient test data for proper evaluation. Furthermore, the Army is still trying to refine its selection process so as to avoid the delays that so far have hindered the program’s implementation. Meanwhile, Congress is not being informed of the program’s progress or of changes in some ongoing initiatives. WRAP criteria for selection of initiatives allow considerable room for interpretation. Therefore, the WRAP initiatives funded so far are quite different from each other. Some initiatives did not meet all the Army’s criteria for WRAP funding, and others will not be fielded with the first digitized division in 2000. They were approved, however, because they fit the general description of urgently needed new technologies that the Army is trying to field as quickly as possible. WRAP funds were also used to purchase production items rather than to develop new technologies. Neither congressional restrictions nor the Army’s criteria specify whether WRAP funds should be used only to support the Army’s first digitized division. However, the Army initially justified WRAP funding on the basis of the urgent need to field technologies associated with the first digitized division, and appropriation of that funding occurred in a strategic environment dominated by development of the first digitized division. 
For example, the Task Force XXI AWE was carried out to support the digitized division, the first 11 WRAP initiatives were tested in the Task Force XXI AWE, the Army’s Training and Doctrine Command (TRADOC) cited support for the first digitized division as the top priority when selecting WRAP candidates, about two thirds of fiscal year 1997 funding was for initiatives that support the first digitized division, and the Army initially placed WRAP funds under the digitization budget before establishing a separate Force XXI initiatives budget. There is disagreement within the Army about whether WRAP should be directly linked to the first digitized division. An Operational Test and Evaluation Command (OPTEC) official believes that WRAP is directly related to digitization, while the Director of the Acquisition Reform Reinvention Lab, Office of the Assistant Secretary of the Army for Research, Development, and Acquisition, believes that WRAP is an acquisition streamlining tool that may or may not support digitization. He views WRAP as part of the Army’s efforts to field needed technologies more rapidly, regardless of their relationship to the digitized division. We found that 3 of the first 11 initiatives, accounting for about one third of all WRAP funds, will not be part of the first digitized division. These initiatives, the Mortar Fire Control System, the Gun Laying and Positioning System, and the Avenger Slew-to-Cue, together received $14.3 million in WRAP funds in fiscal year 1997 and are slated to receive $22.5 million in fiscal year 1998. However, all six of the WRAP candidates submitted for fiscal years 1998-99 funding are considered critical for the first digitized division. Two initiatives, Applique and Tactical Internet, did not meet the Army’s criterion that WRAP candidates be ready for production within 2 years, but as the backbone of the Army’s first digitized division, they were justified on the basis of urgent need. 
Both were approved as continued experimentation initiatives and are not expected to begin production until fiscal year 2004. An OPTEC official told us that other initiatives were clearly closer to fielding but that the Army approved Applique and Tactical Internet because it believed they were worth the expense of additional development work. They received $12.3 million (about 26 percent) of the $47.7 million of fiscal year 1997 WRAP funds and will receive $8.6 million (about 14 percent) of the $61 million of fiscal year 1998 WRAP funds. WRAP funds have also been used to purchase substantial quantities of production items (finished products ready for fielding). The Army allocated $17.6 million of $61 million (about 29 percent) of WRAP funds in fiscal year 1998 to procure production items. For example, the Army will use 1998 WRAP funds to procure 432 Movement Tracking Systems, enough to fully equip 2 Army divisions. WRAP was created to help jump-start new technologies that require developmental work and that must be fielded quickly. But production items by definition do not require further testing or development. Army criteria allow the use of WRAP funds for operational prototypes but do not specify what distinguishes a prototype from a finished production item. In our opinion, using WRAP funds to purchase large quantities of finished products (more than are needed for operational prototypes) is not consistent with the WRAP goal of developing new technologies until they are ready for production. In response to our questions about this issue, the Director of the Acquisition Reform Reinvention Lab told us that the Army now acknowledges that the practice should be discontinued. The Army has not scheduled any AWEs through 1999 to test new technologies. Consequently, it may be forced to rely increasingly on candidates that have not proven themselves through prior testing, require long-term experimentation, or may not be ready to begin production within 2 years. 
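The funding shares quoted above can be verified with simple percentage arithmetic; the `share` helper below is just an illustrative convenience, and the dollar figures are those cited in the report:

```python
# Quick verification of the funding shares quoted above
# (dollar amounts in millions).

def share(part: float, whole: float) -> int:
    """Return part/whole as a percentage rounded to the nearest whole number."""
    return round(part / whole * 100)

# Applique and Tactical Internet: $12.3M of $47.7M FY 1997 funds (~26 percent)
# and $8.6M of $61M FY 1998 funds (~14 percent).
# Production items: $17.6M of $61M FY 1998 funds (~29 percent).
print(share(12.3, 47.7), share(8.6, 61.0), share(17.6, 61.0))
```

Each result matches the approximate percentage given in the text.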
Officials have expressed concern that this approach may eventually lead to candidates that are less developed and take longer to field. Some approved initiatives have not been proven in testing and are less developed. While only 2 of 11 WRAP initiatives in fiscal years 1997-98 were defined as requiring continued experimentation, 3 of 10 candidates in fiscal years 1998-99 fell into this category. OPTEC was the lead evaluator for the Task Force XXI AWE. It evaluated the 72 participating initiatives and prepared ratings for 13 WRAP candidates. However, two of the three new continued experimentation WRAP candidates (Close Combat Tactical Trainer XXI and Global Combat Service Support System-Army) have not provided enough test and experimentation data to allow OPTEC to carry out a thorough evaluation and rating. They were still unrated as of July 1998. On September 25, 1998, the appropriations conference committee denied funding for both candidates. OPTEC may decline to issue a rating if it does not have enough data to conclude that the candidate is a compelling experimental success as required by Army criteria. An OPTEC official said that the Army will find it increasingly difficult to demonstrate such success because it has not scheduled any AWEs or similar large-scale exercises through fiscal year 1999. Without AWEs, he added, it will be difficult to find new candidates at the same level of development and experimental testing as the first group of candidates. He said that evaluation criteria may need to be changed to introduce other ways of qualifying candidates. In our opinion, this could result in more candidates that need continued experimentation. Meanwhile, the Army is trying to fill the gap created by the absence of AWEs. The Director of the Army Acquisition Reform Reinvention Lab said the Army is seeking alternatives to AWEs to expand its pool of WRAP candidates. 
The alternatives could include advanced technology and advanced concept technology demonstrations, concept experimentation programs, and battle lab warfighting experiments. Such candidate technologies could then use WRAP funds to move more quickly through development and into production. However, in our opinion, these demonstrations may involve technologies that require lengthy testing and experimentation. The key to securing timely congressional approval of WRAP candidates is the Army’s ability to finalize its selection early enough in the budget cycle. To date, this has not happened. In requesting the release of fiscal year 1997 funds from DOD, the Army did not initially justify the need for or indicate the ultimate destination of the funds, delaying the start-up and implementation of programs. As a result, approval of WRAP funds was delayed until very late in the fiscal year (see app. II for a description of the Army’s process for WRAP candidate selection). Additionally, funding reductions have also affected implementation. In the end, for most initiatives, WRAP probably will not speed up fielding as much as initially hoped. The 1997 WRAP selection and approval process lasted most of fiscal year 1997. The Army narrowed its list of candidates from 300 to 15 and made its final selection after reviewing the results of a March 1997 Task Force XXI AWE evaluation. The Army did not present the final 11 candidates to Congress until May 30, 1997. But even after the candidates were selected, DOD withheld $47.7 million for several months in fiscal year 1997 because the Army had not clearly stated which programs would receive the funds and how the funds would be used. DOD released $17.5 million of the funds in August 1997 and the remainder in late September 1997. In fiscal year 1998, DOD again withheld funds, saying it wanted to be certain they were needed. As of October 8, 1998, $36.9 million of fiscal year 1998 funds still had not been released. 
The Army has been trying to speed up its selection process in order to receive WRAP funds earlier in the fiscal year, but with little success. The fiscal year 1998 selection process took even longer than it had the previous year and the Army did not present its list of candidates to Congress until July 1998. This time the process was reportedly delayed by continuing debate within the Army over candidates, insufficient test data, and indecision about whether to submit candidates all at once or in batches, as they were selected. The Army has acknowledged the need to start candidate selection earlier. For fiscal year 1999, it plans to convene the next Army Systems Acquisition Review Council in November 1998, 2 months earlier than the previous year, and submit the last batch of 1999 candidates to Congress no later than December 1998. Funding cuts by DOD also affected the program. DOD reprogrammed WRAP funds to other operations, such as the Small Business Innovation Research Program. In fiscal years 1997 and 1998, DOD reprogrammed $2.3 million and $5 million, respectively, from WRAP to other programs. In addition, a June 1998 omnibus reprogramming action further reduced fiscal year 1998 WRAP funds for new initiatives by $27.8 million, leaving funding for new initiatives at $8.6 million. Army Airborne Command and Control System officials estimated that the loss of about $0.6 million of an $11 million WRAP allocation in fiscal year 1998 could delay the program by about 3 months. In another program, officials agreed that even losses as small as $0.2 million can have a negative effect on program plans. Although there have been delays, we believe that many WRAP-funded technologies may be fielded sooner because of the program. The Army initially estimated that 9 of the first 11 WRAP initiatives would accelerate the fielding of new technologies by an average of about 20 months. 
In its justification to Congress, the Army did not provide accelerated fielding estimates for two initiatives. Most estimates were made by the Army before the initiatives were approved and had to be revised because the selection and approval process took too long and funds were not released when planned. According to the latest fielding projections by program officials, six of the nine programs may not save as much time as originally claimed, two may accelerate fielding as originally estimated, and one may actually be ahead of the original fielding estimate (see table 1). Fielding could be postponed further if there are more delays or funding shortfalls. The Army made substantial changes to some WRAP initiatives. These changes prolonged implementation. The Army concluded, for example, that the design of Avenger Slew-to-Cue was deficient and that the technology would become obsolete before it would be fielded. In fiscal year 1997, the Army thus made major changes in the design and acquisition strategy of the program; this led to additional development work and testing. Because of these changes, DOD has been withholding 1998 WRAP funding for the initiative. The Gun Laying and Positioning System also experienced a schedule slippage that will delay fielding. According to the program manager, the slippage will make it necessary to alter funding (for example, by shifting funds from the out-years to underfunded or unfunded years) in order to accelerate fielding. The congressional defense committees were not informed of these developments. The Army is not required to issue progress reports or to notify Congress of changes in ongoing programs. The only formal feedback mechanism is a congressional requirement that the Army submit quarterly funding reports to the Senate Armed Services Committee on the obligation of funds for WRAP initiatives. The Army is also required to provide more frequent reports if WRAP has significant successes or failures. 
To date, the Army has not submitted any of the required reports. After 2 years, there is growing uncertainty about which technologies should receive top priority for WRAP funding. The Army’s criteria for WRAP candidates are open-ended and do not ensure that initiatives share a common set of characteristics. For example, there is disagreement within the Army over whether WRAP and the fielding of the first digitized division should be directly linked. In the absence of more precise selection criteria, disagreements over which candidates are most appropriate for WRAP funding will likely continue. The Army may find it increasingly difficult to identify candidates that are sufficiently developed in the near future because it has reduced large-scale test and experimentation exercises and will thus have less data with which to assess new WRAP candidates. The Army has not presented its slate of WRAP candidates for congressional approval early enough in the fiscal year to permit timely obligation of funds. This has led directly to delays in fielding because estimates were predicated on earlier availability of funds. Although some technologies may be fielded sooner because of WRAP, in most cases the program will not speed up fielding as much as originally expected. The Army is required to report quarterly on the status of funding obligations to the Senate Armed Services Committee. To date, it has not met this requirement, and there is no other requirement for reporting on program performance or status. We believe that Congress is being asked to make funding decisions without all the information it needs. Information presently not provided on a consistent basis includes program cost, schedule, and performance; planned obligations; any significant changes to program acquisition strategy; and any scheduled changes in program digital battlefield participation. 
We recommend that the Secretary of Defense direct the Secretary of the Army to issue WRAP guidance that calls for (1) specific deadlines for candidate identification and selection to ensure timely submission of candidates to Congress and timely obligation of funds, (2) minimum testing and experimentation requirements for WRAP candidates, and (3) periodic reports to Congress on the status of ongoing WRAP initiatives. Given Congress’ 2 years of experience in reviewing Army requests for WRAP funding of specific technologies and the disagreement within the Army about which technologies are most appropriate for WRAP funding, this may be an appropriate time for Congress to clarify its expectations of the program and to ensure that these expectations are embodied in more precise selection criteria for WRAP candidates. In written comments on a draft of this report, DOD partially concurred with our recommendation, but did not specify why its concurrence was not complete. In its response, DOD stated that the Army is continuing to examine potential improvements. DOD indicated that the Army will provide recommendations for improvements by December 1, 1998, to the Office of the Secretary of Defense Overarching Integrated Product Team leaders as part of the Force XXI WRAP program update. DOD also stated that the Army is continuing to examine potential improvements to the WRAP/Force XXI process, including the schedules for candidate identification and selection, the requirements for levels of testing and experimentation tailored to the specific initiative, and the appropriate detail and frequency of reporting. Since WRAP is now in its third year of implementation, we believe it is time for specific remedies to address the issues that have been identified and believe our recommendation addresses these issues. DOD’s comments are reprinted in their entirety in appendix III. To assess the current status of the program, we reviewed the criteria used to identify, evaluate, and select WRAP candidates. 
We interviewed both DOD and Army officials responsible for the WRAP. We visited the Office of the Assistant Secretary of the Army for Research, Development, and Acquisition, Washington, D.C.; TRADOC, Fort Monroe, Virginia; and OPTEC, Alexandria, Virginia. We reviewed congressional funding restrictions and selection criteria as well as the Army’s WRAP policy guidelines, Army Systems Acquisition Review Council briefing packages, and resulting administrative decision memorandums. We discussed budget withholdings, assessments, and reprogramming with officials in the DOD Comptroller’s Office and the Office of the Assistant Secretary of the Army for Research, Development, and Acquisition. With Office of the Assistant Secretary of the Army for Research, Development, and Acquisition and TRADOC’s assistance, we examined in detail the WRAP candidate identification, selection, and approval process. We examined how TRADOC identifies and screens candidates and reviewed Office of the Assistant Secretary of the Army for Research, Development, and Acquisition’s congressional briefings and OPTEC’s rating and evaluation process. We also reviewed WRAP-related documentation, including program management and budget documents, congressional hearings and briefings, and AWE assessments. We also attended the Division AWE at Fort Hood, Texas, and observed WRAP initiatives in the field. We reviewed cost, schedule, and performance documentation at WRAP initiative program offices and reviewed program acquisition plans and schedules. We interviewed appropriate officials, received briefings, and reviewed relevant program documents during visits to the Short-Range Air Defense and Aviation Electronic Combat Project Offices, Redstone Arsenal, Huntsville, Alabama; the Simulation, Training, and Instrumentation Command, Orlando, Florida; and the Armament and Chemical Acquisition and Logistics Activity, Rock Island Arsenal, Rock Island, Illinois. 
We also met with OPTEC officials and reviewed relevant information papers and AWE assessments. We also discussed OPTEC’s initiative rating process, particularly regarding test and experimentation data necessary to support an OPTEC rating. We also discussed how TRADOC and Office of the Assistant Secretary of the Army for Research, Development, and Acquisition officials incorporate ratings in the selection process. We performed our review from September 1997 to October 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other appropriate congressional committees; the Secretaries of Defense and the Army; and the Director, Office of Management and Budget. Copies will also be made available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix IV. Tables I.1 through I.4 show funding for two groups of Warfighting Rapid Acquisition Program (WRAP) initiatives (fiscal years 1997-98 and fiscal years 1998-99) and briefly describe the programs in each group. The initiatives include the Army Airborne Command and Control System, the Combat Synthetic Training Assessment Range, the Gun Laying and Positioning System, and Striker (Scout Common Vehicle). Digital battle command information system that provides on-the-move, almost real-time situation awareness to tactical combat, combat support, and combat service support leaders at individual fighting platforms. An on-the-move node that provides corps, division, and brigade commanders mobility and communications interoperability while maintaining sensor-to-shooter connectivity. Provides a digitized sensor-to-shooter link, enabling the squad leader or gunner to designate a target for engagement. Battle command training system that provides collective training for brigade-sized organizations at Fort Irwin, California, and Fort Hood, Texas. 
In testing, the system provided realistic signal intelligence, unmanned aerial vehicle intelligence/imagery, and joint surveillance target attack radar system intelligence/imagery to the brigade combat team. A tripod-mounted positioning and orienting device consisting of a nondevelopmental item gyroscope, electronic theodolite, position location ground receiver, and a short-range eye-safe laser rangefinder. A man-portable laser designator and target locator with eye-safe range finding, azimuth determination, self-location, and data/image export capability. It can locate targets in day or night with all-weather capability. Integrates mortars into the fire support architecture and provides full field artillery tactical data system compatibility. Consists of a high-mobility multiwheeled vehicle configured as a fire direction center and three subsystems: position navigation, fire control, and situational awareness. This platform is capable of loading and unloading itself and a companion trailer in 5 minutes to allow flexible mission assignment and operation under adverse conditions. It consists of the Palletized Load System platform and the Movement Tracking System (MTS). MTS can identify position, track progress, and communicate with the operators of tactical wheel vehicles. It has global positioning capability, can send base-to-mobile and mobile-to-base messages, and can locate/track an asset’s position using personal computer-based software. Provides asset visibility/in-transit capability to units and managers. The tags are an assemblage of commercial off-the-shelf equipment that store embedded data of container contents, shipments, and vehicle identification. The tags are fixed to containers to track material through the distribution system. Striker (Scout Common Vehicle): a high-mobility multiwheeled vehicle used by combat observation lasing teams. 
The system can self-locate; determine range, azimuth, and vertical angle to a target; designate targets; and enhance day/night observation. It will contain the same Fire Support Team computer mission equipment as the Bradley vehicle. A software enhancement to improve voice-data contention and unit tasking order. Voice-data contention is the ability of the Single Channel Ground and Airborne Radio System radios to synchronize voice and data transmission over the same radio path. Unit tasking order can dynamically task-organize units within the Tactical Internet. Uses a network of computers and communication equipment to provide a joint integrated air picture to battalion, brigade, division, corps, and theater commanders, providing real-time air situational awareness and enhancing air defense-force protection. High-mobility, multiwheeled, vehicle-mounted shelter with digital communication that allows the brigade combat team to integrate, process, and interpret real-time sensor and broadcast reports from remote intelligence data bases via a common ground station and to merge the information with the brigade’s organic reconnaissance. Provides combined arms training for the digitized division’s close combat heavy battalion and units below. Supports the training of mission training plan tasks by the digitized force using all Force XXI C4I systems. Receives, updates, and disseminates digital terrain data to provide both digital and analog tactical decision aids in support of the commanders’ battlefield visualization process. Heavy contact maintenance vehicle that provides forward area battlefield maintenance to mechanized forces. Automated, worldwide, beyond line-of-sight tracking and messaging system used to inject unit location and limited messaging for nondigitized elements into existing and planned automated C2 systems. Links digitized and nondigitized forces. Provides essential video and high-speed data access through mobile subscriber equipment. 
Allows users to move voice, video, and data over the existing communication network. These modules merge data from the Unit Level Logistics Ground System, the Unit Level Logistics System, and the Standard Installation/Division Personnel System into a relational data warehouse based on a client-server system. Provides high data rate communications between tactical operation centers at brigade level and below. Provides digitized training for two-way exchange between tactical command and control system work stations and distributed interactive simulations. The Army’s process for identifying, evaluating, and selecting WRAP candidates involves several organizations and a number of steps that lead candidates from initial identification to final presentation by the Army Chief of Staff to Congress. Key to securing timely congressional approval of WRAP candidates is the Army’s ability to finalize its selection early in the budget cycle. It is important that WRAP candidates be processed promptly, since the success of the program depends on the timely development of technologies determined to be urgently needed by the warfighter. Proposals are initially submitted by the using commands to Training and Doctrine Command’s (TRADOC) Battle Lab Board of Directors. Proposals must include (1) a battle lab experiment plan containing an urgency of need statement, test results, an acquisition strategy, and a budget estimate; (2) an operational requirements statement addressing defense planning guidance, threat, system requirements, and constraints; and (3) an information paper addressing technical merit and maturity, criticality, and priority of the warfighting effort, affordability, effectiveness, and budget sustainability. After the Board reviews the proposals, it forwards them to the TRADOC Commanding General, who approves and prioritizes them and forwards them to the Assistant Secretary of the Army for Research, Development, and Acquisition. 
Further review is then carried out by the Army Systems Acquisition Review Council (ASARC), which is composed of 13 representatives from the Army’s commands, the Office of the Chief of Staff, and the secretariats. The Council is convened by the head of the Acquisition Reform Reinvention Lab (Assistant Secretary of the Army for Research, Development, and Acquisition) on request from the TRADOC Commanding General. In assessing proposals for WRAP funding, the TRADOC Battle Lab Board of Directors ensures that the candidates comply with WRAP criteria. For its part, ASARC examines proposals for urgency of need, requirements, affordability, and experimentation results. When assessing candidates, ASARC relies on information from a number of sources, including the Operational Test and Evaluation Command (OPTEC), which was the lead evaluator of the Task Force XXI Advanced Warfighting Experiment (AWE). OPTEC evaluates candidates and issues its own ratings for consideration by ASARC. The Council reviews the proposals and can recommend approval by the Army Chief of Staff, require further resolution of outstanding issues, or recommend funding from other sources. The Council also approves acquisition and funding strategies and assigns management responsibilities. ASARC forwards its recommendations to the Army Chief of Staff, who presents the final list of candidates for WRAP funding to Congress for approval. Arthur Fine, Evaluator-in-Charge; Joseph Rizzo, Jr., Evaluator.
Pursuant to a congressional request, GAO reviewed the Army's implementation of the Warfighting Rapid Acquisition Program (WRAP), focusing on the current status of the program. GAO noted that: (1) the Army's criteria for selecting WRAP candidates are open-ended and allow room for different interpretations; (2) as a result, although the Army initially justified WRAP funding on the basis of the need to urgently field technologies associated with the first digitized division, not all WRAP initiatives support the first digitized division; (3) furthermore, some initiatives do not meet all the Army's criteria for WRAP funding; (4) the Army is reducing the testing of new technologies through large-scale warfighting experiments; (5) as a result, the Army may need to change the criteria used to evaluate and rate WRAP candidates; (6) this may affect the quality of future candidates; (7) to date, the Army has not been able to finalize its selection of WRAP candidates early enough to ensure timely approval by Congress; (8) as a result, the final approval of funds and the subsequent start-up of initiatives have been delayed; (9) delays also occurred because the Army did not obtain the timely release of WRAP funds from the Department of Defense (DOD) and because DOD reduced funding for WRAP; (10) in spite of these delays, GAO believes that WRAP funds may still help speed the fielding of some new technologies, though not as much as originally estimated; (11) after initial congressional approval of the first 11 WRAP initiatives, the Army made substantial changes to some of them; (12) these changes affected program implementation; and (13) Congress was not informed of the changes because current reporting requirements do not require the Army to report such changes.
We reported in April 2013 that costs increased and schedules were delayed considerably for all four of VA’s largest medical-facility construction projects, when comparing November 2012 construction project data with the cost and schedule estimates first submitted to Congress. Cost increases ranged from 59 percent to 144 percent, representing a total cost increase of nearly $1.5 billion and an average increase of approximately $366 million per project. The schedule delays ranged from 14 to 74 months with an average delay of 35 months per project. Of these four medical-facility construction projects VA had underway, Denver had the highest cost increase and the longest estimated years to complete. We reported that the estimated cost for the Denver project increased from $328 million in June 2004 to $800 million. VA’s initial estimated completion date for the project was February 2014. Subsequently, VA estimated the project would be completed in May 2015. However, in an update provided to Congress in March 2015, VA did not provide an updated completion date. VA provided an update in April 2015 for the total estimated cost and estimated completion date for some of its projects; the data were as of March 2015. [Table data not reproduced here: for each project, the update listed the total estimated cost, the initial and current estimated completion dates, the total estimated years to complete, and the number of months extended.] The column titled “total estimated years to complete” is reported to the nearest quarter year and is calculated from the time VA approved the architecture and engineering firm to the current estimated completion date. We calculated the “number of months extended” column by counting the months from the initial estimated completion date to the current estimated completion date, as reported by VA. According to VA, the initial estimated completion dates are from the initial budget prospectus, which assumed receipt of full construction funding within 1 to 2 years after the budget submission. 
In some cases, construction funding was phased over several years and the final funding was received several years later. Naval Facilities Engineering Command officials we spoke with told us that historically, medical facility projects take approximately 4 years from design to completion. We calculated the percentage change in cost by using the initial total estimated costs and total estimated costs, as reported by VA. The main medical center was completed in April 2012 and patients began utilizing the facility in August of 2012. However, as of March 2015, the final phase of the Las Vegas project to expand the emergency department is projected to be completed in the summer of 2015. For the purpose of our analysis above, we calculated the number of months extended and the total years to complete using the date of June 2015. However, schedule delays would increase if the project was completed later in the summer of 2015. In its March 2015 update, VA did not provide the total estimated cost for the Orlando project. According to VA’s March 2015 update, the New Orleans project has a construction completion date of February 2016, except for Dixie/Research building which will be completed by late 2016. In commenting on a draft of our April 2013 report, VA stated that using the initial completion date from the construction contract would be more accurate than using the initial completion date provided to Congress; however, using the initial completion date from the construction contract would not account for how VA managed these projects before it awarded the construction contract. Cost estimates at this earlier stage should be as accurate and credible as possible because Congress uses these initial estimates to consider authorizations and make appropriations decisions. We used a similar methodology to estimate changes to cost and schedule of construction projects in a previous report issued in 2009 on VA construction projects. 
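The schedule and cost measures described above can be sketched in a few lines. The figures below use the Denver numbers cited earlier ($328 million rising to $800 million; February 2014 initial versus May 2015 revised completion), and the helper functions are our own illustrative rendering of the described methodology, not VA’s or GAO’s actual computation:

```python
# Sketch of the "number of months extended," "total estimated years to
# complete," and percentage-change-in-cost calculations described above.
from datetime import date

def months_between(start: date, end: date) -> int:
    """Count whole months from start to end, as in the 'months extended' column."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def years_to_complete(ae_approved: date, completion: date) -> float:
    """Years from architecture/engineering-firm approval to estimated
    completion, rounded to the nearest quarter year per the report's convention."""
    return round(months_between(ae_approved, completion) / 12 * 4) / 4

def percent_change(initial: float, current: float) -> float:
    """Percentage change from the initial total estimated cost."""
    return (current - initial) / initial * 100

# Denver: initial completion February 2014, revised completion May 2015;
# initial cost estimate $328 million, later estimate $800 million.
denver_months_extended = months_between(date(2014, 2, 1), date(2015, 5, 1))
denver_cost_change = percent_change(328, 800)

print(denver_months_extended)     # months from Feb 2014 to May 2015
print(round(denver_cost_change))  # percent increase in estimated cost
```

The computed cost increase of roughly 144 percent matches the upper end of the 59-to-144-percent range reported for the four projects.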
We believe that the methodology we used in our April 2013 and December 2009 reports on VA construction provides an accurate depiction of how cost and schedules for construction projects can change from the time they are first submitted to Congress. It is at this time that expectations are set among stakeholders, including the veterans' community, for when projects will be completed and at what cost. In our April 2013 report, we made recommendations to VA to help address these cost and schedule delays, which are discussed later in this statement. In our April 2013 report, we identified two primary factors that contributed to cost increases and schedule delays at the Denver facility: (1) decisions to change plans from a shared university/VA medical center to a stand-alone VA medical center and (2) unanticipated events. Decision to change plans from a shared university/VA medical center to a stand-alone VA medical center. VA revised its original plans for shared facilities with a local university to stand-alone facilities after proposals for a shared facility could not be finalized. Plans went through numerous changes after the prospectus was first submitted to Congress in 2004. In 1999, VA officials and the University of Colorado Hospital began discussing the possibility of a shared facility on the former Fitzsimons Army base in Aurora, Colorado. Negotiations continued until late 2004, at which time VA decided against a shared facility with the University of Colorado Hospital because of VA concerns over the governance of a shared facility. In 2005, VA selected an architectural and engineering firm for a stand-alone project, but VA officials told us that the firm's efforts were suspended in 2006 until VA acquired another site at the former Army base adjacent to the new university medical center. Design restarted in 2007 before suspending again in January 2009, when VA reduced the project's scope because of lack of funding.
By this time, the project’s costs had increased by approximately $470 million, and the project’s completion was delayed by 14 months. The cost increases and delays occurred because the costs to construct operating rooms and other specialized sections of the facility were now borne solely by VA, and the change to a stand-alone facility also required extensive redesign. Unanticipated events. VA officials at the Denver project site discovered they needed to eradicate asbestos and replace faulty electrical systems from pre-existing buildings. They also discovered and removed a buried swimming pool and found a mineral-laden underground spring that forced them to continually treat and pump the water from the site, which impacted plans to build an underground parking structure. In our April 2013 report, we found that VA had taken steps to improve its management of major medical-facility construction projects, including creating a construction-management review council. In April 2012, the Secretary of Veterans Affairs established the Construction Review Council to serve as the single point of oversight and performance accountability for the planning, budgeting, executing, and delivering of VA’s real property capital-asset program. The council issued an internal report in November 2012 that contained findings and recommendations that resulted from meetings it held from April to July 2012. The report stated that the challenges identified on a project-by-project basis were not isolated incidents but were indicative of systemic problems facing VA. In our 2013 report we also found that VA had taken steps to implement a new project delivery method—called the Integrated Design and Construction (IDC) method. 
In response to the construction industry's concerns that VA and other federal agencies did not involve the construction contractor early in the design process, VA and the Army Corps of Engineers began working to establish a project delivery model that would allow for earlier contractor involvement in a construction project, as is often done in the private sector. We found in 2013 that VA did not implement IDC early enough in Denver to garner the full benefits. VA officials explained that Denver was initiated as a design-bid-build project and later switched to IDC after the project had already begun. According to VA officials, the IDC method was very popular with industry, and VA wanted to see if this approach would effectively deliver a timely medical facility project. Thus, while the intent of the IDC method is to involve both the project contractor and architectural and engineering firm early in the process to ensure a well-coordinated effort in designing and planning a project, VA did not hire the contractor for Denver until after the initial designs were completed. According to VA, because the contractor was not involved in the design of the projects and formulated its bids based on a design that had not been finalized, these projects required changes that increased costs and led to schedule delays. VA staff responsible for managing the project said it would have been better to maintain the design-bid-build model throughout the entire process rather than changing mid-project because VA did not receive the value of having the contractor's input at the design phase, as the IDC method is supposed to provide. For example, according to Denver VA officials, the architectural design called for curved walls rather than less expensive straight walls along the hospital's main corridor.
The officials said that had the contractor been involved in the design process, the contractor could have helped VA weigh the aesthetic advantages of curved walls against the lower cost of straight walls. Since our April 2013 report was issued, in 2014, the United States Civilian Board of Contract Appeals found that VA materially breached the construction contract with the construction contractor by failing to provide a design that could be built for the contracted amount of $582.8 million. In its decision, one of the Board's findings was that VA did not use the IDC design mechanism properly from the start. The Board noted that when the construction contractor was brought into the project, the architectural engineering design team had been under contract with VA since 2006 and that by 2010, the design was 50 percent complete and funding decisions had already been made. According to the Board, this limited VA's flexibility to make modifications based on the construction contractor's pre-construction advice. The Board also noted that a September 2011 review by the Army Corps of Engineers, commissioned by VA, found that the IDC contract type may not have been appropriate for the Medical Center Replacement in Denver. In that review, the Army Corps of Engineers explained that proceeding from design development to major design milestones prior to the procurement of the IDC contractor did not permit the contractor to integrate with the designer to achieve the benefits related to this contract type. The Army Corps of Engineers concluded that the current methodology appeared to be counterintuitive to the government's ability to achieve best value.
In our April 2013 report, we identified systemic reasons that contributed to overall schedule delays and cost increases, and we recommended that VA take actions to improve its construction management of major medical facilities, including (1) developing guidance on the use of medical equipment planners; (2) sharing information on the roles and responsibilities of VA construction project management staff; and (3) streamlining the change order process. Our recommendations were aimed at addressing issues we identified at one or more of the four sites we visited during our review. VA has implemented our recommendations; however, the impact of these actions may take time to reflect improvements, especially for ongoing construction projects, depending on several issues, including the relationship between VA and the contractor. Since completing our April 2013 report, we have not reviewed the extent to which these actions have affected the four projects, or the extent to which these actions may have helped to avoid the cost overruns and delays that occurred on each specific project. On August 30, 2013, VA issued a policy memorandum providing guidance on the assignment of medical equipment planners to major medical construction projects. The memorandum states that all VA major construction projects involving the procurement of medical equipment to be installed in the construction will retain the services of a Medical Equipment Specialist, to be procured through the project's architectural engineering firm. Prior to issuance of this memorandum, VA officials had emphasized that they needed the flexibility to change their health care processes in response to new technologies, equipment, and advances in medicine. Given the complexity and sometimes rapidly evolving nature of medical technology, many health care organizations employ medical equipment planners to help match the medical equipment needed in the facility to the construction of the facility.
Federal and private sector stakeholders reported that medical equipment planners have helped avoid schedule delays. VA officials told us that they sometimes hire a medical equipment planner as part of the architectural and engineering firm’s services to address medical equipment planning. However, in our April 2013 report we found that for costly and complex facilities, VA did not have guidance for how to involve medical equipment planners during each construction stage of a major hospital and has sometimes relied on local Veterans Health Administration (VHA) staff with limited experience in procuring medical equipment to make medical equipment planning decisions. Thus, we recommended that the Secretary of VA develop and implement agency guidance to assign medical equipment planners to major medical construction projects. As mentioned earlier, in August 2013, VA issued such guidance. In September 2013, in response to our recommendation, VA put procedures in place to communicate to contractors the roles and responsibilities of VA officials who manage major medical facility construction projects, including the change order process. Among these procedures is a Project Management Plan that requires the creation of a communications plan and matrix to assure clear and consistent communications with all parties. Construction of large medical facilities involves numerous staff from multiple VA organizations. Officials from the Office of Construction and Facilities Management (CFM) stated that during the construction process, effective communication is essential and must be continuous and involve an open exchange of information among VA staff and other key stakeholders. However, in our April 2013 report, we found that the roles and responsibilities of CFM and VHA staff were not always well communicated and that it was not always clear to general contracting firms which VA officials hold the authority for making construction decisions. 
This lack of clarity can cause confusion for contractors and architectural and engineering firms, ultimately affecting the relationship between VA and the general contractor. Participants from VA's 2011 industry forum also reported that VA roles and responsibilities for contracting officials were not always clear and made several recommendations to VA to address this issue. Therefore, in our 2013 report, we recommended that VA develop and disseminate procedures for communicating to contractors clearly defined roles and responsibilities of the VA officials who manage major medical-facility projects, particularly those in the change-order process. As discussed earlier in this statement, VA disseminated such procedures in September 2013. On August 29, 2013, VA issued a handbook for construction contract modification (change-order) processing, which includes milestones for completing processing of modifications based on their dollar value. In addition, as of September 2013, VA had also hired four additional attorneys and assigned on-site contracting officers to the New Orleans, Denver, Orlando, Manhattan, and Palo Alto major construction projects to expedite the processing and review of construction contract modifications. By taking steps to streamline the change order process, VA can better ensure that change orders are approved promptly to avoid project delays. Most construction projects require, to varying degrees, changes to the facility design as the project progresses, and organizations typically have a process to initiate and implement these changes through change orders. Federal regulations and agency guidance state that change orders must be made promptly, and agency guidance states in addition that there be sufficient time allotted for the government and contractor to agree on an equitable contract adjustment.
VA officials at the sites we visited as part of our April 2013 review, including Denver, stated that change orders that take more than a month from when they are initiated to when they are approved can result in schedule delays, and officials at two federal agencies that also construct large medical projects told us that it should not take more than a few weeks to a month to issue most change orders. Processing delays may be caused by the difficulty involved in VA and contractors' coming to agreement on the costs of changes and the multiple levels of review required for many of VA's change orders. As discussed earlier, VA has taken steps to streamline the change order process to ensure that change orders are approved promptly to avoid project delays. Chairman Isakson, Ranking Member Blumenthal, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions about this testimony, please contact Mark L. Goldstein at 202-512-2834 or [email protected]. Other key contributors to this testimony include Ed Laughlin (Assistant Director), Nelsie Alcoser, George Depaoli, Raymond Griffith, Hannah Laufe, SaraAnn Moessbauer, and Michael Clements. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA operates one of the nation's largest health care delivery systems. In April 2013, GAO reported that VA was managing the construction of 50 major medical-facility projects costing between $10 million and hundreds of millions of dollars, including the ongoing project in Denver. This statement discusses VA construction management issues, specifically, (1) the extent to which the cost, schedule, and scope at Denver and other major medical-facility projects have changed and the reasons for these changes; (2) actions VA has taken since 2012 to improve its construction management practices; and (3) VA's response to GAO's recommendations for further improvements in its management of these construction projects. This statement is based on GAO's April 2013 report (GAO-13-302); its May 2013 (GAO-13-556T), April 2014 (GAO-14-548T), and January 2015 (GAO-15-332T) testimonies; and selected updates on VA projects located in Denver, Colorado; Las Vegas, Nevada; New Orleans, Louisiana; and Orlando, Florida. To conduct these updates, GAO obtained documentation from VA in April 2015. In April 2013, GAO found that costs substantially increased and schedules were delayed for the Department of Veterans Affairs' (VA) largest medical-facility construction projects, located in Denver, Colorado; Las Vegas, Nevada; New Orleans, Louisiana; and Orlando, Florida. In comparison with initial estimates, the cost increases for these projects now range from 66 percent to 427 percent, and delays range from 14 to 86 months. Since the 2013 report, some of the projects have experienced further cost increases and delays because of design issues. For example, as of April 2015, the cost for the Denver project had increased by nearly $930 million, and the completion date for this project is unknown. In its April 2013 report, GAO found that VA had taken some actions since 2012 to address problems managing major construction projects.
Specifically, VA established a Construction Review Council in April 2012 to oversee the department's development and execution of its real property programs. VA also took steps to implement a new project delivery method, called Integrated Design and Construction, which involves the construction contractor early in the design process to identify any potential problems early and speed the construction process. However, in Denver, VA did not implement this method early enough to garner the full benefits of having a contractor early in the design phase. VA has taken actions to implement the recommendations in GAO's April 2013 report. In that report, GAO identified systemic reasons that contributed to overall schedule delays and cost increases at one or more of four reviewed projects and recommended ways VA could improve its management of the construction of major medical facilities. In response, VA has issued guidance on assigning medical equipment planners to major medical facility projects who will be responsible for matching the equipment needed for the facility in order to avoid late design changes leading to cost increases and delays; developed and disseminated procedures for communicating to contractors clearly defined roles and responsibilities of the VA officials who manage major medical-facility projects to avoid confusion that can affect the relationship between VA and the contractor; and issued a handbook for construction contract modification (change-order) processing that includes milestones for completing processing of modifications based on their dollar value and took other actions to streamline the change order process to avoid project delays. While VA has implemented GAO's recommendations, the impact of these actions may take time to show improvements, especially for ongoing construction projects, depending on several issues, including the relationship between VA and the contractor. 
In its April 2013 report, GAO recommended that VA (1) develop and implement agency guidance for assignment of medical equipment planners; (2) develop and disseminate procedures for communicating to contractors clearly defined roles and responsibilities of VA officials; and (3) issue and take steps to implement guidance on streamlining the change-order process. VA implemented GAO's recommendations.
SIPC’s mission is to promote confidence in securities markets by seeking to return customers’ cash and securities when a broker-dealer fails. SIPC provides advances for these customers up to the SIPA protection limits: $500,000 per customer, except that claims for cash are limited to $250,000 per customer. SIPC is governed by a seven-member board of directors. Its membership consists, generally, of brokers or dealers registered under section 15(b) of the Securities Exchange Act of 1934. Membership is mandatory for all registered broker-dealers that do not meet one of the limited statutory exemptions. As of December 31, 2010, SIPC had 4,773 members. While SIPC is not a federal agency, it is subject to federal oversight. Under SIPA, SEC exercises what the U.S. Supreme Court has recognized as “plenary,” or general, supervisory authority over SIPC. Specifically, SIPC bylaws and rules are subject to SEC review. SEC also may require SIPC to adopt, amend, or repeal any bylaw or rule. In addition, SEC can participate as a party in any judicial proceeding under SIPA and can file an application in the U.S. District Court for the District of Columbia for an order compelling SIPC to carry out its statutory obligations. Further, SIPA authorizes SEC to conduct inspections and examinations of SIPC, and requires SIPC to furnish SEC with reports and records that it believes are necessary or appropriate in the public interest or to fulfill the purposes of SIPA. All seven members of SIPC’s board of directors are appointed by federal officials: one is appointed by the Secretary of the Treasury and one by the Federal Reserve Board, from among the officers and employees of those agencies, and five are appointed by the President, subject to Senate confirmation. SIPA established a fund (SIPC fund) to pay for SIPC’s operations and activities.
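The per-customer protection limits described above can be expressed as a simple cap. The following is a minimal sketch under a simplified reading of the statute (the function name and the claim amounts are our own, for illustration only):

```python
# Simplified sketch of the SIPA advance limits described above: up to
# $500,000 per customer in total, of which claims for cash are limited
# to $250,000. This is an illustration, not a statement of the statute.

TOTAL_LIMIT = 500_000
CASH_LIMIT = 250_000

def max_advance(securities_claim, cash_claim):
    """Maximum SIPC advance for a customer's securities and cash claims."""
    cash_covered = min(cash_claim, CASH_LIMIT)
    securities_covered = min(securities_claim, TOTAL_LIMIT - cash_covered)
    return cash_covered + securities_covered

print(max_advance(600_000, 0))        # 500000: securities capped at the total limit
print(max_advance(0, 300_000))        # 250000: cash capped at the cash limit
print(max_advance(400_000, 200_000))  # 500000: combined claims hit the total limit
```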
SIPC uses the fund to make advances to satisfy customer claims for missing cash and securities, including notes, stocks, bonds, and certificates of deposit. The SIPC fund also covers the administrative expenses of a liquidation proceeding when the general estate of the failed firm is insufficient; these include costs incurred by a trustee, trustee’s counsel, and other advisors. SIPC finances the fund through annual assessments, set by SIPC, on all member firms, plus interest generated from its investments in Department of the Treasury (Treasury) notes. If the SIPC fund becomes, or appears to be, insufficient to carry out the purposes of SIPA, SIPC can borrow up to $2.5 billion from the Treasury through SEC, whereby SEC would borrow the funds from the Treasury and relend them to SIPC. Figure 1 shows the SIPC fund’s balance over the past decade, with the balance falling after the 2008 financial crisis and beginning to recover in 2010. According to SIPC senior management, recent demands on the fund, including from the Madoff case, coupled with a change in SIPC bylaws increasing the target size of the fund from $1 billion to $2.5 billion, led SIPC to impose new industry assessments that total about $400 million annually. The assessments, equal to one-quarter of 1 percent of net operating revenue, will continue until the $2.5 billion target is reached, according to SIPC senior management. The new assessments replaced a flat $150 annual assessment per member firm. Under the new levies, the average assessment for 2010 was $91,755 per firm, with a median of $2,095, according to SIPC. See appendix II for a history of assessments and assessment rates for the SIPC fund.
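The revenue-based assessment formula described above is simple to express; a minimal sketch (the revenue figure below is our back-calculation from the reported average, not a figure from the report):

```python
# Sketch of the SIPC assessment described above: one-quarter of 1 percent
# (0.25%) of a member firm's net operating revenue, which replaced the
# prior flat $150 annual assessment per firm.

ASSESSMENT_RATE = 0.0025  # one-quarter of 1 percent

def annual_assessment(net_operating_revenue):
    """Annual assessment owed under the revenue-based levy."""
    return net_operating_revenue * ASSESSMENT_RATE

# A firm with $36,702,000 in net operating revenue would owe $91,755,
# matching the average 2010 assessment SIPC reported (the revenue figure
# is illustrative only).
print(round(annual_assessment(36_702_000)))  # 91755
```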
SIPA authorizes SIPC to begin a liquidation action by applying for a protective order from an appropriate federal district court if it determines that one of its member broker-dealers has failed or is in danger of failing to meet its obligations to customers and one or more additional statutory conditions are met. The broker-dealer has an opportunity to contest the protective order application. If the court issues the order, the court appoints a “disinterested” trustee selected by SIPC, or, in certain cases, SIPC itself, to liquidate the firm. Under SIPA, SIPC has sole discretion to select a trustee and trustee’s counsel for the liquidation of a member broker-dealer firm. SEC has no statutory role in the selection of the trustee or trustee’s counsel. SIPC attempts to match the size of the engagement with the capabilities of service providers. If SIPC were not to act immediately, SEC could opt to seek court appointment of an SEC receiver, pending SIPC action, according to SIPC senior management. After SIPC makes its selection and the trustee is appointed, the bankruptcy court holds a disinterestedness hearing, at which interested parties can object to the selected individual and the firm named as counsel. The district court also orders removal of the liquidation proceeding to the federal bankruptcy court for that district. To the extent that it is consistent with SIPA, the proceeding is conducted pursuant to provisions of the Bankruptcy Code. While SIPC designates the trustee, that person, once judicially appointed, becomes an officer of the court. As such, the trustee exercises independent judgment and does not serve as an agent of SIPC. Indeed, SIPC-designated trustees and SIPC have occasionally taken opposing legal positions in liquidation proceedings.
Under SIPA, the trustee must investigate facts and circumstances relating to the liquidation; report to the court facts indicating fraud, misconduct, mismanagement, or irregularities; and submit a final report to SIPC and others designated by the court. Also, the trustee is to periodically report to the court and SIPC on his or her progress in distributing cash and securities to customers. The bankruptcy court is to grant the trustee and trustee’s counsel “reasonable compensation” for services rendered and reimbursement for proper costs and expenses incurred in connection with the liquidation proceeding. Promptly after being appointed, the trustee must publish a notice of the proceeding in one or more major newspapers, in a form and manner determined by the court. The trustee also must see that a copy of the notice is mailed to existing and recent customers listed on the broker-dealer’s books and records, and provide notice to creditors in the manner the Bankruptcy Code prescribes. Customers must file written statements of claims. The notice typically informs customers how to file claims and explains deadlines. Two deadlines apply. One is set by the bankruptcy court supervising the proceeding, and the other by SIPA. The bankruptcy court deadline for filing customer claims applies to customer claims for net equity and may not exceed 60 days after the date that notice of the proceeding is published. Failure to meet the deadline can affect whether a customer claim is satisfied with securities or cash in lieu of securities. The SIPA deadline occurs 6 months after the publication date. SIPA mandates that the trustee cannot allow any customer or general creditor claim received after the 6-month deadline, except claims filed by the United States, any state or local government, or certain infants and incompetent persons (although a request for an extension must be filed before the 6-month period has lapsed).
Once filed, claims undergo several stages of review, according to the Trustee. First, the Trustee’s claims agent reviews claims for completeness; if information is found to be missing, the claims agent sends a request for additional information. Next, the Trustee’s forensic accountants review each claim form, information gathered from the Madoff firm’s records regarding the account at issue, and information submitted directly by the claimant. The Trustee uses the results of this review in making his determination of the claim. Finally, claims move to SIPC, where a claims review specialist provides a recommendation to the Trustee on how each claim should be determined. Once that recommendation has been made, the Trustee and trustee’s counsel review it, as well as legal or other issues raised previously. When the Trustee has decided on resolution of a claim, he issues a determination letter to the claimant. The letter also informs claimants of their right to object to the determination and how to do so. The bankruptcy court judge overseeing the liquidation rules on a customer’s objections after holding a hearing on the matter. Decisions of the bankruptcy court may be appealed to the appropriate federal district court, and then upward through the federal appellate process. As of January 27, 2012, the Trustee had received 16,519 customer claims in the Madoff proceeding, and had reached determinations on all but two of them. Figure 2 shows a timeline of key events in the Madoff liquidation. A SIPC liquidation of a member broker-dealer begins when either SEC or a securities self-regulatory organization, such as the Financial Industry Regulatory Authority, recommends that a firm’s failure may require SIPC assistance, usually because of theft or other misuse of customer assets and insolvency. If SIPC’s president, general counsel, and vice president for operations agree that a case should be opened, the SIPC president requests authority from the SIPC board chair to begin the action.
Upon receiving this authority, the SIPC president selects a trustee and trustee’s counsel after consultation within SIPC. According to SIPC senior management, the SIPC board does not vote on the selections. Instead, the selection relies on the judgment of SIPC senior management in what they describe as a relatively narrow field of specialty. SIPC senior management told us they attempt to match the size of the liquidation proceeding with the capabilities of the individuals and firms that will perform the liquidation. Typically in SIPC cases, the firm selected to act as the trustee’s counsel is the same law firm of which the trustee is a member, and the statute explicitly permits this. According to SIPC senior management, having a trustee from the same law firm increases efficiency and cuts costs, as it provides better communication and allows the trustee to make better use of legal resources. To assist in selection of a trustee or trustee’s counsel, SIPC maintains a file of candidates from across the country, which contains information such as professional experience and billing rates, and it subscribes to an information service that provides background information and ratings on lawyers and law firms, and identifies areas of specialization. SIPC informally assembles its roster from multiple sources, including inquiries from firms interested in SIPC business and SIPC’s experience with firms it encounters in legal proceedings. Where SIPC is unfamiliar with local practitioners, it will seek recommendations from SEC staff and local judges. Among firms new to its roster, SIPC seeks to build their experience by using them as trustee’s counsel in relatively small cases in which SIPC itself acts as trustee, or by having them serve as counsel in matters in which the SIPA trustee or trustee’s counsel discover during an investigation a previously unknown conflict of interest, according to SIPC senior management. 
At the conclusion of a case, SIPC senior management prepares a legal and accounting evaluation of service providers used. Included in this evaluation is a recommendation whether to use the service provider again. If SIPC staff recommends against a provider, SIPC senior management told us, the provider is less likely to be selected in the future. We sought to review such evaluations, but SIPC senior management declined to provide them to us on the grounds they cover privileged attorney work-product information. According to SIPC senior management, the selection of the Madoff trustee followed these past practices. Specifically, according to senior management, the SIPC President received a call from SEC on December 11, 2008, advising him that Madoff had just turned himself in to law enforcement and had admitted to a massive fraud at his firm. Because of the likely size and complexity of this case, SIPC senior management told us that selecting an experienced attorney to act as trustee would be important, which limited the field of potential trustees. Upon learning of the failure of the Madoff firm, SIPC senior management used their experience and judgment to initially identify four potential trustees from their pool of candidates, including Mr. Picard. The three others were a former New York municipal finance official, who was a lawyer and accountant but had not done a SIPC case and was not a member of a law firm; an experienced liquidation attorney who was already busy with another large financial firm failure; and another candidate from a large New York law firm with extensive bankruptcy experience, but that law firm had a disqualifying conflict of interest. Because of the situations of the other candidates, SIPC contacted Mr. Picard on the morning of December 11, 2008, and asked him to serve as trustee for the Madoff liquidation. As described later, the law firm Mr. Picard would soon join, Baker & Hostetler LLP (Baker Hostetler), was named as the trustee’s counsel. 
SIPC senior management told us that SIPC followed a similar process in the recent large failure of MF Global, Inc., contacting 5 candidates, drawn from an initial field of about 10, before the selection was made. Although SIPC senior management said the process in selecting the Madoff trustee followed past selection practices, such practices are not documented. According to SIPC senior management, current SIPC policies do not document the decision process or the criteria applied in making selections because senior managers rely on their judgment and familiarity with individuals with appropriate experience. Further, they noted they must act quickly to get a trustee in place for a failed firm as soon as possible, because broker-dealer firms often fail with little advance warning. Moreover, they said that getting a trustee in place quickly to take over operations of the firm is essential to preserving assets and maximizing returns to customers. However, federal and private sector standards for internal control recommend that an entity document its system of internal controls, by such means as management directives, policies, operating instructions, and written manuals. In the case of trustee selection, documented policies and criteria would allow SIPC’s oversight agency, SEC, to more effectively assess whether SIPC follows consistent practices in selecting trustees, as well as increase the transparency of SIPC’s decisionmaking. SEC officials told us that having SIPC better document its selection process would improve SEC’s ability to oversee SIPC activities, in such areas as determining the extent to which SIPC considered the fees charged by trustees or how it addressed potential conflict-of-interest situations. SEC officials told us they plan to discuss better documenting the trustee selection process and criteria with SIPC. SIPC also has not documented its outreach process for identifying potential candidates to serve as trustees. 
SIPC senior management told us they do not make formal efforts to expand the trustee candidate roster, such as by regularly or systematically identifying or approaching other parties. They said they view such efforts as unnecessary or impractical because the number of attorneys who conduct work relevant to broker-dealer bankruptcies is small enough that SIPC is already aware of most of them, or the attorneys already are familiar with SIPC. Moreover, according to SIPC senior management, actively soliciting candidates could be burdensome for SIPC, by producing too much information about too many firms that can quickly become outdated. They told us such an undertaking would duplicate information already available through its information service subscription, and that because SIPA liquidations can be infrequent and in more remote areas of the country, it is more efficient to obtain current information on qualified firms through the information service and the firms’ websites. However, undertaking additional efforts to more systematically identify other candidates, and to document this process, could help ensure that the range of choices, which SIPC senior management acknowledges is currently limited to a small group with the requisite skills, reflects the widest capabilities available. Access to a potentially wider pool of candidates could help ensure that SIPC is better equipped to meet its responsibilities. SEC officials told us that SIPC’s goal is to use individuals and law firms capable of high-quality work, to avoid potentially damaging legal decisions that could hinder SIPC in future liquidations. Having a documented, formal outreach process would allow SEC to better assess whether SIPC’s outreach efforts are sufficient for ensuring that SIPC is identifying the optimal pool of candidates. 
SEC officials told us they likely would discuss with SIPC senior management whether its roster of candidates is sufficiently broad, as a wider pool could preserve quality while offering the opportunity for lowering costs. The trustee that SIPC selected for the Madoff liquidation has considerable industry and broker-dealer liquidation experience. He served as the first U.S. Trustee for the Southern District of New York, where his duties included appointing and supervising trustees who administer consumer debtors’ bankruptcy estates and corporate reorganization cases, and who litigate bankruptcy-related matters. He appointed the trustee for reorganization of O.P.M. Leasing Services, Inc., a several-hundred-million-dollar Ponzi scheme case involving nonexistent computer equipment leases. He was on the staff of the SEC for about 8 years, where he was involved with corporate reorganization cases and also served as an assistant general counsel. In private practice, he was appointed the receiver in connection with an SEC injunction action against David Peter Bloom, a Ponzi scheme case involving investor cash losses of about $13 million. Additionally, he has been a trustee in 10 other SIPC cases beginning in 1984, although these cases were much smaller than the Madoff case, which is, by some measures, SIPC’s largest case ever. In each case as trustee, Mr. Picard said SIPC contacted him and asked whether he would take the position. Subsequently, Mr. Picard said he has indicated to SIPC his continuing interest over the years in serving as a trustee, but did not solicit particular cases. Table 1 summarizes the Trustee’s previous SIPC cases. In valuing customer claims filed as part of the Madoff liquidation, the Trustee selected the net investment method (NIM), which determines the amounts that customers are owed as the amounts they invested less amounts withdrawn. 
The Trustee, supported at the outset of the case by SIPC and, after nearly a year of analysis, by SEC as well, decided against valuing claims based on amounts shown on customers’ final statements. The parties said this was on the grounds that NIM met statutory requirements, and that using statement amounts would effectively sanction the Madoff fraud by establishing claims according to the fictitious profits Madoff reported. NIM has consistently been used in SIPC liquidations involving Ponzi schemes, and the two courts that have considered the net equity issue in the Madoff case—the bankruptcy court and the U.S. Court of Appeals for the Second Circuit—have affirmed the Trustee’s decision on this method for determining customer claims. In a SIPA liquidation, it is the trustee that decides on the method for determining customer claims. SIPA refers to this as calculating a customer’s “net equity,” and the statute generally provides that this amount is what would have been owed to the customer if the broker-dealer had liquidated all their “securities positions,” less any obligations of the customer to the firm. The statute also provides that the trustee shall make payments to customers “insofar as such obligations are ascertainable from the books and records of the debtor or are otherwise established to the satisfaction of the trustee.” In SIPA liquidations not involving fraud, trustees typically determine that the amounts owed to customers match the amounts shown on their final statements—that is, the “final statement method” (FSM). In particular, according to SEC officials, in most SIPA liquidations, the books and records of the broker-dealer match the amounts shown on customers’ final statements. In many cases in which a broker-dealer fails, customer accounts are transferred to another broker-dealer firm. 
However, in cases involving fraud, amounts in customer accounts may not correspond to statement amounts—as in the Madoff case—and SIPA does not have any particular provisions for fraud cases beyond its general terms. The Trustee told us that soon after the case began, and once he realized the investment advisory unit of the Madoff firm was a Ponzi scheme, he concluded that NIM—also known as “money-in/money-out”—was appropriate. As noted earlier, this method determines customer net equity as customer deposits less customer withdrawals; it does not rely upon holdings reported on customers’ final statements. Under NIM, Madoff claimants are divided into two categories: “net winners,” who have withdrawn more than the amount they invested with the Madoff firm, and “net losers,” who have withdrawn less than they invested. Following the firm’s closure, the Trustee received 16,519 claims and denied most of them, chiefly because customers did not have accounts with the Madoff firm. The Trustee said the firm had 4,905 active accounts at the time of closure. Determination of claim amounts under NIM resulted in 2,356 net loser accounts and 2,459 net winner accounts. According to the Trustee, the chief reason for rejecting FSM in favor of NIM was that adopting customer statement amounts as the basis for account values would legitimize Madoff’s fraud and cause account values to hinge on the fictitious trading and returns that Madoff reported to investors. The Trustee took the position that customer statements did not show “securities positions” that could be used for the net equity determination, because the statements were fictitious. Instead, the only Madoff records that reflected reality were those detailing the cash deposits and withdrawals of customers. Thus, the Trustee asserted that he was required to determine net equity based on these records, because they provided the only obligations that could be ascertained and established from the firm’s books and records. 
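The net investment arithmetic described above can be sketched in a few lines of Python. The function names and the account figures below are our own illustrations, not drawn from the case record; a real claims determination also involves account-by-account review of the firm’s books and records.

```python
def net_investment(deposits, withdrawals):
    """Net investment method (NIM, or 'money-in/money-out'): net equity is
    total cash deposited minus total cash withdrawn. Profits reported on
    the fictitious statements play no role in the calculation."""
    return sum(deposits) - sum(withdrawals)

def classify(deposits, withdrawals):
    """'Net winners' withdrew more than they invested; 'net losers'
    withdrew less. Under NIM, net winners' claims are denied."""
    return "net loser" if net_investment(deposits, withdrawals) > 0 else "net winner"

# Illustrative account (hypothetical figures): $500,000 deposited over time,
# $350,000 withdrawn. The allowed NIM claim is $150,000, regardless of any
# larger balance shown on the customer's final statement.
claim = net_investment([500_000], [350_000])   # 150,000
status = classify([500_000], [350_000])        # "net loser"
```

Under FSM, by contrast, the same account’s claim would simply be the final statement balance, which in the Madoff case reflected fictitious trading gains.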
The Trustee also said that NIM was the most equitable method for Madoff customers. According to the Trustee, using FSM would allow some customers to retain fictitious “profits” they had withdrawn that actually were misappropriated investments of other customers. Moreover, FSM would divert the limited customer assets available from the liquidation by paying these fictitious profits at the expense of reimbursing real losses. The Trustee also said FSM could conflict with his obligation to recover through clawback actions fictitious profits that Madoff paid to some investors. If the Trustee were less able to make such recoveries, less money would be available to return to customers. The Trustee told us that he is not aware of any Ponzi case in which FSM was used to value customer claims. We also found that the Trustee’s selection of NIM was consistent with use of NIM in previous SIPA liquidations involving Ponzi schemes. According to SIPC data, among seven Ponzi scheme cases since 1995, including the Madoff case, all used NIM, in whole or in part, depending on facts and circumstances of individual accounts. (See table 2.) Although the Trustee decided to use NIM to value Madoff customer claims, he also chose to recognize a portion of customer statement amounts—specifically, those dated before April 1, 1981. The Trustee told us this decision was due to gaps in available Madoff or third-party records prior to that date, and that beginning with April 1, 1981, more complete and reliable records became available. The Trustee said he chose to recognize these older customer statement amounts in an attempt to favor customer interests, even though the amounts likely reflect some fictitious profits. The impact of this decision, however, is relatively minor, according to the Trustee—recognizing about $165 million in 371 accounts, equal to about 1 percent of total claims allowed and about 15 percent of total accounts with approved claims. 
Questions have been raised about whether the effect on the SIPC fund influenced selection of the net equity method, as acceptance of higher customer claims under FSM could have affected SIPC’s liability under the coverage it provides to investors. However, the Trustee told us that the effect on the SIPC fund did not enter into his selection, and that he did not discuss how the use of NIM would affect the fund with either SIPC or SEC. Like the Trustee, SIPC quickly concluded that NIM was the appropriate method for determining customer claims, because of the fraud in the case and because using FSM would effectively sanction Madoff’s activities. According to SIPC senior management, the focus in a net equity determination is on individual customer transactions—that is, officials do not consider at the case level which method might be best. In the Madoff case, the transactions were alike—fictitious. As a result, applying a single method of determining net equity to the entire Madoff case was appropriate. Furthermore, while trading and reported investment profits were fictitious, records were available on individual customer deposits and withdrawals. Such records make NIM calculations possible, according to SIPC. SIPC senior management emphasized that final customer account statements are not the only “books and records” of the failed firm, as cited in the statute. SIPC senior management told us that when the Madoff case began, they quickly began discussions aimed at producing agreement among SIPC, the Trustee, and SEC on the method for determining net equity. According to SIPC, such agreement was important in order to avoid a situation that had arisen in a previous case in which SEC took a position in court at odds with SIPC. Further, SIPC senior management said they wanted to reach consensus early in the liquidation out of concern that SEC would come under pressure to change its position as the extent of customer losses became clearer. 
By February 2009, SIPC senior management believed that based on their discussions, they had achieved consensus with SEC on use of NIM. These discussions included a meeting with the SEC Chairman, who, according to SIPC, reported that a majority of commissioners supported NIM. SIPC senior management noted that NIM has unpleasant consequences in some cases, but that honoring final statements would mean others would receive less than the amount of their own contributions. Further, adopting FSM would have put at risk a large majority of asset recoveries the Trustee has secured, SIPC senior management told us, because some funds withdrawn by customers that otherwise could be subject to recovery actions under NIM would instead be recognized as legitimate under FSM and thus not subject to recovery. Although initially agreeing on use of NIM, SEC staff continued to research other options in a process that would extend until November 2009. SEC officials told us their preliminary view in the early days of the case was that NIM appeared to be the only feasible alternative, because it was the most consistent with the statute and fraud law related to Ponzi schemes. However, they said there was no official SEC position at the time. SEC’s continued examination was of great concern to SIPC, according to SIPC senior management, who told us they saw the continuing analysis as a reversal of the earlier support for NIM. SIPC also said that SEC’s continuing analysis raised concerns because SIPC needed certainty on the method for valuing claims in order to begin processing and paying them. SEC officials told us they agreed it was important to settle on a method as quickly as possible, but that early in the case, a considerable amount of research remained necessary to formulate a recommendation for the commission’s consideration. They said SEC’s task was not to simply review SIPC’s determination, but rather to examine the issue independently. 
With SIPC under considerable pressure to start making payments to Madoff customers, SEC’s position was that the Trustee had to do what he thought was correct. If SEC came to a different view later, and the Trustee or the bankruptcy court determined changes needed to be made, claims payments would have to be adjusted as necessary. In a SIPA liquidation, SEC seeks to provide the maximum recovery possible under the law for former customers, according to SEC officials. Toward that end, in addition to NIM and FSM, SEC staff considered several net equity methods as part of their review:

• NIM plus an adjustment based on Treasury notes. The adjustment would apply an interest rate based on the yield of 13-week Treasury notes for periods in which Madoff customer statements indicated customer holdings were not in securities.

• NIM plus an alternative adjustment based on Treasury notes. Under this alternative, the adjustment would be made on the assumption customers had been fully invested in 13-week Treasury notes for the life of their account. This revision was in recognition that positions reported on Madoff statements were fabricated.

• A combination of FSM and NIM, under which FSM would be used to pay claims against the SIPC fund up to the maximum protection of $500,000, and NIM would be used for claims against assets recovered by the trustee.

• NIM plus an adjustment for inflation (described more fully later in this report).

During their review, SEC officials met with outside parties who advocated for FSM. These outside parties advanced arguments including that the Trustee’s view of net equity was at odds with the statute and its legislative history and purpose. In a letter to SEC, several law firms noted that the typical Madoff customer received written trade confirmations and monthly statements, which they said are the basis for determining net equity under the statute. 
Further, they said the legislative history shows that Congress intended customers to have valid net equity claims even when securities reflected on their confirmations and account statements were never purchased. The outside parties also argued that the Trustee’s position would erode investor confidence at a time—during the financial crisis— when markets and the securities industry could least afford it. They asked that SEC attempt to persuade the Trustee to reverse course, or if that was unsuccessful, seek a court order to that effect. SEC officials characterized the meetings as an opportunity to listen and ask questions. They said they did not make any decisions based solely on information presented in these meetings, and that in general, the outside interests did not advance any new arguments. The clients of the law firms were undisclosed, but according to SIPC senior management, the parties represented were Madoff customers subject to large clawback actions. The SEC Inspector General told us that he does not believe there were any improper motivations in the lobbying by the outside groups, but that such meetings can create appearance problems because other parties, perhaps those with fewer resources and which SEC did not hear, might have had a different position. SEC officials told us they were open to meeting with any parties and did not turn down any requests to meet during this time. Over the course of 2009, SEC staff conducted various analyses of past cases and alternative approaches for valuing customer claims. After receiving various memorandums and briefings, SEC commissioners voted in November 2009 to approve the staff’s request to submit a brief to the bankruptcy court supporting the Trustee’s use of NIM. As one commissioner said at the time, given the difficult situation it faced, the commission did all that it could do legally and equitably in opting for NIM. 
Both SIPC senior management and SEC officials agreed with the Trustee that the effect on the SIPC fund played no role in the selection of NIM. Both said their approach was to make their best determination under the statute, without regard to cost. They told us they considered any impact on the fund only to identify what actions would be necessary for SEC to extend a loan to SIPC, to be funded by SEC borrowing from Treasury, should that be necessary to supplement fund balances to honor coverage commitments. Further, even if FSM had been selected, the SIPC fund would not have become insolvent, SIPC senior management told us. Under FSM, based on the SIPC coverage limit of $500,000 per customer, the SIPC fund’s maximum exposure would have been $2.1 billion, compared to an expected $889 million outlay under NIM. The use of NIM, rather than relying on final statement amounts, makes determination of customer net equity a more expensive process, SIPC senior management and SEC officials told us. But as with impact on the SIPC fund, they said that cost does not factor into selection of method. Instead, SIPC senior management told us, the higher expenses are necessary, because of the investigation required after Madoff’s statements to customers were found to be fabricated. In any case, use of FSM would not have avoided substantial administrative costs, according to SIPC senior management. Such costs would still have totaled several hundred million dollars, they said, to conduct the liquidation, pursue recovery actions, and process claims. After the Trustee chose NIM and began to settle claims based on the net investments that Madoff customers had made to their accounts, a number of customers objected to this approach. As a result, the Trustee petitioned the bankruptcy court in August 2009 for proceedings to affirm his choice of NIM. 
Opposing claimants argued that the Trustee must use FSM because Madoff statements reflected securities positions that they had every reason to believe were accurate and upon which they had relied. They emphasized SIPA’s purpose of reinforcing investor confidence and cited the act’s legislative history as indicating that securities positions set forth in broker-dealer statements need not be accurate to be covered under SIPA. The opposing claimants further argued that Madoff’s profits, while fictitious, may have been received and spent years ago, that customers paid taxes on them, and may have foregone other investment opportunities in reliance on investment results shown in their statements. They further maintained that, at least in the case of advances from the SIPC fund, use of FSM would not limit payments to reimburse net losers for their losses. This was because they viewed the SIPC fund as a source for paying customer claims that operated independently of any customer assets recovered by the Trustee. Thus, they claimed all customers, both net winners and losers, could receive up to $500,000 from the SIPC fund without affecting customer assets recovered during the liquidation. Both sides contended that precedent dealing with SIPA liquidations involving Ponzi schemes supported their calculation method. In March 2010, the bankruptcy court affirmed the Trustee’s determination, agreeing with the Trustee, SIPC, and SEC on their key arguments. 
The court agreed with the Trustee that net equity can be based on “securities positions” only to the extent that such positions are “ascertainable from the books and records of the debtor” or “otherwise established to the satisfaction of the trustee.” The court further agreed that in a Ponzi scheme like Madoff’s—in which no securities were ever ordered or acquired—“securities positions” do not exist, and the trustee cannot pay claims based on the false premise that customer positions are what the account statements purported them to be. The court added that legitimate customer expectations based on false account statements “do not apply where they would give rise to an absurd result.” It said the Madoff customer statements “were bogus and reflected Madoff’s fantasy world of trading activity, replete with fraud and devoid of any connection to market prices, volumes, or other realities.” Instead, the court said the only verifiable amounts evident from the Madoff firm’s books and records are customer cash deposits and withdrawals. (For a fuller discussion of legal issues involving determination of net equity in the Madoff case, see appendix III.) The court also found that fairness and “the need for practicality” favored NIM. It concluded that payments from the SIPC fund were inextricably connected to payments from customer assets, rejecting the argument by FSM proponents to the contrary. Thus, use of FSM for SIPC advance payments would diminish the amount available for distribution from the customer asset fund. Because there are limited customer funds, any funds paid to reimburse fictitious profits would no longer be available to pay other claims. The court also agreed with the Trustee that NIM was more compatible with efforts to recover assets. The court said that customer withdrawals made in furtherance of a Ponzi scheme, and specifically, withdrawals based on fictitious profits, can be subject to recovery actions. 
NIM harmonizes the definition of net equity with clawback actions, by similarly discrediting withdrawals based on fictitious profits, and unwinding, rather than legitimizing, the fraud. The court noted that FSM, by contrast, would base compensation to customers on the same withdrawals the trustee has the power to seek to recover. In August 2011, the Court of Appeals for the Second Circuit affirmed use of NIM as the appropriate method in the Madoff case. The appeals court found that while SIPA does not prescribe a single method for determining net equity in all situations, the Trustee’s use of NIM was the best proposed method given the statutory definition of net equity. The court noted that use of FSM would have the absurd effect of legitimizing the arbitrarily assigned paper profits that Madoff’s fraud produced. The court emphasized that while FSM may be appropriate in typical situations, the nature of the Madoff Ponzi scheme, including the “extraordinary facts” of the Madoff fraud, points toward use of NIM. The court rejected the claimants’ characterization of SIPA as providing an “insurance guarantee” against Madoff’s fraud; rather, it said, SIPA does not clearly protect against all fraud committed by brokers, or insure investors against all losses. According to information we reviewed, the difference in customer net equity under the two approaches is significant, because during the decades of his fraud, Madoff reported considerable investment gains to his investors. According to SIPC, customer claims allowed under NIM total about $17.3 billion, while under FSM, the total would be approximately $57.2 billion. Table 3 shows a comparison of claims, broken down by account size, under the as-adopted NIM and the as-proposed FSM. As table 3 shows, the number of accounts that potentially would have allowable claims under FSM nearly doubles from the corresponding number under NIM. 
This is because FSM generally accepts customer statements as accurate representations of holdings, and thus even those customers that withdrew more than they invested—net winners—would also be entitled to have their claims approved. Total account value would more than triple. However, this does not necessarily mean that customers would recover their statement amounts under FSM. Rather, the amounts distributed to customers will depend on how much the Trustee can recover during the liquidation. If the amount recovered is less than the amount of allowed claims—as is currently expected—then customers receive payments based on their relative share of total claims. Thus, the significance of using different methods for calculating net equity is that the different methods can affect customers’ relative shares of total claims. In turn, that affects the amount of money they ultimately receive. Although SEC supported the Trustee’s decision to use NIM, SEC’s position differed from the Trustee’s and SIPC’s in one respect: When SEC commissioners voted to support NIM, they also said customer deposits and withdrawals should be adjusted for inflation. According to SEC staff, such adjustments would account for the length of time the Madoff firm held customer funds. This has become known as the “constant dollar approach.” To date, neither the bankruptcy court nor the appeals court has addressed the merits of the SEC position. SEC officials told us they see the constant dollar approach as a way to treat customers more fairly and equally. SEC’s consideration of the constant dollar approach arose from the agency review, as described earlier, of potential methods for calculating customer claims. SEC officials told us that after they rejected FSM and adjustments based on Treasury notes, study continued on whether another method consistent with SIPA would allow customers to recover more money. 
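The pro rata mechanics described above can be sketched as follows. The account labels and dollar amounts are hypothetical, and the sketch deliberately ignores complications such as SIPC advances, the $500,000 coverage limit, and interim distributions.

```python
def pro_rata_distribution(allowed_claims, recovered):
    """When recovered assets fall short of total allowed claims, each
    customer is paid in proportion to their share of total claims.
    A simplified sketch: allowed_claims maps account -> allowed claim."""
    total = sum(allowed_claims.values())
    return {acct: recovered * claim / total
            for acct, claim in allowed_claims.items()}

# Hypothetical case: $4 million in allowed claims, $2 million recovered.
claims = {"A": 1_000_000, "B": 3_000_000}
payouts = pro_rata_distribution(claims, 2_000_000)
# Every customer receives 50 cents on the dollar of their allowed claim,
# so the relative sizes of claims determine relative recoveries.
```

This is why the choice between NIM and FSM matters even when no one is paid in full: it changes each customer’s relative share of whatever pool the Trustee ultimately recovers.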
However, the focus of their efforts shifted from investments that Madoff claimed to have made but did not, and toward the time value of money, pegged to when customers made their investments, so that customers would be treated equivalently in real dollar terms. The concept was that this would recognize the long duration of the Madoff fraud. Under the constant dollar approach, a customer’s series of deposits and withdrawals over time would be adjusted for inflation and converted into dollar amounts that reflect current price levels. The simplest instance would be a single customer deposit made years ago that would be converted into current dollars based on price changes over the specified period. For example, according to the Consumer Price Index, the value of a $10,000 deposit made 20 years ago would be $16,156 in 2012 dollars. Calculations would become more involved with multiple deposits and withdrawals over time, but the basic reasoning of converting past transaction amounts into current dollars would be the same, according to SEC officials. SEC officials told us their analysis indicated this approach could be consistent with case law. Although case law has not specifically recognized inflation adjustments, they said, it does provide support for the general notion of seeking to treat investors equally. Translating that concept to the Madoff case, SEC viewed inflation-adjustment as a way to treat customers equally over time, during which price inflation would occur. In a memorandum to commissioners, SEC’s Office of General Counsel said that failing to do so would ignore the effects of inflation on innocent investors and treat early victims of the fraud inequitably compared with later investors. SIPC senior management disagrees with SEC’s analysis and conclusion, saying the statute provides no authority for inflation adjustment and that no such authority can be inferred or implied. 
According to SIPC, determination of net equity is a specified mathematical function, and the notion of adjusting net equity determinations for inflation is an SEC-created approach that the statute does not support. SIPC senior management also noted that adjusting customer claims for inflation has never come up before in any other SIPC case, because the fraud in the Madoff case is atypical in having such a lengthy duration. While inflation calculations likely could be done, there would be large costs in doing so, given the scope of the case and the number of transactions. SIPC senior management further noted that if inflation-adjustment were permitted, the size of some claims would increase. Because the pool of funds to satisfy customer claims is fixed, larger payouts to some could depress payments to others, according to SIPC senior management. This could lead to litigation among customers because some net winners could become net losers. We reviewed one sample of an inflation-adjusted Madoff account that illustrated how claims could change significantly. It showed a beginning balance of $130,000 in 1992, followed by a series of 23 withdrawals totaling $145,900 made through 2008. Thus, the customer had withdrawn $15,900 more than initially invested, and under NIM, is a net winner whose claim would be denied. But after adjusting the sequence of transactions for inflation, based on specific timing and amounts, the customer would become a net loser—having withdrawn $29,829 less, in inflation-adjusted dollars, than originally contributed. The Trustee told us he did not consider a constant dollar approach, as it is not in the statute or supported by case law. He concurred with SIPC that claim amounts could increase considerably. As an example, he said, if a 9 percent annual interest rate, as allowed under New York fraud law, were applied, claims could grow by tens of billions of dollars, from their currently approved $17.3 billion. 
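The constant dollar arithmetic can be illustrated with a simplified sketch. The flat 3 percent inflation rate and the three transactions below are assumptions for illustration only (not actual CPI data, and not the sample account’s 23 withdrawals), so the result does not reproduce the $29,829 figure; it shows only how a nominal net winner can become an inflation-adjusted net loser.

```python
def constant_dollar_net(transactions, factor):
    """Constant dollar approach: restate each deposit (+) and withdrawal (-)
    in current dollars before netting. factor(year) gives current dollars
    per one dollar of that year; the rate below is assumed, not CPI."""
    return sum(amount * factor(year) for year, amount in transactions)

# Assumed flat 3% annual inflation, stated relative to 2012.
factor = lambda year: 1.03 ** (2012 - year)

# Hypothetical account: one early deposit, later withdrawals exceeding it.
txns = [(1992, 130_000), (2000, -60_000), (2008, -85_900)]

nominal = sum(amount for _, amount in txns)   # -15,900: a net winner under NIM
real = constant_dollar_net(txns, factor)      # positive: a net loser in 2012 dollars
```

Because the early deposit is inflated more than the later withdrawals, the adjusted net turns positive even though the nominal net is negative, which is the effect SIPC warned could reshuffle winners and losers.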
Following disclosure of a conflict of interest by a former SEC official in February 2011, SEC plans to reconsider its position supporting adjustment of customer accounts for inflation. With the filing of a clawback suit by the Trustee against SEC's former General Counsel, it became public that the former official and his brothers had inherited a Madoff account from their mother. In a report on the matter, the SEC Inspector General recommended—and the SEC Chairman agreed—that the commission should reconsider the inflation-adjustment issue because of concerns about the former General Counsel's participation in SEC's decision-making process. The involvement of the SEC general counsel's office in the net equity issue began in January 2009, before the former General Counsel took his position on February 23, 2009. Thus, while the former General Counsel became involved in the review, he did not initiate it. The Inspector General recommended that commissioners revote to avoid any possible bias or taint. The SEC Chairman has directed commission staff to review whether commissioners should readopt the constant dollar approach. Through October 31, 2011, the Trustee reported spending of $451.8 million for liquidation activities, with final costs expected to exceed $1 billion through 2014. To date, the two largest components of these costs have been legal costs of the Trustee and trustee's counsel, and costs for consultants. Although the Madoff case is expected to be SIPC's most costly case to date, the ratio of total costs to customer distributions is lower than for some other SIPC cases. Through October 31, 2011, the latest date for which information was available, total administrative costs of the Madoff liquidation—ranging from office expenses to professional services—reached approximately $452 million.
As shown in figure 3, the two major components have been legal costs, chiefly for time spent by the Trustee and his counsel, and consultant costs, for work such as investigating fraudulent activities of the Madoff firm and analyzing customer accounts. Legal costs represent the largest expense, according to a series of interim reports the Trustee has filed with the bankruptcy court, plus other information we reviewed. The Trustee told us that total administrative costs are estimated to reach $1.094 billion through 2014. The $1.094 billion for the Madoff case is approximately double the combined costs of $512.6 million for all 315 previously completed SIPC customer protection proceedings from 1971 through 2010, the latest year for which information was available. Overall, a Ponzi scheme fraud is not necessarily intrinsically more expensive to handle, according to the Trustee. For instance, in the Madoff case, forensic analysis to determine what occurred at the firm has been similar to investigations in other Ponzi scheme cases. However, the Madoff case stands out for the duration of the fraud, its size, and the number of people involved, according to the Trustee, SEC officials, and SIPC senior management. Although the Trustee directs the liquidation, the bulk of the costs of the liquidation are those associated with the legal work performed by attorneys of the law firm acting as the trustee's counsel. That firm, Baker Hostetler, performs work that includes assisting the Trustee's investigation; asset search and recovery, including related litigation; case administration; and document review. In addition to the Trustee's interim reports, periodic cost applications filed with the bankruptcy court for approval contain more detailed information on costs incurred by the Trustee and trustee's counsel.
Our review of these cost applications, which cover from December 2008 through May 2011, found that costs for the Trustee and trustee’s counsel were $230 million for this period (see table 4). These costs reflect a substantial number of hours—597,052—that the Trustee and trustee’s counsel have billed (see table 5). For the most recent reporting period, covering February to May 2011, about 100 partners, who are the most senior staff in the law firm, and 200 associate attorneys, worked on the case. Our review of costs for the Trustee and trustee’s counsel also identified several trends within the overall amounts. Attempts to recover assets are driving costs. As shown earlier in table 4, the Trustee’s costs alone are relatively small compared to the trustee’s counsel costs. Within this larger category, costs for litigation to recover assets have risen sharply to account for a large majority of the trustee’s counsel costs. As of December 2011, the Trustee told us about 1,050 lawsuits have been filed as part of efforts to recover assets on behalf of customers. These recovery actions are international in scope, with the Trustee reporting more than 70 actions involving foreign defendants. For example, actions have been filed in the United Kingdom, Bermuda, the British Virgin Islands, Gibraltar, and the Cayman Islands. According to the Trustee, international investigations have involved identifying the location and movement of assets abroad, becoming involved in litigation brought by third parties in foreign courts, bringing actions before U.S. and foreign courts and government agencies, and hiring international counsel for assistance. As figure 4 shows, asset recovery actions—that is, avoidance or clawback actions—have outpaced all other trustee counsel costs as the case has progressed. According to SIPC senior management, the considerable expenses of the actions have been worthwhile, as the Trustee has produced $8.7 billion in recoveries for customers thus far. 
Partner hours have been declining. In general, billing rates for partners at the trustee’s counsel firm are higher than rates for associate attorneys. Thus, the more work partners handle, the higher the costs; while the more work that associates perform, the lower the costs. Our review showed that partner hours as a fraction of total hours claimed by the trustee’s counsel have been declining steadily, from about 42 percent near the beginning of the case to about 28 percent in the most recent period (see fig. 5). The Trustee told us the partner hours have been declining as case activity has shifted. Through the end of 2010, as the Trustee and trustee’s counsel were busy preparing to file the many complaints brought as part of the liquidation, partners were heavily involved in case preparation and policy decisions. Later, as cases moved into litigation, associate attorneys handled more of the load. Higher-cost people have performed more work. Although the proportion of hours attributable to partners has been declining, we also found that within each category of professional work at the trustee’s counsel—partners, associates, and nonlegal staff—higher-cost people have been performing a larger share of work. We examined the distribution of costs at two points during the Madoff liquidation: the second cost application following start-up of the case (covering May to September 2009), and the most recent cost application (covering February to May 2011). Figure 6 illustrates our findings, showing results for the partner category as an example. Partners whose billing rates are in the top 20 percent (the top quintile) of all billing rates for partners working on the Madoff case accounted for a greater share—about a third—of all partner hours compiled, and more than 40 percent of all partner billings in dollars. 
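The distribution analysis described here can be computed from per-timekeeper billing rates and hours. A sketch, using invented figures rather than the actual Baker Hostetler data:

```python
def quintile_shares(rates_and_hours):
    """Given (billing_rate, hours) pairs, return each rate quintile's share of
    total hours and of total dollar billings, from bottom to top quintile."""
    ranked = sorted(rates_and_hours)          # ascending by billing rate
    n = len(ranked)
    total_hours = sum(h for _, h in ranked)
    total_dollars = sum(r * h for r, h in ranked)
    shares = []
    for q in range(5):
        group = ranked[q * n // 5:(q + 1) * n // 5]
        hours = sum(h for _, h in group)
        dollars = sum(r * h for r, h in group)
        shares.append((hours / total_hours, dollars / total_dollars))
    return shares

# Hypothetical partners: (hourly rate in dollars, hours billed).
partners = [(600, 100), (650, 90), (700, 120), (750, 150), (800, 200)]
shares = quintile_shares(partners)
```

With the invented data above, the top quintile accounts for about 30 percent of hours but about 34 percent of billings, illustrating how a dollar share can exceed an hours share when higher-rate timekeepers also work more hours.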
By contrast, partners whose billing rates are in the bottom 20 percent (the bottom quintile) accounted for a smaller share of activity—about 12 percent of hours compiled, and about 7 percent of all partner billings in dollars. Middle quintiles followed the same trend. We found that similar patterns applied for associate attorneys and for nonlegal staff such as paralegals, clerks, and librarians. The Trustee attributed this trend to differences in billing rates among Baker Hostetler offices. Most case activity takes place in New York, where rates are higher than elsewhere. Attorneys in other offices, where rates are lower, provide assistance to New York-based lawyers, the Trustee said. Limited guidance is available to assess the reasonableness of legal costs, such as those incurred in the Madoff case. The American Bar Association (ABA) publishes "model rules," or recommended professional standards, including a model rule on professional conduct, which addresses legal costs. The rules are only advisory, but according to ABA, nearly every state patterns its professional conduct rules on the ABA model rule. According to ABA, there is no formula for determining whether costs charged in specific situations—or, in the Madoff case, in hundreds of individual instances of litigation—are reasonable. Rather than provide a formula, the ABA model rule focuses on the reasonableness of legal costs and provides a number of qualitative factors that can be considered in evaluating attorney costs. Among the factors are the time and labor required; the novelty and difficulty of the questions involved; the skill needed to perform the legal service properly; and the experience, reputation, and ability of the lawyer(s) performing the services. In addition to the costs for the Trustee and his counsel, there have been a number of other professional costs in the Madoff case. Largest among them, according to the Trustee, have been $178.2 million in consultant costs.
These costs include, for example, forensic accounting services performed as part of the fraud investigation. While legal costs have been increasing, consultant costs have been decreasing, reflecting their prominence earlier in the case (table 6). Other costs include, for example, investment banker fees and SEC receiver expenses. To put the Madoff liquidation in context, we compared its costs with total costs as a percentage of distributions to customers in completed SIPC cases. We grouped these cases on an annual basis, focusing on years in which there were at least $50 million in distributions. As shown in figure 7, the currently estimated total costs of the Madoff liquidation, as a percentage of current recoveries, are within the range of costs incurred in previous SIPC cases. For individual years, the cost percentages have ranged from a low of 0.3 percent (2001) to a high of nearly 40 percent (1990, when there were considerable expenses in one relatively large case). For the Madoff case—which is not yet complete, and as discussed earlier, is atypical—the cost percentage is currently 11 percent, based on the latest estimate of total cost ($1.094 billion), and $8.7 billion in current recoveries from the Trustee's efforts, the $326 million Internal Revenue Service (IRS) settlement, plus an expected $888.5 million in SIPC customer advances. The ratio for the Madoff case could change, depending on future costs incurred and if the Trustee secures additional recoveries. The 13 years shown in figure 7 cover 104 cases that together account for $15.5 billion in customer distributions. Total costs for all these cases equal 2.5 percent of customer distributions. We note, however, that results for one year—2001—reflect almost entirely the outcome of a single large case, in which a firm failed but recoveries were sufficient to reimburse all valid customer claims fully. Excluding 2001, total costs for all cases as a percentage of customer distributions are equal to 9.5 percent.
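The 11 percent figure for the Madoff case follows directly from the amounts cited above:

```python
# Amounts from the report, in billions of dollars.
estimated_total_costs = 1.094
trustee_recoveries = 8.7       # recoveries from the Trustee's efforts to date
irs_settlement = 0.326         # $326 million IRS settlement
sipc_advances = 0.8885         # expected $888.5 million in SIPC customer advances

recoveries = trustee_recoveries + irs_settlement + sipc_advances
cost_ratio = estimated_total_costs / recoveries
print(f"{cost_ratio:.0%}")     # approximately 11 percent
```

Because the numerator is an estimate through 2014 and the denominator reflects only recoveries to date, the ratio will move as costs accrue and any further recoveries are secured.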
In a SIPA liquidation, each of the main parties—the trustee, SIPC, the bankruptcy court, and SEC—has a role in examining costs. These roles vary by the party and the stage of the proceeding. The Trustee noted that he had experience reviewing fee applications while previously at SEC and when serving as a U.S. trustee. He described a variety of ways by which he seeks to hold down expenses of the Madoff liquidation. The general process for approval of costs begins with the Trustee, who reviews them before submitting them to SIPC for its review, prior to submission to the bankruptcy court. For billings, the Trustee conducts a two-level review of Madoff-related time entries. Following completion of work, a mid-level attorney reviews the billings, and then a partner conducts another review. This second review is in tandem with SIPC, the Trustee told us. The purpose of these reviews is to determine whether too much time has been billed for a particular task. If so, it is written off, the Trustee said. Information the trustee's counsel produced for us, covering from inception through January 2011, showed about 1 percent of hours worked not being billed, with about another 1.5 percent of hours being written off after review. The Trustee also said that he does not bill for 5 percent to 10 percent of the time he spends on the case. The Trustee said a similar review of billings takes place for costs submitted by other law firms and consultants that the Trustee and trustee's counsel use in their work. The Trustee said that in some cases, amounts claimed are reduced. However, the Trustee did not have specific amounts for any such reductions. The outside entities also discount their billings at least 10 percent, as the Trustee and trustee's counsel do, with some providing 11 or 12 percent discounts. In addition to billing reviews, the Trustee also described other approaches intended to help ensure that costs incurred are necessary and reasonable.
Teams. Using teams, in which the same people work on similar matters, helps achieve greater consistency and efficiency. The Trustee uses teams for different tasks, such as motions, discovery, and litigation. For litigation, for example, the trustee's counsel has set up about 16 teams, which work on similar topics, such as employee-related matters, review of charities, or family-related matters. The teams follow cases from beginning to end, taking advantage of experience gained through the process and limiting additional costs that could occur if staff were assigned work in unfamiliar areas, according to the Trustee.
Digitizing information. Computerizing information as much as possible allows for faster, more efficient retrieval of information. This has involved significant up-front costs, but the Trustee noted that it reduces costs over time by avoiding the need to undertake time-consuming, expensive manual searches through thousands of boxes of paper material.
Budgeting work in advance. The Trustee said he uses a process in which consultants must fill out project forms and provide budgets, which are submitted to and must be approved by the trustee's counsel. SIPC also receives some of these budgets. When bills are received later, the trustee's counsel compares the amount claimed with the budget, and there have been instances in which costs exceeding budgeted amounts have been refused, according to the Trustee.
When the Trustee has completed in-house review of costs, he presents them to SIPC for review. The Trustee may hold informal discussions with SIPC before submitting actual costs for formal consideration. SIPC also may contact outside vendors directly to inquire about charges. SIPC does not pay for some charges, and following discussions with SIPC, the Trustee may decide to write off some costs, according to the Trustee.
As SIPC senior management said is typical, at the outset of the case, they sought and obtained a 10 percent reduction in the hourly rates of the Trustee and trustee’s counsel. According to our review, this 10 percent reduction has produced savings of $25 million through May 2011. In addition, the Trustee and trustee’s counsel have provided additional reductions of $5.4 million over costs they said would customarily be billed. Some have suggested that SIPC should have sought a discount greater than 10 percent. For example, the SEC Office of Inspector General has reported that an SEC bankruptcy attorney raised questions whether a 10 percent discount for SIPC cases is sufficient. Similarly, the Inspector General noted to us that the Madoff case began during the recent financial crisis, when law firms’ business was suffering, and suggested that as a result, SIPC would have had strong leverage to negotiate lower compensation for firms. SIPC senior management, however, told us that a 10 percent reduction is appropriate for several reasons. Above that amount, service providers object, and a 15 percent discount is not economical for sophisticated work like that required in the Madoff case, according to SIPC senior management. Also, SIPC senior management noted that liquidation cases such as the Madoff matter draw highly qualified talent in opposing counsel, so that as a result, SIPC also must draw upon highly qualified providers. Furthermore, the 10 percent discount, coupled with “holdbacks”—in which payment of approved amounts is not released until later—amount to a significant burden on the service provider, according to SIPC. Finally, SIPC senior management said that the results the Trustee has produced to date support the costs incurred. For these reasons, SIPC has not sought a reduction greater than 10 percent in the Madoff case. In addition to the 10 percent discount, SIPC has also created guidelines for review and approval of costs. 
The guidelines cover matters including obtaining a fee discount; submitting costs; reviewing costs submitted; and documenting questions and discussions relating to the review of costs, including items flagged for attention or reduced or written off. Under these guidelines, a SIPC attorney reviews each time entry and expense item submitted, after which they prepare a memorandum to the SIPC general counsel, summarizing findings and making recommendations for approval. The general counsel is to review the memo and recommendations, before approving, modifying, or rejecting the cost request. SIPC senior management told us they followed the guidelines in the Madoff case. Also as part of this portion of the review process, SIPC’s general counsel makes a line-by-line review of Trustee and trustee’s counsel invoices, according to SIPC senior management. Because costs in the Madoff case are so much greater than in previous SIPC cases, the Trustee, working with SIPC, has established “litigation budgets” for the many lawsuits resulting from the case. These budgets detail expected costs of specific litigation, and for each case, divide tasks into specific categories, including research, drafting, motions, discovery, trial, appeal, and collection. According to SIPC senior management, this budgeting process is aimed at managing costs in advance or as they are being incurred, rather than after-the-fact. In addition, several SIPC personnel are in daily contact with the Trustee or trustee’s counsel. As a result, they are aware what activities are planned, and will discuss them ahead of time. They also discuss possible future actions. SIPC senior management told us the Trustee has revised certain planned actions or changed direction as a result of such discussions during the case. We sought details on the extent to which SIPC has reduced or disallowed expenses submitted by the Trustee. 
SIPC senior management declined to provide documentation of its cost reviews, citing attorney work-product privilege. According to SIPC, releasing such information could provide an unfair advantage to litigation opponents and undermine attempts to recover assets on behalf of customers. Similarly, SIPC noted that releasing the litigation budgets could allow an opposing party to see how much has been allocated for an activity in litigation, which can provide a tactical advantage to opposing parties. However, the trustee’s counsel provided us information on amounts written off at SIPC’s request. According to this information, the trustee’s counsel has written off, at SIPC’s request, less than 1 percent of hours submitted. For the Trustee, SIPC-requested reductions to billings have been about 0.02 percent, according to the information provided by the trustee’s counsel. SIPC senior management also declined to provide other cost review-related information we requested, again citing attorney work-product privilege. In the Madoff case, the bankruptcy court has a limited ability to oversee costs. As noted earlier, the Trustee and trustee’s counsel submit legal costs to SIPC, which reviews them, before filing with the bankruptcy court a recommendation on what the court should approve. SIPC files the recommendation after the trustee and trustee’s counsel file their detailed cost applications with the court. Under SIPA, the court must approve cost applications if two conditions are met: (1) if there is no reasonable expectation of SIPC recouping its advances, and (2) if SIPC recommends to the court that it approve the costs requested by the trustee and trustee’s counsel. In the Madoff case, both conditions have been met. For the first condition, SIPC does not anticipate recouping its administrative advances because it expects that recoveries by the Trustee will be insufficient to cover all approved customer claims. 
SIPC senior management told us that they have opted to devote all asset recoveries—of both investor funds and sale of assets of the Madoff firm itself—to repaying approved customer claims. If a trustee can recover assets that exceed the amount of allowed customer claims, SIPC has a priority claim on the excess assets, in order to recoup its advances to cover liquidation costs. However, based on expected recoveries in the Madoff case, SIPC senior management does not expect there will be any excess assets. The current Trustee estimate of allowed claims is $17.3 billion, compared with $9.1 billion in Trustee recoveries and settlements. Thus, about $8.3 billion in additional recoveries would be needed, and based on current Trustee assets, lawsuits filed, and the estimated possibilities for recoveries arising from that litigation, SIPC senior management does not now expect this level of recoveries to occur. For the second condition, SIPC has recommended that the bankruptcy court approve the legal costs requested in the applications submitted to the bankruptcy court. SIPC senior management told us that the statute requires the court to defer to SIPC’s judgment on the appropriateness of expenses because it is SIPC that faces the economic risk of covering the costs in situations where SIPC does not expect recoveries to be sufficient to recoup its advances. For the first seven rounds of approved cost applications to date, the bankruptcy court has approved all of the legal compensation and expense requests submitted by SIPC for the Trustee and trustee’s counsel. Although the court has been obliged to approve the cost applications because the two conditions have been met, the judge has said in hearings on the applications that notwithstanding the statutory requirement, he would nevertheless approve the costs on the basis of the work performed. 
Although SEC has oversight authority over SIPC, it does not have a direct role in approving costs incurred in any particular SIPC liquidation. Instead, fee exams typically take place as part of its general examinations of SIPC, and SEC officials told us they plan a review of Trustee and trustee’s counsel costs in coming months. In a March 2011 report, the SEC Inspector General noted that the Madoff and Lehman Brothers cases—the two largest liquidations in SIPC history—had focused new attention on concerns about the amount of trustee fees. The report made recommendations to improve SEC’s oversight of SIPC liquidation costs. For example, the report recommended that SEC encourage SIPC to negotiate more vigorously with court-appointed trustees to obtain fee reductions greater than 10 percent and to develop a more regular process for monitoring SIPC’s oversight of costs, rather than relying on examinations that do not occur regularly. The report also asked SEC to assess whether SIPA should be modified to allow bankruptcy judges presiding over SIPA liquidations to assess the reasonableness of administrative costs in cases in which SIPC pays the costs. The respective units of SEC indicated they concurred with these recommendations, and SEC officials told us that formal responses are being prepared. Trustees for SIPA liquidations generally have the same duties as trustees for liquidations under chapter 7 of the Bankruptcy Code. Under this chapter, a trustee must make certain information disclosures, including: furnishing information about the estate and the estate’s administration as requested by parties in interest, unless such disclosure is restricted by a court order; providing periodic reports and summaries of the operation of the bankrupt firm if it continues operating; and making a final report and filing a final account of the administration of the estate with the court and with the U.S. trustee. 
SIPA directs a trustee to make the disclosures required under chapter 7 but also directs a trustee to include in such reports information on progress made in distributing cash and securities to customers. In addition, SIPA directs the trustee to promptly investigate the acts, conduct, property, liabilities, and financial condition of the firm being liquidated and report this information to the court. The trustee must also report to the court any facts learned by the trustee regarding fraud, misconduct, mismanagement, and irregularities, and any causes of action available to the estate as a result; and, as soon as practical, submit a statement of the investigation. Through a variety of means, the Trustee has made disclosures that address the statutory requirements. As of January 2012, the Trustee had issued six interim reports to the bankruptcy court that outline progress made in liquidating the Madoff firm. These interim reports have been filed approximately every 6 months. The first report, filed in July 2009, gave the status of the Trustee’s activities in administering the estate, his progress in addressing customer claims, and results to date from his investigation of the Madoff firm’s activities. The report also included a discussion of Madoff’s fraudulent scheme, including his admitting to soliciting billions of dollars under false pretenses and failing to invest customer funds as promised. The Trustee said that extensive investigation of the firm’s financial affairs inside and outside the United States revealed “a labyrinth of interrelated international funds, institutions, and entities of almost unparalleled complexity and (breadth).” The Trustee also noted that he was providing information to, and coordinating efforts with, other parties investigating the firm, including SEC, the Federal Bureau of Investigation, the U.S. Attorney’s Office, and other regulators. 
The Trustee’s other five interim reports provide information on similar issues, including the status of the investigations. The Trustee has also provided various records to the court, as part of litigation involving his activities, which provide disclosures of the type required under the Bankruptcy Code and SIPA. For example, in a motion filed in October 2009 asking the bankruptcy court to affirm the use of NIM in determining customer claims, he included the report of a consultant hired to review the Madoff firm’s activities in detail. This report described, among other things, how little trading was done as part of the investment advisory business, and it also included statements from a Madoff firm employee who admitted to creating fake investment positions that were reported to customers on their statements. The Trustee also has provided information to individual Madoff customers. To address customer claims, the Trustee told us that he provided determination letters to Madoff customers, showing individual account transactions and how net equity for their accounts was determined. For customers with questions about their claims determinations, the trustee’s counsel was available to provide additional information, which in some cases, involved sharing information contained in the records maintained by the Madoff firm. The Trustee has also provided information to former Madoff account holders seeking information necessary for tax returns or for filing fraud claims under homeowner’s insurance coverage. The Trustee told us he has not provided information about the fraud in general because individual customers do not need such information to have their claims processed. 
To facilitate access to customer records, the Trustee has created an “electronic data room.” Initially, access was limited to customers sued by the Trustee that were determined to be net winners—those who withdrew more than they invested—and who were deemed to have acted in good faith without knowledge of the fraudulent nature of the firm’s activities. In January 2012, the bankruptcy court judge granted a motion by the Trustee to expand access to the data room to attorneys for nongood faith defendants with whom the Trustee is in litigation. In addition, the Trustee maintains a public website that contains a large volume of information about the case. It includes a timeline of the liquidation and provides data on the amount of customer assets recovered, amounts distributed to customers, and amounts committed by SIPC to date. The website also includes more than 600 selected court filings, which are provided in a searchable database with the original documents available for download. These documents date to the start of the Madoff liquidation in December 2008. All six interim reports filed by the Trustee, plus amendments, are included. In addition, the website provides information on the claims process, including links to SIPA, SIPC, and orders of the bankruptcy court. The website also has a page for the Trustee’s hardship program, under which the Trustee does not seek to recover assets from customers suffering from particular financial or other hardships. Recently, expert reports produced as part of the Trustee’s investigation have been made public, which the Trustee said contain extensive details on proof of fraud at the Madoff firm and its subsequent insolvency. Although some parties have argued that the Trustee’s disclosures have not met statutory requirements, SIPC and SEC officials told us they view the Trustee’s disclosures to date as sufficient. 
SIPC senior management told us that early in the case, the Trustee did not release many details, to avoid tipping off potential civil and criminal defendants that would become targets of legal actions. More recently, according to SIPC, as that concern eased, the Trustee has been doing an exemplary job in releasing information relevant to account holders and the public. SIPC senior management told us they expect the Trustee will file a complete report of his investigative activities after officials are satisfied that legal actions and investigations will not be endangered. While the Trustee has not yet issued such a report, complaints filed in the case provide a considerable amount of information that will eventually be released, they said. SEC officials also told us that while the statute provides no standards for the extent of disclosures that must be made, the Trustee has made considerable information available, which appears to be complete for the relevant topics. The officials said some may not like decisions the Trustee has made, but there has been no lack of information about them. An attorney representing former Madoff customers offered a different view to us, saying the Trustee has not provided information critically important for account holders making claims and those who are the subject of clawback actions by the Trustee. In particular, according to the attorney, while the Trustee has asserted that all reported trading activity was fictitious, and that the Madoff investment advisory arm operated independently from the rest of the Madoff firm, that cannot be established from information released thus far. The attorney told us he believed that more complete disclosure would show at least some legitimate trading activity on behalf of customers, which is important because investment returns from that activity would affect claims determination and what amounts the Trustee could seek to claw back.
In April 2010, attorneys representing various Madoff customers filed a motion with the bankruptcy court to compel additional disclosures by the Trustee, arguing that reports filed “discuss the nature of his investigation in sweeping terms, with a bare minimum of detail and only conclusory statements about what has actually been uncovered.” However, the Trustee told us he believes he has made great efforts to respond to the public, noting his interim reports, a recent redesign of the website, and his attempts to update case statistics at least every couple of weeks. By contrast, he said in a typical chapter 7 bankruptcy case, the only information available would be documents filed with the court. According to the Trustee, there is no information, other than litigation-related, that individual account holders might want but have not been able to get. In his brief opposing the customers’ motion to compel additional reporting, the Trustee said that in his many filings seeking recovery of customer assets, he has detailed the Madoff fraud and identified those he alleges were involved or knew of the fraud. He argued that the customers seeking additional disclosure were seeking to sidestep the Federal Rules of Civil Procedure and Bankruptcy Procedure that govern disclosure in litigation. These rules, covering what is known as the “discovery process,” address matters such as parties making inquiries of each other and requests for the production of documents. As litigation proceeds, customers seeking greater disclosure will receive information through the discovery process and will have the opportunity to access and challenge the Trustee’s evidence, according to the Trustee. 
The attorney representing former Madoff customers, however, told us that while the discovery process will provide an opportunity for disclosure of some information, that process will be prohibitively expensive for many customers, and in any case, is not likely to develop case-wide information of value to all customers. The bankruptcy judge denied the customers’ motion to compel additional disclosures from the Trustee, calling the action a discovery dispute, rather than a failure by the Trustee to follow the statute. He said the Trustee has satisfied his disclosure obligations under SIPA and the Bankruptcy Code “by creating a thorough and specific record regarding Madoff’s fraud.” Affirming the Trustee’s position, the judge said the demands will be satisfied during court-regulated discovery as litigation proceeds. The customers filed a motion for leave to appeal the bankruptcy court ruling, which the district court denied. When a broker-dealer firm fails, and large sums of customer assets could be at risk, SIPC must move quickly to appoint a trustee and trustee’s counsel in order to safeguard those assets and to maximize the possibility of any recoveries for customers. Toward that end, SIPC maintains an informally assembled roster of candidates, and its senior management confers before the SIPC president uses his professional judgment to select a trustee. In the Madoff fraud, SIPC moved to appoint a trustee and trustee’s counsel within hours after Madoff was taken into custody. Notwithstanding the need to move quickly, however, our review identified two areas in which SIPC’s selection process could be improved. First, while SIPC seeks to identify potential trustees for its liquidations, it lacks a formal, documented outreach procedure for identifying those candidates. 
Although SIPC believes the field of broker-dealer bankruptcy is sufficiently small that the relevant parties are known, undertaking additional efforts to more systematically identify candidates would help ensure that the range of choices reflects the widest capabilities available in the most cost-effective fashion. Such outreach efforts could be tailored for SIPC’s purposes, so that they are not excessively time-consuming or resource-intensive. Second, while SIPC draws on the experience and expertise of its senior management in selecting trustees, that process, including criteria for selection, is not documented or transparent. This lack of transparency can contribute to questions and concerns about SIPC’s decisions. Better documentation of the selection process and criteria could help address some of these concerns. To help ensure that the pool of providers that could be employed in SIPC liquidations is as broad as reasonably possible, and to improve the transparency of SIPC’s selection of trustee and trustee’s counsel for liquidations, the SEC Chairman should take the following two actions: 1. Advise SIPC to document its procedures for identifying candidates for trustee or trustee’s counsel, and in so doing, to assess whether additional outreach efforts should be adopted and incorporated. 2. Advise SIPC to document its procedures and criteria for appointment of a trustee and trustee’s counsel for its cases. We provided a draft of this report to SEC, SIPC, and the Trustee for their review and comment, and we received written comments from SEC and SIPC, which are reprinted in appendixes IV and V, respectively. In their comments, SEC and SIPC concurred with our recommendations. The director of SEC’s Division of Trading and Markets said the division will recommend that the SEC Chairman implement our recommendations. The SIPC President said SIPC will make plans to implement them immediately. 
Regarding documenting and assessing its outreach efforts for identifying trustee and trustee’s counsel candidates, the SIPC President said such efforts may lead to expansion of its file of potential service providers and thus allow SIPC to choose from a broader base. To achieve this, he indicated SIPC will explore expanding SIPC’s contacts with relevant professional organizations, to locate qualified people and firms that SIPC has not previously encountered. Regarding documenting the process by which SIPC designates a trustee and trustee’s counsel, the SIPC President’s letter indicated that there is nothing in our recommendation that would delay or slow SIPC’s progress, and that the need for transparency can be achieved as well. The SIPC President also said that in implementing both recommendations, SIPC will consult with SEC. SIPC, SEC, and the Trustee also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the SEC Chairman, the SIPC President, and the Trustee for the Madoff liquidation. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. This report discusses (1) how the Trustee and trustee’s counsel were selected for the Bernard L. Madoff Investment Securities, LLC liquidation; (2) the process and reasoning for the selection of “net investment method” (NIM) in determining customer claims arising from the Madoff fraud; (3) the costs of the subsequent liquidation of the Madoff firm; and (4) the information that the Trustee has disclosed about his investigation and activities. 
To examine how the Trustee and trustee’s counsel were selected for the Madoff liquidation, we reviewed the requirements of the Securities Investor Protection Act (SIPA) for the selection of a trustee, plus court filings, correspondence and records of the Securities Investor Protection Corporation (SIPC), Standards for Internal Control in the Federal Government, the Internal Control – Integrated Framework of the Committee of Sponsoring Organizations of the Treadway Commission, biographical information for the Trustee, and relevant portions of the Bankruptcy Code. We also interviewed SIPC senior management, officials of the Securities and Exchange Commission (SEC) and the SEC Office of Inspector General (SEC IG), and the Trustee and members of the trustee’s counsel law firm. To examine the process and reasoning for the selection of NIM, we reviewed court filings, in particular those related to the Madoff fraud; and the Trustee’s determination to use NIM, as well as a subsequent challenge to that decision. We examined SIPC correspondence and records, including information on open and closed SIPC cases (Ponzi scheme cases in particular), and customer claims under NIM and the final statement method (FSM). We also reviewed SIPC rules, annual reports, and board meeting minutes. We reviewed SEC correspondence and records, including consideration of net equity methods, arguments presented to the agency in support of FSM, and commission meeting minutes. We reviewed findings of the SEC IG. Additionally, we interviewed SIPC senior management, SEC officials, the SEC Inspector General, and the Trustee and members of the trustee’s counsel law firm. 
To examine the costs of the Madoff liquidation, we analyzed cost information from interim reports submitted by the Trustee to the bankruptcy court, covering the period from December 2008 to September 2011; cost requests submitted by the Trustee and trustee’s counsel for approval by the bankruptcy court, covering the period from December 2008 through May 2011; and other records. We discussed with the Trustee, trustee’s counsel, and SIPC their process for verifying costs submitted. Because this cost information is prepared for or approved by the bankruptcy court, we determined that no additional steps were necessary to assess its reliability and that these data were sufficiently reliable for our purposes of identifying total costs, cost components, and trends. We also reviewed SIPA provisions related to review and approval of legal costs, and SIPC guidance on trustee compensation and review of liquidation costs. We reviewed an American Bar Association model rule on the reasonableness of legal fees, and an SEC IG report on SEC’s oversight of SIPC costs. In addition, we interviewed SIPC senior management, SEC officials, the SEC IG, and the Trustee and members of the trustee’s counsel law firm generally on the topic of Madoff liquidation costs. To examine what information the Trustee has disclosed about his investigation and activities, we reviewed SIPA’s disclosure requirements and the duties of trustees under chapter 7 of the Bankruptcy Code. We also reviewed court filings related to the Trustee’s disclosures of information and his interim reports. We examined information the Trustee has made public about the investigation, including material on his website. In addition, we interviewed SIPC senior management, SEC officials, and the Trustee and members of the trustee’s counsel law firm. We conducted this performance audit from October 2011 to March 2012 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform our audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 1990, the Securities Investor Protection Corporation (SIPC) has assessed its member broker-dealers varying rates to support the fund used to protect customers of failed securities firms. Over this period, members have paid assessments to the fund based on different percentages of either their gross revenues or net operating revenues, or have paid a flat-rate amount. Customer claims in a Securities Investor Protection Act (SIPA) liquidation are based on customers’ “net equity” as of the filing date (Dec. 11, 2008, in the Madoff case). The statute generally provides that net equity is what would have been owed to the customer if the broker-dealer had liquidated the customer’s “securities positions,” less any obligations of the customer to the firm. Overall, each customer’s net equity determines the value of each claim. In particular, it determines their pro rata share from the customer property portion of the insolvent broker-dealer’s estate, as well as the amount of any advance payment from the Securities Investor Protection Corporation (SIPC) fund to which the customer may be entitled. The Trustee chose the “net investment method” (NIM), which focuses on investments made and not profits reported, to determine net equity. Claimants challenged the method, and it was upheld first by the U.S. Bankruptcy Court for the Southern District of New York. Later, the U.S. Court of Appeals for the Second Circuit affirmed the bankruptcy court decision. The discussion that follows covers the Trustee’s reasoning, the positions of the other parties, and the two judicial decisions. 
The issue of how to determine net equity in the Madoff case primarily involved a choice between two methods with different impacts on the two main classes of customers. As is generally true of Ponzi scheme frauds, the Madoff claimants were “net winners” or “net losers.” The net winners were those customers who had withdrawn the full cash amount they had invested in the Madoff firm before its collapse, plus some “profit” (that is, fictitious gains that actually came from funds provided for investment by others). The net losers were customers who had paid in more than they had withdrawn at the time the Madoff firm collapsed. Securities Investor Protection Corp. v. Bernard L. Madoff Investment Securities LLC (In re Bernard L. Madoff Investment Securities LLC), 424 B.R. 122 (Bankr. S.D.N.Y. 2010). The two competing methods for calculating net equity were NIM and the “final statement method” (FSM). NIM calculates what customers are owed as the amounts they invested, less amounts withdrawn. FSM calculates net equity based on the amounts shown as customers’ securities positions on the last statements received from the broker-dealer firm; in the Madoff case, as of November 30, 2008. SIPC and the Securities and Exchange Commission (SEC) both supported the Trustee’s selection of NIM, while other claimants argued for use of FSM. These claimants, most of whom were net winners, challenged the Trustee’s choice of NIM. The legal arguments of the parties are reflected in the bankruptcy court and Court of Appeals opinions. In addition, the bankruptcy court opinion included an exhibit that outlined the competing arguments in detail. 
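The two methods reduce to simple arithmetic over a customer’s account history. The following sketch uses purely hypothetical dollar figures, not drawn from the case record, to illustrate how NIM and FSM can diverge for a net loser and a net winner:

```python
def net_equity_nim(deposits, withdrawals):
    """Net investment method: total cash invested, less total cash withdrawn."""
    return sum(deposits) - sum(withdrawals)

def net_equity_fsm(final_statement_value):
    """Final statement method: the value shown on the customer's last statement."""
    return final_statement_value

# Hypothetical net loser: invested $500,000, withdrew $100,000,
# while the final statement showed a fictitious $900,000.
loser_nim = net_equity_nim([500_000], [100_000])  # 400,000 -- the real loss
loser_fsm = net_equity_fsm(900_000)               # 900,000 -- includes fictitious profit

# Hypothetical net winner: invested $200,000, withdrew $350,000.
# Under NIM the result is negative, so there is no net equity claim.
winner_nim = max(net_equity_nim([200_000], [350_000]), 0)  # 0
```

Under FSM, the same net winner would instead hold a claim for whatever fictitious balance the final statement showed, which frames the dispute discussed below.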
The issue of how to determine net equity in the Madoff case turned on two key SIPA provisions: One is the definition of “net equity” in section 16(11) of the act, which generally requires the trustee to determine a customer’s net equity by “calculating the sum which would have been owed by the debtor to such customer if the debtor had liquidated, by sale or purchase on the filing date . . . all securities positions of such customer . . . minus . . . any indebtedness of such customer to the debtor on the filing date . . .” (emphasis added). SEC’s position differed from the Trustee’s in one respect. SEC advocated adding an inflation adjustment to customers’ NIM claims, to compensate them for the time value of their money. It referred to this as the “constant dollar approach.” See 424 B.R. at 125, n. 8. Neither the bankruptcy court nor the Court of Appeals has addressed the merits of the SEC position thus far. The other is section 8(b) of the act, which requires the Trustee to determine net equity claims “insofar as such obligations are ascertainable from the books and records of the debtor or are otherwise established to the satisfaction of the trustee.” The Trustee, supported by SIPC and SEC, took the position that because the statements customers received from the Madoff firm were fictitious, they did not show “securities positions” that could be relied upon for purposes of the net equity determination. Instead, the only Madoff firm records that reflected reality were those recording the cash deposits and withdrawals of customers. Thus, the Trustee argued, the plain language of section 8(b) required the trustee to determine net equity based on these records, since they provided the only obligations that could be established from the Madoff firm’s books and records. Accordingly, in his view, NIM was the only legally permissible option. The Trustee also contended that fairness considerations strongly supported use of NIM. 
Using FSM would exacerbate Madoff’s fraud and enable some Madoff customers to retain “profits” that were in reality the misappropriated investments of other customers. Moreover, FSM would divert the limited customer assets available in the bankrupt estate by paying imaginary “profits” at the expense of reimbursing real losses. The Trustee also argued that using FSM could conflict with his obligation to recover the fictitious profits paid out by the Madoff firm through avoidance actions. “What The Customer Gets. A customer generally expects to receive what he believes is in his account at the time the stockbroker ceases business. But because securities may have been lost, improperly hypothecated, misappropriated, never purchased or even stolen, this is not always possible. Accordingly, when the customer claims for a particular stock exceed the supply available to the trustee in the debtor’s estate, then customers generally receive pro rata portions of the securities claims, and as to any remainder, they will receive cash based on the market value as of the filing date.” FSM advocates also argued that the profits Madoff reported, while fictitious, may have been withdrawn and spent years ago; that customers paid taxes on them; and they may have foregone other investment opportunities in reliance on investment results shown in their statements. Furthermore, they maintained that, at least in the case of advances from the SIPC fund, use of FSM would not limit payments to reimburse net losers for their losses. They viewed the SIPC fund as a payment source for customer claims that operated separately and independently from any customer assets in the bankrupt estate. Thus, all claimants, both net winners and losers, could potentially receive up to $500,000 from the SIPC fund without any decrease in customer property. 
Finally, both sides contended that judicial precedent dealing with SIPA liquidations involving Ponzi scheme cases (discussed in the following section) supported their calculation method. “The Court recognizes that the application of the Net Equity definition to the complex and unique facts of Madoff’s massive Ponzi scheme is not plainly ascertainable in law. Indeed, the parties have advanced compelling arguments in support of both positions. Ultimately, however, upon a thorough and comprehensive analysis of the plain meaning and legislative history of the statute, controlling Second Circuit precedent, and considerations of equity and practicality, the Court endorses the Trustee’s Net Investment Method.” Specifically, the court agreed with the Trustee that sections 16(11) and 8(b) of the act must be read together, so that net equity can be based on “securities positions” only to the extent that securities positions are “ascertainable from the books and records of the debtor” or “otherwise established to the satisfaction of the trustee.” The court further agreed that in a Ponzi scheme case like the Madoff fraud, where no securities were ever ordered or acquired, securities positions did not exist, and the Trustee cannot satisfy claims by relying upon fictitious account statements that provided fictitious securities positions. Instead, only cash deposits and withdrawals were verifiable from the books and records of the Madoff firm. The court also observed that customer expectations based on false account statements “do not apply where they would give rise to an absurd result.” Id. at 135. “SIPC payments therefore serve only to replace missing customer property and cannot be ascertained independently of the determination of the customer’s pro rata share of customer property. 
Accordingly, the SIPA statute does not allow bifurcation of the claims process, with customers recovering SIPC payments based on the Statement Method, and recovering customer property shares based on the Net Investment Method.” “While the Court recognizes that the outcome of this dispute will inevitably be unpalatable to one party or another, notions of fairness and the need for practicality also support the Net Investment Method.” “As distribution of customer property to the ‘equally innocent victims’ of Madoff’s fraud is a zero-sum game, equity dictates that the Court implement the Net Investment Method. Customer property consists of a limited amount of funds that are available for distribution. Any dollar paid to reimburse a fictitious profit is a dollar no longer available to pay claims for money actually invested. If the Statement Method were adopted, Net Winners would receive more favorable treatment by profiting from the principal investments of Net Losers, yielding an inequitable result.” 424 B.R. at 134. The bankruptcy court also agreed with the Trustee that NIM was more compatible with trustee avoidance powers under the Bankruptcy Code. “The Trustee relies on numerous cases, all holding that transfers made in furtherance of a Ponzi scheme, and specifically transfers of fictitious profits, are avoidable. The Net Investment Method harmonizes the definition of Net Equity with these avoidance provisions by similarly discrediting transfers of purely fictitious amounts and unwinding, rather than legitimizing, the fraudulent scheme. The Statement Method, by contrast, would create tension within the statute by centering distribution to customers on the very fictitious transfers the Trustee has the power to avoid.” Finally, the bankruptcy court concluded that judicial precedent involving Ponzi scheme cases, including In re New Times Securities Services, Inc., supported use of NIM in the Madoff liquidation. 
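The court’s zero-sum point can be made concrete with a toy pro rata calculation. The sketch below assumes a hypothetical pool of recovered customer property and invented claim amounts (none of these figures come from the actual case); it shows how allowing a fictitious-profit claim under FSM necessarily dilutes the shares paid on real losses:

```python
def pro_rata_shares(pool, claims):
    """Distribute a fixed pool of customer property in proportion to allowed claims."""
    total = sum(claims.values())
    return {name: pool * amount / total for name, amount in claims.items()}

pool = 600_000  # hypothetical recovered customer property

# Under NIM, net winners' fictitious profits are disallowed entirely,
# so the pool goes to real losses (here, it happens to cover them in full).
nim_shares = pro_rata_shares(pool, {"loser_A": 400_000, "loser_B": 200_000})
# loser_A: 400,000; loser_B: 200,000

# Under FSM, a net winner's fictitious final-statement balance competes
# for the same limited pool, shrinking every net loser's recovery.
fsm_shares = pro_rata_shares(
    pool, {"loser_A": 900_000, "loser_B": 450_000, "winner_C": 650_000}
)
# loser_A: 270,000; loser_B: 135,000; winner_C: 195,000
```

Every dollar paid to winner_C in the second scenario is a dollar diverted from claimants who lost principal, which is the arithmetic behind the court’s equity reasoning.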
New Times also concerned a SIPA liquidation arising out of a Ponzi scheme fraud. In New Times, some investors (known as “real securities claimants”) had been offered shares in real mutual funds, which the Ponzi schemer-debtor never purchased. Other investors (known as “fake securities claimants”) purchased shares in fictitious money market funds with fictitious names. The debtor generated monthly statements for both sets of investors that showed fictitious securities positions as well as interest and dividend earnings. The SIPA trustee in New Times treated the two sets of investors differently. He determined that for those investors whose fictitious statements reflected the purchase of real securities, their net equity for purposes of the act should be based on the positions shown in their statements—that is, he applied FSM. (This treatment was not before the court in New Times.) However, the trustee determined that for investors whose statements reflected earnings from the entirely fictitious funds, their net equity was limited to their initial investments—that is, he applied NIM to them. 371 F.3d 68 (2d Cir. 2004). This decision is often referred to as “New Times I” because of a somewhat related subsequent decision, In re New Times Securities Services, Inc., 463 F.3d 125 (2d Cir. 2006) (“New Times II”). The fake securities claimants appealed the trustee’s determinations to the federal district court. The district court sided with the investors, holding that their net equity should be calculated using FSM, recognizing the fictitious interest and dividend reinvestment earnings shown on their statements. The SIPA trustee then appealed the district court’s decision. SEC joined SIPC in maintaining that NIM should be used to determine the fictitious fund investors’ net equity. 
On appeal, the Court of Appeals for the Second Circuit endorsed the joint position of SIPC and SEC that net equity of the fake securities claimants should be based solely on their initial investments, excluding imaginary interest and dividends shown on the statements. The appeals court agreed that basing recoveries on fictitious interest and dividend amounts would be “irrational and unworkable.” In the Madoff litigation, both parties argued that New Times supported their position. The Madoff net winners argued they should be compared to the first group of New Times customers, who were supposedly invested in real mutual funds, because Madoff’s account statements showed positions in real securities. Because the real securities claimants in New Times had their net equity calculated by FSM, Madoff net winners argued they should likewise have their net equity calculated by FSM. Instead, the bankruptcy court endorsed the position of the Trustee, SIPC, and SEC by analogizing Madoff net winners to the fake securities claimants in New Times with their fictitious holdings, which led to NIM as the appropriate method by which to calculate their net equity. The court explained that the key precedent set by New Times regarding net equity analysis is that customer recovery cannot be based on account statements that contain numbers with no relation to reality, whether the securities are identifiable by name (as in Madoff) or not (as in New Times). Reliance on fraudulent promises in account statements, the court stated, would create “the absurdity of ‘duped’ investors reaping windfalls as a result of fraudulent promises.” The court also noted that the initial investments of real securities claimants in New Times were sufficient to acquire their initial securities, and subsequent statements listing earnings reflected actual market events. 
By contrast, initial investments by Madoff investors were “insufficient to acquire their purported securities positions, which were made possible only by virtue of fictitious profits . . . account activity was manipulated with the benefit of deliberately calibrated hindsight.” “Mr. Picard’s selection of the Net Investment Method was more consistent with the statutory definition of ‘net equity’ than any other method advocated by the parties or perceived by this Court. There was therefore no error. . . . The statutory definition of ‘net equity’ does not require the Trustee to aggravate the injuries caused by Madoff’s fraud. Use of the Statement Method in this case would have the absurd effect of treating fictitious and arbitrarily assigned paper profits as real and would give legal effect to Madoff’s machinations.” 654 F.3d at 235. “In holding that it was proper for Mr. Picard to reject the Statement Method, we expressly do not hold that such a method of calculating ‘net equity’ is inherently impermissible. To the contrary, a customer’s last account statement will likely be the most appropriate means of calculating ‘net equity’ in more conventional cases. We would expect that resort to the Net Investment Method would be rare because this method wipes out all events of a customer’s investment history except for cash deposits and withdrawals. The extraordinary facts of this case make the Net Investment Method appropriate whereas in many instances, it would not be. 
The Statement Method, for example, may be appropriate when securities were actually purchased by the debtor, but then converted by the debtor.” The Court of Appeals also rejected the FSM advocates’ characterization of SIPA as providing “an insurance guarantee of the securities positions set out in their account statements” which should “operate to make them whole from the losses they incurred as a result of Madoff’s dishonesty.” On the contrary, the Court of Appeals observed that SIPA did not necessarily protect against all forms of fraud committed by brokers or insure investors against all losses. The U.S. Court of Appeals for the Second Circuit has affirmed the Trustee’s use of NIM, but several legal issues remain. Courts have yet to rule on whether calculations of net equity under NIM should include an adjustment for inflation. A ruling supporting this “constant dollar” approach would stand to affect liquidation payouts for a significant number of Madoff customers. In addition, the Trustee is pursuing a large number of actions against Madoff net winners—known as clawbacks or avoidance actions—seeking to recover assets they received that exceeded their investments. The outcome of these actions likewise will affect liquidation payouts to Madoff customers. Finally, petitions seeking review of the appeals court’s net equity ruling have been filed with the U.S. Supreme Court. Id. at 239. In addition to the contact named above, Cody J. Goebel, Assistant Director; Rachel DeMarcus; Dean P. Gudicello; Daniel S. Kaneshiro; Jonathan M. Kucskar; Marc W. Molino; Barbara M. Roesmann; and Christopher H. Schmitt made major contributions to this report.
With the collapse of Bernard L. Madoff Investment Securities, LLC—a broker-dealer and investment advisory firm with thousands of clients—Bernard Madoff admitted to reporting $57.2 billion in fictitious customer holdings. The Securities Investor Protection Corporation (SIPC), which oversees a fund providing up to $500,000 of protection to qualifying individual customers of failed securities firms, selected a trustee to liquidate the Madoff firm and recover assets for its investors. The method the Trustee is using to determine how much a customer filing a claim could be eligible to recover—an amount known as “net equity”—has been the subject of dispute and litigation. This report discusses (1) how the Trustee and trustee’s counsel were selected, (2) why the method for valuing customer claims was chosen, (3) costs of the liquidation, and (4) disclosures the Trustee has made about his progress. GAO examined the Securities Investor Protection Act; court filings and decisions; and SIPC, Securities and Exchange Commission (SEC), and Trustee reports and records. GAO analyzed cost filings and interviewed SIPC, SEC, and SEC Inspector General officials, and the Trustee and his counsel. The Securities Investor Protection Corporation (SIPC) generally followed its past practices in selecting the trustee for the Madoff liquidation. SIPC maintains a file of trustee candidates from across the country, but given the anticipated complexities of the case, officials said the field of potential qualified trustees was limited. SIPC has sole discretion to appoint trustees and, wanting to act quickly, SIPC senior management considered four trustee candidates. After three of the four candidates were eliminated for reasons including having a conflict of interest or ongoing work on a large financial firm failure, SIPC selected Irving H. Picard, who has considerable securities and trustee experience. 
However, SIPC has not documented a formal outreach procedure for identifying candidates for trustee and trustee’s counsel, or documented its procedures and criteria for selecting persons for particular cases, as internal control standards recommend. Having such documented procedures could allow SIPC to better assess whether it has identified an optimal pool of candidates, and to enhance the transparency of its selection decisions. A key goal of broker-dealer liquidations is to provide customers with the securities or cash they had in their accounts. However, because the Trustee determined that amounts shown on Madoff customers’ statements reflected years of fictitious investments and profits, he chose to determine customers’ net equity using the “net investment method” (NIM), which values customer claims based on amounts invested, less amounts withdrawn. SIPC senior management and officials of the Securities and Exchange Commission (SEC)—which oversees SIPC—initially agreed on the appropriateness of NIM. Over the course of 2009, however, SEC officials continued to consider alternative approaches for reimbursing customers. Although some customers have challenged the Trustee’s use of NIM, two courts have held that the Trustee’s approach is consistent with the law and with past cases, with both courts indicating that using the values shown on customers’ final statements would effectively sanction the Madoff fraud and produce “absurd” results. In November 2009, SEC commissioners voted to support the use of NIM, but with an adjustment for inflation, in an approach known as the “constant dollar” method. However, after an SEC official’s conflict of interest was made public in February 2011, the SEC Chairman directed SEC staff to review whether the commission should revote on the constant dollar approach. The matter is currently pending. 
As of October 2011, costs of the Madoff liquidation reached more than $450 million, and the Trustee estimates the total costs will exceed $1 billion by 2014. Legal costs, which include costs for the Trustee and the trustee’s counsel, are the largest category. While the estimated total cost for the Madoff liquidation is double the total for all completed SIPC cases to date, the Trustee, SIPC, and SEC note that the costs reflect the unprecedented size, duration, and complexity of the Madoff fraud. SIPC senior management also said the liquidation costs are justified, as litigation the trustee has pursued has produced $8.7 billion in recoveries for customers to date. Through various reports, court filings, and a website, the Trustee has disclosed information about the status of the liquidation. SIPC senior management, SEC officials, and the U.S. Bankruptcy Court have concluded that the Trustee’s disclosures sufficiently address the requirements for disclosure under the Bankruptcy Code and the Securities Investor Protection Act. SEC should advise SIPC to (1) document its procedures for identifying candidates for trustee or trustee’s counsel, and in so doing, to assess whether additional outreach efforts should be incorporated, and (2) document a process and criteria for appointment of a trustee and trustee’s counsel. SEC and SIPC concurred with our recommendations.
According to NTSB’s aviation accident database, from 1998 to 2009 one large commercial airplane was involved in a nonfatal accident after encountering icing conditions during flight and five large commercial airplanes were involved in nonfatal accidents related to snow or ice on runways. Although there have been few accidents, FAA and others recognize that incidents are potential precursors to accidents. Data on hundreds of incidents that occurred during this period reveal that icing and contaminated runways pose substantial risk to aviation safety. FAA’s database of incidents includes 200 icing-related incidents involving large commercial airplanes that occurred from 1998 through 2007. These data covered a broad set of events, such as the collision of two airplanes at an ice-covered gate and an airplane that struck its right main gear against the runway and scraped its left wing down the runway for about 63 feet while attempting to land with ice accumulation on the aircraft. During this same time period, NASA’s Aviation Safety Reporting System (ASRS) received over 600 icing and winter weather-related incident reports involving large commercial airplanes. These incidents reveal a variety of safety issues, such as runways contaminated by snow or ice, ground deicing problems, and in-flight icing encounters. This suggests that risks from icing and other winter weather operating conditions may be greater than indicated by NTSB’s accident database and by FAA’s incident database. FAA officials point out that there is no defined reporting threshold for ASRS reports and that, because the reports are developed from personal narratives, they can be subjective. However, these officials agree that the ASRS events must be thoroughly reviewed and evaluated for content to determine their relevance to icing and the extent and severity of the safety issue. The contents of the ASRS data system also demonstrate the importance of aggregating data from all available sources to understand a safety concern. 
See table 1 for the number of icing and winter weather-related incident reports from ASRS for large commercial airplanes. While this testimony focuses on large commercial airplanes, I would like to note that from 1998 to 2007, small commercial airplanes and noncommercial airplanes experienced more icing-related accidents and fatalities than did large commercial airplanes, as shown in table 2. This is largely because, compared to large commercial airplanes, small commercial airplanes and noncommercial airplanes (1) operate at lower altitudes that have more frequent icing conditions, (2) have a higher icing collection efficiency due to their smaller scale, (3) are more greatly affected by ice as a result of their smaller scale, (4) tend to have deicing equipment rather than fully evaporative anti-icing equipment, (5) may not have ice protection systems that are certified, nor are they required to be, because the airplane is not approved for flight in known icing conditions, and (6) may not have ice protection systems installed. Following the 1994 fatal crash of American Eagle Flight 4184 in Roselawn, Indiana, FAA issued a multiyear plan in 1997 for improving the safety of aircraft operating in icing conditions and created a steering committee to monitor the progress of the planned activities. Over the last decade, FAA made progress on the implementation of the objectives specified in its multiyear plan by issuing or amending regulations, airworthiness directives (ADs), and voluntary guidance to provide icing-related safety oversight. 
For example, FAA issued three final rules on icing: in August 2007, a rule introduced new airworthiness standards to establish comprehensive requirements for the performance and handling characteristics of transport category airplanes in icing conditions; in August 2009, a rule required a means to ensure timely activation of the ice protection system on transport category airplanes; and in December 2009, a rule required pilots to ensure that the wings of their aircraft are free of polished frost. FAA also proposed an icing-related rule in November 2009, on which the public comment period closed February 22, 2010; this rule would require the timely activation of ice protection equipment on commercial aircraft during icing conditions and weather conditions conducive to ice formation on the aircraft. In addition, FAA is developing a proposed rule to amend its standards for transport category airplanes to address supercooled large drop icing, which is outside the range of icing conditions covered by the current standards. Since 1997, FAA has issued over 100 ADs to address icing safety issues involving more than 50 specific types of aircraft, including ADs that required the installation of new software on certain aircraft and another that required operators and manufacturers to install placards displaying procedures for use of an anti-icing switch on certain aircraft. Additionally, FAA has issued bulletins and alerts to operators emphasizing icing safety issues. As part of our ongoing review, we will conduct a more comprehensive evaluation of FAA’s progress on the implementation of the objectives specified in its multiyear in-flight icing plan. Among other things, we will also analyze the results of FAA’s surveillance activities related to monitoring air carriers’ compliance with existing regulations and ADs. FAA also provided funding for a variety of icing-related purposes. 
For example, FAA has supported NASA research related to severe icing conditions and National Center for Atmospheric Research (NCAR) research related to weather and aircraft icing. Furthermore, from 1999 to 2009, FAA provided almost $200 million to airports through the Airport Improvement Program (AIP) to construct deicing facilities and to acquire aircraft deicing equipment. See appendix I for a detailed listing of AIP icing-related funding by state, city, and year. Runway safety is a key concern for aviation safety and is especially critical during winter weather operations. For example, in December 2005, a passenger jet landed on a snowy runway at Chicago’s Midway Airport, rolled through an airport perimeter fence onto an adjacent roadway, and struck an automobile, killing a child and injuring 4 other occupants of the automobile and 18 airline passengers. According to the Flight Safety Foundation, from 1995 through 2008, 30 percent of global aviation accidents were runway-related, and “ineffective braking/runway contamination” is the fourth largest causal factor in runway excursions that occur during landing. In fiscal year 2000, FAA’s Office of Airport Safety and Standards initiated a program, which includes making funds available to airports through AIP, to accelerate improvements in runway safety areas at commercial service airports that did not meet FAA design standards. One such improvement is the engineered materials arresting system (EMAS), which uses materials of closely controlled strength and density placed at the end of the runway to stop or greatly slow an aircraft that overruns the runway. According to FAA, the best material found to date is a lightweight crushable concrete. To date there have been five successful EMAS captures of overrunning aircraft. Government and industry stakeholders, external to FAA, also contribute to the effort to increase aviation safety in winter weather/icing conditions. 
For example, NTSB investigates and reports on civil aviation accidents and issues safety recommendations to FAA and others, some of which it deems most critical and places on a list of “Most Wanted” recommendations. Since 1996, NTSB has issued 82 recommendations to FAA aimed at reducing risks from in-flight structural icing, engine and aircraft component icing, runway condition and contamination, ground icing, and winter weather operations. NTSB’s icing-related recommendations to FAA have called for FAA to, among other things, strengthen its requirements for certifying aircraft for flying in icing conditions, sponsor the development of weather forecasts that define locations with icing conditions, and enhance its training requirements for pilots. NTSB has closed 39 of these recommendations (48 percent) as having been implemented by FAA, and has classified another 25 (30 percent) as FAA having made acceptable progress. This combined 78 percent acceptance rate is similar to the rate for all of NTSB’s aviation recommendations. For more than 30 years, NASA has conducted and sponsored fundamental and applied research related to icing. The research addresses icing causes, effects, and mitigations. For instance, NASA has conducted extensive research to characterize and simulate supercooled large drop icing conditions to inform a pending FAA rule related to the topic. NASA participated in research activities, partially funded by FAA, that developed additional knowledge and strategies which allowed forecasters to more precisely locate supercooled large drop icing conditions. Furthermore, NASA has an icing program, focused generally on research related to the effects of in-flight icing on airframes and engines for many types of flight vehicles. NASA has developed icing simulation capabilities that allow researchers, manufacturers, and certification authorities to better understand the growth and effects of ice on aircraft surfaces. 
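The recommendation-status percentages cited above are simple ratios of the counts NTSB reports, which can be checked directly (figures taken from the text):

```python
total_recs = 82           # icing-related recommendations NTSB issued to FAA since 1996
closed_implemented = 39   # closed as implemented by FAA
acceptable_progress = 25  # classified as acceptable progress by NTSB

pct_closed = round(closed_implemented / total_recs * 100)
pct_acceptable = round(acceptable_progress / total_recs * 100)
pct_combined = round((closed_implemented + acceptable_progress)
                     / total_recs * 100)
print(pct_closed, pct_acceptable, pct_combined)  # 48 30 78
```

The remaining 18 recommendations (about 22 percent) are those NTSB has classified neither as implemented nor as showing acceptable progress.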
NASA also produced a set of training materials for pilots operating in winter weather conditions. In recent years, NASA’s funding decreased significantly, limiting the capability of its icing research program. NOAA, the National Weather Service (NWS), and NCAR have efforts directed and funded by FAA related to predicting the location and severity of icing occurrences. NWS operates icing prediction systems and NCAR conducts research to determine more efficient methods to complete this task. For example, in 2006, NCAR introduced a new Web-based icing forecast tool that allows meteorologists and airline dispatchers to warn pilots about icing hazards up to 12 hours in advance. NCAR developed this tool using FAA funding and NWS facilitates the operation of the new icing forecasting tool. NWS also posts on the agency’s Web site maps of current icing conditions, pilot reports, forecasts, and freezing level graphics. The private sector has also contributed to efforts to prevent accidents and incidents related to icing and winter weather conditions. For example, as shown in figure 2, aircraft manufacturers have deployed various technologies such as wing deicers, anti-icing systems, and heated wings. In addition, airports operate ground deicing and runway clearing programs that help ensure clean wings (see fig. 3) and runways. While critical to safe, efficient winter operations, these programs involve treating aircraft and airport pavement with millions of pounds of deicing and anti-icing compounds annually. According to the Environmental Protection Agency, these compounds contain chemicals that can harm the environment. Some airports can control deicing pollution by capturing the fluids used to deice aircraft using technologies such as AIP-funded deicing pads, where aircraft are sprayed with deicing fluids before takeoff and the fluids are captured and treated; drainage collection systems; or vacuum-equipped vehicles. 
Third-party contractors, rather than individual air carriers, are increasingly performing deicing operations at commercial airports. FAA does not currently have a process to directly oversee these third-party contractors but indicates that it has one under development. While FAA and others are undertaking efforts to mitigate the risks of aircraft icing and winter weather operations, through our interviews and discussions with government and industry stakeholders, we have identified challenges related to these risks that, if addressed by ongoing or planned efforts, could improve aviation safety. These challenges include (1) improving the timeliness of FAA’s winter weather rulemaking efforts; (2) ensuring the availability of adequate resources for icing-related research and development (R&D); (3) ensuring that pilot training is thorough, relevant, and realistic; (4) ensuring the collection and distribution of timely and accurate weather information; and (5) developing a more integrated approach to effectively manage winter operations. Improving the timeliness of FAA’s winter weather rulemaking efforts. FAA’s rulemaking, like that of other federal agencies, is a complicated, multistep process that can take many years. Nonetheless, NTSB, FAA, and we have previously expressed concerns about the efficiency and timeliness of FAA’s rulemaking efforts. In 2001, we reported that a major reform effort begun by FAA in 1998 did not solve long-standing problems with its rulemaking process, as indicated both by the lack of improvement in the time required to complete the rulemaking process and by the agency’s inability to consistently meet the time frames imposed by statute or its own guidance. External pressures—such as highly publicized accidents, recommendations by NTSB, and congressional mandates—as well as internal pressures, such as changes in management’s emphasis, continued to add to and shift the agency’s priorities. 
For some rules, difficult policy issues continued to remain unresolved late in the process. The 2001 report contained 10 recommendations designed to improve the efficiency of FAA’s rulemaking through, among other things, (1) more timely and effective participation in decision-making and prioritization; (2) more effective use of information management systems to monitor and improve the process; and (3) the implementation of human capital strategies to measure, evaluate, and provide performance incentives for participants in the process. FAA implemented 8 of the 10 recommendations. NTSB’s February 2010 update on the status of its Most Wanted recommendations related to icing characterized FAA’s related rulemaking efforts as “unacceptably slow.” In December 2009, at FAA’s International Runway Safety Summit, NTSB’s Chairman commented, “How do safety improvements end up taking 10 years to deliver? They get delayed one day at a time . . . and every one of those days may be the day when a preventable accident occurs as the result of something we were ‘just about ready to fix.’” In particular, NTSB has expressed concern about the pace of FAA’s rulemaking project to amend its standards for transport category airplanes to address supercooled large drop icing, which is outside the range of icing conditions covered by the current standards. FAA began this rulemaking effort in 1997 in response to a recommendation made by NTSB the prior year, and the agency currently expects to issue its proposed rule in July 2010 and the final rule in January 2012. However, until the notice of proposed rulemaking is published and the close of the comment period is known, it will be unclear as to when the final rule will be issued. Much of the time on this rulemaking effort has been devoted to research and analysis aimed at understanding the atmospheric conditions that lead to supercooled large drop icing. 
In 2009, FAA completed an internal review of its rulemaking process that concluded that several of the concerns from 1998 that led to the agency’s major reform effort remain issues, including inadequate early involvement of key stakeholders, inadequate early resolution of issues, inadequate selection and training of personnel involved in rulemaking, and inefficient quality guidance. According to FAA’s manager for aircraft and airport rules, the agency is taking steps to implement recommendations made by the internal review, such as revising the rulemaking project record form and enhancing training for staff involved in rulemaking. In addition, in October 2009, FAA tasked its Aviation Rulemaking Advisory Committee (ARAC) with reviewing its processes and making recommendations for improvement within a year. We believe these efforts have the potential to improve the efficiency of FAA’s rulemaking process. Recently, moreover, FAA has demonstrated a commitment to making progress on some high-priority rules that have languished for a long time. For example, FAA officials have said that they intend to expedite FAA’s rulemaking on pilot fatigue, which has been in process since 1992. The issue of insufficient rest emerged as a concern from NTSB’s investigation of the February 12, 2009, crash of Continental Connection/Colgan Air Flight 3407 near Buffalo, New York. Moreover, a capacity for progress in rulemaking will be critical because, as we have reported to this Subcommittee in our recent reviews of the transition to the Next Generation Air Transportation System (NextGen), many of the procedures proposed to safely enhance the efficiency and capacity of the national airspace system, both to address current delays and congestion and to accommodate forecasted increases in air traffic, will depend on the timely development of rules and standards. Ensuring the availability of adequate resources for icing-related R&D. 
NASA is a key source of R&D related to icing. The agency performs fundamental research related to icing in house and sponsors such research at universities and other organizations. According to NASA officials, possible areas for increased R&D support that could be helpful include pilot training, supercooled large drop simulation (both experimental and computational), engine icing, and the effects of icing on future aircraft wing designs. However, the amount of NASA resources (including combined amounts of NASA’s budget and funding from FAA for aircraft icing R&D at NASA facilities) and staffing for icing research have declined significantly since fiscal year 2005, as shown in figure 4. According to NASA officials, several factors contributed to the decline in available resources, including fiscal constraints on the overall federal budget, a shift in the Administration’s priorities for NASA, and a restructuring within NASA’s aeronautical programs to reflect the available resources and priorities. Because the outcomes of R&D are often required for the development of rules and standards, as well as for technological innovation, a decline in R&D resources can delay actions that would promote safe operation in icing conditions. In June 2008, FAA sponsored a symposium on fatigue management that provided an opportunity for subject matter experts to come together and discuss fatigue’s effects on flight crews, maintenance personnel, and air traffic controllers. NTSB believes that fatigue management plans may hold promise as an approach to dealing with fatigue in the aviation environment. However, NTSB considers fatigue management plans to be a complement to, not a substitute for, regulations to prevent fatigue. For further information about this testimony, please contact Gerald Dillingham at (202) 512-2834. 
Individuals making key contributions to this testimony included Laurel Ball, Shareea Butler, Colin Fallon, David Goldstein, Brandon Haller, David Hooper, Joshua Ormond, and Sally Moino. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Ice formation on aircraft can disrupt the smooth flow of air over the wings and prevent the aircraft from taking off or decrease the pilot's ability to maintain control of the aircraft. Taxi and landing operations can also be risky in winter weather. Despite a variety of technologies designed to prevent ice from forming on planes, as well as persistent efforts by the Federal Aviation Administration (FAA) and other stakeholders to mitigate icing risks, icing remains a serious concern. As part of an ongoing review, this statement provides preliminary information on (1) the extent to which large commercial airplanes have experienced accidents and incidents related to icing and contaminated runways, (2) the efforts of FAA and aviation stakeholders to improve safety in icing and winter weather operating conditions, and (3) the challenges that continue to affect aviation safety in icing and winter weather operating conditions. GAO analyzed data obtained from FAA, the National Transportation Safety Board (NTSB), the National Aeronautics and Space Administration (NASA), and others. GAO conducted data reliability testing and determined that the data used in this report were sufficiently reliable for our purposes. Further, GAO obtained information from senior FAA and NTSB officials, representatives of the Flight Safety Foundation, and representatives of some key aviation industry stakeholder organizations. GAO provided a draft of this statement to FAA, NTSB, and NASA and incorporated their comments where appropriate. According to NTSB's aviation accident database, from 1998 to 2009 one large commercial airplane was involved in a nonfatal accident after encountering icing conditions during flight and five large commercial airplanes were involved in nonfatal accidents due to snow or ice on runways. 
However, FAA and others recognize that incidents are potential precursors to accidents and the many reported icing incidents suggest that these airplanes face ongoing risks from icing. For example, FAA and NASA databases contain information on over 600 icing-related incidents involving large commercial airplanes. FAA and other aviation stakeholders have undertaken many efforts to improve safety in icing conditions. For example, in 1997, FAA issued a multiyear plan for improving the safety of aircraft operating in icing conditions and has since made progress on the objectives specified in its plan by issuing regulations, airworthiness directives, and voluntary guidance, among other initiatives. Other government entities that have taken steps to increase aviation safety in icing conditions include NTSB, which has issued numerous recommendations as a result of its aviation accident investigations, and NASA, which has contributed to icing-related research. The private sector has deployed various technologies on aircraft, such as wing deicers, and operated ground deicing and runway clearing programs at airports. GAO identified challenges related to winter weather aviation operations that, if addressed by ongoing or planned efforts, could improve safety. These challenges include (1) improving the timeliness of FAA's winter weather rulemaking efforts; (2) ensuring the availability of adequate resources for icing-related research and development; (3) ensuring that pilot training is thorough and realistic; (4) ensuring the collection and distribution of accurate weather information; and (5) developing a more integrated approach to effectively manage winter operations.
The Kennedy Center, established in 1964 as both a national cultural arts center and a memorial to the 35th President, opened in September 1971. Shortly thereafter, in 1972, the Secretary of the Interior, through NPS, assumed responsibility for building maintenance, security, interpretative, janitorial, and all other services necessary for the nonperforming arts functions of the Center. The Board, however, retained responsibility for all performing arts activities. The relationship was formalized in a July 1973 agreement between NPS and the Board. In the early 1990s, the Board petitioned Congress for complete control of all facility operations at the Center. In part, the Board based its request on the difficulty encountered in managing the Center under the dual responsibility established by the July 1973 agreement. In response, Congress, in the 1994 Amendments, transferred responsibility for the operation and maintenance of the facility from NPS to the Board and authorized appropriations to be made to the Board for this purpose. With the implementation of the Amendments the Board assumed responsibility for managing the day-to-day operation and maintenance related to the performing and nonperforming arts functions as well as the long-term care of the facility. The development of an overall organizational structure for the Kennedy Center was one item we discussed in a 1972 report. In discussing the direction in which the new Center could proceed, we stated that the Center should establish an organizational structure that clearly defines and specifically assigns responsibility for performance of functions while delegating appropriate authority to perform such functions. Subsequently, in a February 1993 report we noted that the Center did not have individuals on staff with certain professional and technical skills, such as a federal contracting officer or architects and engineers that would be associated with managing capital projects. 
However, our report noted that, on the basis of our discussions with Center officials, there appeared to be no reason that—with sufficient time and funding—the Center could not acquire the necessary management capability. In August and September 1995, a Center consultant evaluated the Center’s operational and maintenance functions to identify strategies for improving the efficiency and effectiveness of the facility management staff and operations. The consultant’s September 1995 report noted in part that there were no clear lines of responsibility within the existing facility management structure and that job descriptions were not clearly defined. The consultant recommended that mechanisms be developed to (1) establish clear lines of responsibility and authority, (2) consolidate all services related to the maintenance of the facility under one authority, and (3) develop specific and detailed job descriptions for each position. Further, the consultant’s report also noted that “An organized system should be developed for managing information concerning the facility operations to be used to monitor performance against established standards.” The objectives of our work were to develop information on the status of the Center’s efforts to (1) define and implement facility management positions; (2) develop or procure and implement a facility management system; and (3) develop facility project and financial reports, since the 1994 transfer of facility responsibilities from NPS to the Board. Therefore, we limited our inquiries to identifying the facility management positions that were created by the Center since the 1994 transfer. We also identified the management systems and reports that support the facility management positions. We did not attempt to assess whether the persons filling facility management positions had the expertise or experience necessary for those positions or whether the definitions of the roles or responsibilities of the positions were complete. 
Further, we did not assess whether the management systems and reports had the capability to or were being used properly to assist facility managers. We defined the scope of facility management functions in accordance with guidelines of the International Facility Management Association. The guidelines provide that facility management coordinates the physical workplace with the people and work of the organization. In general, the scope of responsibilities can begin with the parking lot and extend to the grounds; building exterior; building systems; building services; and the layout, furniture, and furnishing of staff work space. The Association notes that 8 groups of similar activities, comprising 41 responsibilities, are commonly involved in managing facilities. The eight groups involve real estate, long-range planning, space management, interior planning, interior installations, maintenance and operations, architecture and engineering services, and budgeting. To obtain information on the development of facility management positions and their associated roles and responsibilities, we obtained Center organizational charts, discussed the roles and responsibilities of each managerial position with the current occupant, and obtained the position description. Since our objective was to provide information on the status of the creation of positions, we did not assess the appropriateness of organizational structures. To obtain information on the status of changes in the facility management systems and reports, we interviewed Center officials; reviewed documentation prepared by the Center or its vendors and obtained information on implementation schedules. 
To gain an understanding of the potential assistance the systems and reports could provide managers, we interviewed Center officials; reviewed contracting records and vendor materials; and reviewed documentation provided by finance and project officials, which demonstrated the types of reports that have been designed and implemented. Since our objective was to provide information only on the status of systems and reports, not the accuracy of the output of the systems, we did not test the accuracy or the completeness of the outputs we obtained. We did our work between January and December 1997 in accordance with generally accepted government auditing standards. On February 17, 1998, we provided a draft of this report to the Chairman of the Kennedy Center for review and comment. The Center’s oral comments are discussed near the end of this report. Responsibility for the various facility management functions, transferred to the Center by the Amendments, is currently delegated to six facility management positions. Officials to whom we spoke told us that they do not anticipate a need for additional facility management positions. The Center’s facility management organization includes four managerial positions that either were transferred from NPS or were created and staffed shortly after the Amendments. A synopsis of the history of each position and the positions’ roles and responsibilities is presented below. Project Executive. On December 25, 1994, the NPS architect responsible for the capital work at the Center was transferred from NPS to the Center as the Project Executive responsible for the management of the capital program in the Center’s Project Management Office. 
The position description for the Project Executive summarized the roles and responsibilities as including (1) directing capital repair projects; (2) managing, along with the Controller and Director of Contracting, the obligation and control of funds appropriated for the capital repair program; and (3) serving as the principal advisor to senior Center managers on matters pertaining to the capital repair program and facility improvement program planning. Director of Facilities. From September 1995 to December 1996, the Project Executive, in addition to his role as Project Executive, was also responsible for the functions of this position. Effective December 3, 1996, the role was transferred to, and became an additional responsibility of, the Director of Security. On July 20, 1997, the Director of Security, who had performed the duties of Director of Facilities in an acting capacity, was appointed the Director of Facilities while retaining the responsibilities of Director of Security. Officials to whom we spoke told us that, although one person has been given responsibility for both positions to reflect the close relationship between facilities and security needs in regard to operation of the Center, neither position had been abolished. They said that the Center could at any time appoint separate individuals to each position. The position description for the Director of Facilities summarized the roles and responsibilities as principal advisor to the Vice President of Facilities (1) on all matters pertaining to facilities and infrastructure, (2) on all matters pertaining to maintenance and operations, and (3) for the development and justification of the Center’s annual utilities budget. Additionally, the Director is responsible for managing all security, fire, and life safety matters. Director of Contracting. On February 27, 1995, the Center posted a vacancy announcement to fill the position of Contracting Officer. 
The position description summarized the roles and responsibilities to include (1) the head of the contracting activity for the Center; (2) responsibility for the organization and management of the Office of Procurement; (3) management and control of the Center’s appropriated fund contracting procedures; and (4) management of all aspects of the procurement cycle, including planning, negotiation and administration of construction, personal services, technical services, maintenance, supply, and related contracts in accordance with the Federal Acquisition Regulation and Center guidance. The position was filled on June 11, 1995, by a contracting officer with prior experience in federal facilities contracting. Director of Security. On November 30, 1994, the Center employed its former Secret Service liaison as the Director of Security. The position description summarized the roles and responsibilities as including (1) serving as the principal advisor to Center management on matters affecting safety and security; (2) implementing and administering procedures affecting safety and security; (3) assisting the Contracting Officer in managing the contract guard force; (4) maintaining liaison with pertinent federal and local law enforcement authorities; and (5) managing the security budget. On July 20, 1997, the Director of Security was also designated as the Director of Facilities, thus combining the roles and responsibilities of both positions. However, Center officials told us that because neither position had been abolished, but simply had been staffed by the same individual, the Center could at any time appoint separate individuals to each position. Director of Auxiliary Services. On August 7, 1996, the position of Director of Auxiliary Services was established and filled. According to Center officials, this was an area of facility management that had not previously received sufficient attention. 
The document appointing the director outlined the position’s responsibilities as including consolidating responsibility for and overseeing the Center’s concessionaire operations, such as the parking contractor, shuttle bus service arrangements, and taxi dispatching services. Subsequently, the job description covering this position was expanded to include liaison with the contracted restaurant service. Vice President for Facilities. On September 27, 1996, the President of the Center announced the creation of a senior management structure that included a Vice President for Facilities. The announcement creating the Vice President for Facilities highlighted the importance of the new position to the Center’s strategic plans during the next few years. According to the announcement, among the responsibilities assigned to the new position were those of overseeing all appropriated funds operations and working with other departments to implement a host of new facility-related initiatives, including the expansion of the parking garage and the large-screen format theater. Further, the announcement established a formal management reporting structure in which the incumbents in the key facility management positions report to the Vice President for Facilities. The Center managers, including the Board, determined that the Center’s facility management program would be operated by a few managers supported by a small in-house staff and contractor technical staff. As a result, the Center relies on contractor employees for technical facility management expertise. To develop information on the Center’s use of contractor employees, we focused our inquiries on the contractor technical support that the Project Executive employs in managing the capital improvement program. Briefly, the management support supplied by contract employees includes the following: Management of construction work. In 1995, the Center entered into a Memorandum of Agreement with the U.S. 
Army Corps of Engineers - Baltimore District, for technical assistance, including architect-engineer contract management, project design reviews, awarding and managing construction contract(s), contract reviews for legal sufficiency, and other related services. Project design services. The Center currently retains the services of architect-engineer firms through a source selection panel process to prequalify firms. Prequalified firms are awarded a 5-year contract with a small monetary guarantee and placed on the contractor prequalified list. From this list, prequalified firms may be selected for engagements for new design work or to do design reviews of work done by others. Both types of work can be awarded to a firm under a task order issued under the 5-year contract. Other design work. The Center plans to continue employing the services of NPS under an existing Interagency Agreement, dated September 23, 1994. The scope of this work may include landscape design and site planning or work requiring previous experience at the Center. Administrative support. The Center has entered into a Memorandum of Understanding and Agreement with the General Services Administration under which the Center receives accounting services including, in part, accounting and financial reporting. In addition to the six facilities management positions, several committees that relate to facility management assist managers in establishing facility policy, coordinating facility work with other Center activities, and managing the day-to-day contractual aspects of facilities projects. The committees meet on a regular basis, and their membership ranges from Board members and senior management at the Operations Committee level to contract managers and technical support contractors at the Construction Coordinating Committee level. The committees include the following: Operations Committee. 
The Operations Committee, a committee of the Board, involves Board members and senior staff in quarterly briefings and presentations on policy issues or operating problems. In this regard, we were advised that the committee provides policy guidance, resolves the most serious issues requiring Board input, and functions as the Board’s eyes and ears in Center operations. Architectural Review Committee. This committee, also a committee of the Board, was previously referred to as the Fine Arts Review Committee. The committee reviews and provides guidance to staff on detailed aspects of project design and construction work, thus, according to officials, acting as the Board’s project oversight mechanism between Operations Committee meetings. The Committee recently focused on the Concert Hall renovation project, and officials told us that they expect the Committee to provide similar guidance on future projects. Vice Presidents’ Committee. The members of this committee are the Center’s president and six vice presidents. The meetings of these senior managers serve as the mechanism for elevating problems and policy questions concerning facility work to the attention of the president. Building Operations Coordinating Committee. This committee, headed by the Vice President for Facilities, focuses on facility projects’ progress, schedules, problems, or open issues requiring the vice president’s input or decision. The committee, which generally meets biweekly but meets weekly in response to issues such as budget preparation, includes facility managers and others, such as the Director of Production, whose responsibilities may affect or be affected by facility projects work. Construction Coordination Committee. This committee, under the Project Executive, includes Center project staff and, as needed, consultants and technical support staff in weekly meetings focused on resolving detailed issues involving progress and problems with contracts, scheduling, or future work. 
The Center has purchased and is currently implementing a CIFM system. Further, the Center developed in-house, and has implemented, a number of project status and financial tracking reports. On August 9, 1996, after announcing its intent in the Commerce Business Daily, the Center purchased a system to assist in managing the facility. The vendor’s literature for the procured CIFM system described a system directed toward control and management of an organization’s resources, including real estate, equipment, personnel, space, leases, maintenance, cabling, and project budgeting. The system has nine modules, each focused on one aspect of facility management, and affords managers the opportunity to produce various analyses of operations. Center officials provided us with the schedule for implementing the software modules listed in table 1. The officials advised us that they have focused implementation efforts on the Property Portfolio, Asset Manager, Maintenance Manager, and Preventive Maintenance modules. The information provided to us indicated that the first three modules are operational, with the Preventive Maintenance module anticipated to be operational by the end of the first quarter of 1998. Regarding the remaining modules, officials advised us that they have no current time frame for implementing them since they first focused on those modules that most affected the day-to-day operation and maintenance of the facility. In a related facility management matter, officials advised us that they have evaluated scheduling software for use in preparing a comprehensive facility utilization schedule/calendar. The schedule/calendar would include utilization of various segments of the facility, by a number of Center departments, and would reflect requirements of special events, theater events, rehearsal usage, meeting room reservations, public space usage, and temporary storage. The officials expect to have the software in use during the second quarter of 1998. 
Management Reports Developed In-house Designed To Track Use Of Appropriated Funds. The Center’s Project Management Office (PMO), Contracting Office, and Finance Office share responsibility for managing the funds appropriated for capital improvements. The PMO officials to whom we spoke provided us a list of the 11 management reports that have been developed and implemented to facilitate the tracking of federal funds. The following are examples of the reports and their purpose. Monthly Requisition Summary Report. This report is to capture all types of PMO contracting and purchasing information, including the purchase order number, requisition and obligation amounts, vendor, type of goods/services, and budget/function code. Since all types of contracting and purchasing actions are to be captured, the report includes administrative costs, construction contracts, design and consulting contracts, as well as the costs for contract staff, such as the Corps of Engineers. Architect/Engineer Contract Summary Report. This report is to present the entire history of a particular contractual relationship with the Center. The information contained in the report includes the contract start and end dates, the contract purpose, the original contract amount, any adjustments to the original amount, and the current adjusted amount of the contract. Payment Recommendation and Approval Report. This report is to present information required for processing and approving a contract payment request. The report also is to provide a contract’s historical payment record as well as the percentage of the contract work completed as of the last payment. We provided copies of a draft of this report to the Chairman, John F. Kennedy Center, for comment. On March 10, 1998, the Center’s Vice President for Facilities provided us with oral comments on the draft report. The Vice President advised us that the Center generally agreed with the information in the report. 
The Vice President also provided comments to clarify some of the information presented in the report, which we have incorporated where appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Subcommittee on Transportation and Infrastructure, and the House Committee on Transportation and Infrastructure, and the Chairman of the John F. Kennedy Center for the Performing Arts. Copies will be made available to others upon request. Major contributors to this report are listed in the appendix. If you have any questions about the report, please call me on (202) 512-8387. Ronald King, Assistant Director, Governmentwide Facility Management Issues Thomas Johnson, Evaluator-in-Charge Hazel Bailey, Communications Analyst John Parulis, Senior Evaluator
Pursuant to a congressional request, GAO reviewed the status of the John F. Kennedy Center for the Performing Arts' efforts to define and implement: (1) facility management positions; (2) a facility management system; and (3) facility project and financial reports. GAO noted that: (1) the Center managers, including the Board of Trustees, determined that the Center's facility management program would be operated by a few managers supported by a small in-house staff and contractor technical staff; (2) currently, six facility-related managerial positions have been established and, according to Center officials, they do not anticipate a need for additional positions in the future; (3) the six managerial positions include the Vice President for Facilities; the Directors of Contracting, Facilities, Security, and Auxiliary Services; and the Project Executive; (4) all but the Vice President for Facilities and the Director of Auxiliary Services positions were established in 1994 and 1995; (5) in August and September 1996, the Center created the Director of Auxiliary Services and Vice President for Facilities positions, respectively; (6) with the exception of the Vice President for Facilities, the managers in these positions use contractors to either support operations that they are responsible for, such as parking, or provide management support; (7) in the latter instance, the Project Executive--in so far as major construction projects are underway--uses contracted technical management expertise, particularly for project planning, design, construction, and construction management; (8) several committees have been established to assist in coordinating facility operations with performing arts schedules and to provide a forum for decisionmaking; (9) these committees bring together managers and staff from throughout the Center; (10) the committees consider a range of facility issues and problems, varying from those associated with the day-to-day execution of construction 
contracts to resolving policy-level issues such as the approval of the appropriated funds budget; (11) to provide facility operating information to key managers, the Center has purchased and is implementing a computer-integrated facility management (CIFM) system; (12) to date, the Center is progressing with implementation of four modules: Property Portfolio, Asset Manager, Maintenance Manager, and Preventive Maintenance; (13) GAO did not evaluate the usefulness of the system or the output that managers obtain because of the recent and ongoing implementation of the system; (14) in addition to the CIFM system, the Center staff developed 11 reports for use in managing appropriated funds; and (15) these reports are to provide managers with information for tracking items such as appropriated funds usage, contractor progress on work, and contract payment approvals.
In recent years, reservists have regularly been called on to augment the capabilities of the active-duty forces. The Army is increasingly relying on its reserve forces to provide assistance with military conflicts and peacekeeping missions. As of April 2003, approximately 148,000 reservists from the Army National Guard and the U.S. Army Reserve were mobilized to active duty positions. In addition, other reservists are serving throughout the world in peacekeeping missions in the Balkans, Africa, Latin America, and the Pacific Rim. The involvement of reservists in military operations of all sizes, from small humanitarian missions to major theater wars, will likely continue under the military’s current war fighting strategy and its peacetime support operations. The Army has designated some Army National Guard and U.S. Army Reserve units and individuals as early-deploying reservists to ensure that forces are available to respond rapidly to an unexpected event or for any other need. Usually, those designated as early-deploying reservists would be the first troops mobilized if two major ground wars were underway concurrently. The units and individual reservists designated as early-deploying reservists change as the missions or war plans change. The Army estimates that of its 560,000 reservists, approximately 90,000 have been individually categorized as early-deploying reservists or are assigned to Army National Guard and U.S. Army Reserve units that have been designated as early-deploying units. The Army must comply with the following six statutory requirements that are designed to help ensure the medical and dental readiness of its early-deploying reservists. All reservists, including early-deployers, are required to have a 5-year physical examination and to complete an annual certificate of physical condition. 
All early-deploying reservists are also required to have a biennial physical examination if over age 40, an annual medical screening, an annual dental screening, and dental treatment. Army regulations state that the 5- and 2-year physical examinations are designed to provide the information needed to identify health risks, suggest lifestyle modifications, and initiate treatment of illnesses. While the two examinations are similar, the biennial examination for early-deploying reservists over age 40 contains additional age-specific screenings such as a prostate examination, a prostate-specific antigen test, and a fasting lipid profile that includes testing for total cholesterol, low-density lipoproteins, and high-density lipoproteins. The Army pays for these examinations. The examinations are also used to assign early-deploying reservists a physical profile rating, ranging from P1 to P4, in six assessment areas: (a) Physical capacity, (b) Upper extremities, (c) Lower extremities, (d) Hearing-ears, (e) Vision-eyes, and (f) Psychiatric. (See app. II for the Army’s Physical Profile Rating Guide.) According to the Army, P1 represents a non-duty-limiting condition, meaning that the individual is fit for duty and possesses no physical or psychiatric impairments. P2 means a condition may exist; however, it is not duty-limiting. P3 or P4 means that the individual has a duty-limiting condition in one of the six assessment areas. P4 means the individual functions below the P3 level. A rating of either P3 or P4 puts the reservist in a nondeployable status or may result in the changing of the reservist’s job classification. Beginning in January 2003, early-deploying reservists with a permanent rating of P3 or P4 in one of the assessment areas must be evaluated by an administrative screening board—the Military Occupational Specialty/Medical Retention Board (MMRB). This evaluation determines if reservists can satisfactorily perform the physical requirements of their jobs. 
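The profile rule described above is simple: a reservist rated P3 or P4 in any of the six assessment areas is duty-limited and therefore nondeployable. The sketch below illustrates that rule; the data layout and function name are illustrative and are not identifiers from any Army system.

```python
# Illustrative sketch of the P1-P4 physical profile rule described above.
# Area names follow the report's list; everything else is an assumption.
AREAS = ("physical capacity", "upper extremities", "lower extremities",
         "hearing-ears", "vision-eyes", "psychiatric")

def duty_limited(profile):
    """True if any assessment area is rated P3 or P4 (duty-limiting),
    which places the reservist in a nondeployable status."""
    return any(rating >= 3 for rating in profile.values())

profile = dict.fromkeys(AREAS, 1)   # P1 in every area: fit for duty
print(duty_limited(profile))        # False
profile["hearing-ears"] = 3         # one duty-limiting condition
print(duty_limited(profile))        # True
```

A permanent P3 or P4 would additionally trigger the MMRB evaluation noted above; that administrative step is outside this sketch.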
The MMRB recommends whether a reservist should retain a job, be reassigned, or be discharged from the military. Army regulations that implement the statutory certification requirement provide that all reservists—including early-deploying reservists—certify their physical condition annually on a two-page certification form. Army early-deploying reservists must report doctor or dentist visits since their last examination, describe current medical or dental problems, and disclose any medications they are currently taking. (See app. III for a copy of the annual medical certificate—DA Form 7349.) In addition, the Army is required to conduct an annual medical screening for all early-deploying reservists. According to Army regulations, the Army is to meet the annual medical screening requirement by reviewing the medical certificate required of each early-deploying reservist. In addition, Army early-deploying reservists are required to undergo, at the Army’s expense, an annual dental examination. The Army is also required to provide and pay for the dental treatment needed to bring an early-deploying reservist’s dental status up to deployment standards—either dental class 1 or 2. (See table 1 for a general description of each dental classification.) According to Army officials, most of the 5-year and 2-year physical examinations, the dental examinations, and the dental treatments that have been performed were administered by military medical personnel. However, beginning in March 2001, the Army started outsourcing some examinations through the Federal Strategic Healthcare Alliance (FEDS_HEAL)—an alliance of private physicians and dentists and other physicians and dentists who work for the Department of Veterans Affairs and HHS’s Division of Federal Occupational Health. FEDS_HEAL is a program that allows Army early-deploying reservists to obtain required physical and dental examinations and dental treatment from local providers. 
The Army contracts and pays for these examinations. About 12,000 of these providers nationwide participate in FEDS_HEAL. The Army plans to increase its reliance on FEDS_HEAL to provide physical and dental examinations and dental treatment for early-deploying reservists. Medical experts recommend physical and dental examinations as an effective means of assessing health. For some people, the frequency and content of physical examinations vary according to the specific demands of their job. Because Army early-deploying reservists need to be healthy to fulfill their professional responsibilities, periodic examinations are useful for assessing whether they can perform their assigned duties. Furthermore, the estimated annual cost to conduct periodic examinations—about $140—is relatively modest compared to the thousands of dollars the Army spends for salaries and training of early-deploying reservists—an investment that may be lost if reservists cannot perform their assigned duties. Physical and dental examinations are geared towards assessing and improving the overall health of the general population. The U.S. Preventive Services Task Force and many other medical organizations no longer recommend annual physical examinations for adults—preferring instead a more selective approach to detecting and preventing health problems. In 1996, the task force reported that while visits with primary care clinicians are important, performing the same interventions annually on all patients is not the most clinically effective approach to disease prevention. Consistent with its finding, the task force recommended that the frequency and content of periodic health examinations should be based on the unique health risks of individual patients. 
Today, many health associations and organizations are recommending periodic health examinations that incorporate age-specific screenings, such as cholesterol screenings for men (beginning at age 35) and women (beginning at age 45) every 5 years, and clinical breast examinations every 3 to 5 years for women between the ages of 19 and 39. Further, oral health care experts emphasize the importance of regular 6- to 12-month dental examinations. Both the private and public sectors have established a fixed schedule of physical examinations for certain occupations to help ensure that workers are healthy enough to meet the specific demands of their jobs. For example, the Federal Aviation Administration requires commercial pilots to undergo a physical examination once every 6 months. U.S. National Park Service personnel who perform physically demanding duties have a physical examination once every other year for those under age 40, and on an annual basis for those over age 40. Additionally, guidelines published by the National Fire Protection Association recommend that firefighters have an annual physical examination regardless of age. In the case of Army early-deploying reservists, the goal of the physical and dental examinations is to help ensure that the reservists are fit enough to be deployed rapidly and perform their assigned jobs. Furthermore, the Army recognizes that some jobs are more demanding than others and require more frequent examinations. For example, the Army requires that aviators undergo a physical examination once a year, while marine divers and parachutists have physical examinations once every 3 years. While governing statutes and regulations require physical examinations at specific intervals, the Army has raised concerns about the appropriate frequency for them. 
In a 1999 report to the Congress, the Offices of the Assistant Secretaries of Defense for Health Affairs and Reserve Affairs stated that while there were no data to support the benefits of conducting periodic physical examinations, DOD was reluctant to recommend a change to the statutory requirements. The report stated that additional research was needed to identify and develop a more cost-effective, focused health assessment tool for use in conducting physical examinations for reservists—in order to ensure the medical readiness of reserve forces. However, as of February 2003, DOD had not conducted this research. For its early-deploying reservists, the Army conducts and pays for physical and dental examinations and selected dental treatments at military treatment facilities or pays civilian physicians and dentists to provide these services. The Army could not provide us with information on the cost to provide these services at military hospitals or clinics primarily because it does not have a cost accounting system that records or generates cost data for each patient. However, the Army was able to provide us with information on the amount it pays civilian providers for these examinations under the FEDS_HEAL program. Using FEDS_HEAL contract cost information, we estimate the average cost of the examinations to be about $140 per early-deploying reservist per year. We developed the estimate over one 5-year period by calculating the annual cost for those early-deploying reservists requiring a physical examination once every 5 years, calculating the cost for those requiring a physical examination once every 2 years, and calculating the cost for those requiring an initial dental examination and subsequent yearly dental examinations. The FEDS_HEAL cost for each physical examination for those under 40 is about $291, and for those over 40 is about $370. 
The Army estimates the cost of annual dental examinations under the program to be about $80 for new patients and $40 for returning patients. The Army estimates that it would cost from $400 to $900 per reservist to bring those who need treatment from dental class 3 to dental class 2. For the Army, there is likely value in conducting periodic examinations because the average cost to provide physical and dental examinations per early-deploying reservist—about $140 annually over a 5-year period—is relatively low compared to the potential benefits associated with such examinations. These examinations could help protect the Army’s investment in its early-deploying reservists by increasing the likelihood that more reservists will be deployable. This likelihood is increased when the Army uses examinations to identify early-deploying reservists who do not meet the Army’s health standards and are thus not fit for duty. The Army can then intervene by treating, reassigning, or dismissing these reservists with duty-limiting conditions—before their mobilization and before the Army needs to rely on the reservists’ skills or occupations. Furthermore, by identifying duty-limiting conditions or the risks for developing them, periodic examinations give early-deploying reservists the opportunity to seek medical care for their conditions—prior to mobilization. Periodic examinations may provide another benefit to the Army. If the Army does not know the health condition of its early-deploying reservists, and if it expects some of them to be unfit and incapable of performing their duties, the Army may be required to maintain a larger number of reservists than it would otherwise need in order to fulfill its military and humanitarian missions. 
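The roughly $140-per-year average can be reproduced with simple arithmetic from the FEDS_HEAL fees quoted above. The sketch below averages costs over one 5-year period, as the estimate describes; the 75/25 split between reservists under and over age 40 is an assumed illustrative share, not a figure from this report.

```python
# Back-of-the-envelope version of the 5-year cost averaging described above.
# Exam fees come from the FEDS_HEAL figures in the text; the age mix is an
# assumption made for illustration only.
YEARS = 5

def annual_cost(physical_fee, physicals_per_period):
    """Average yearly cost: physical exams plus one initial ($80) and
    four returning ($40) dental exams over the 5-year period."""
    dental = (80 + 40 * (YEARS - 1)) / YEARS
    return physical_fee * physicals_per_period / YEARS + dental

under_40 = annual_cost(291, 1)    # one 5-year physical: about $106/year
over_40 = annual_cost(370, 2.5)   # biennial physicals:  about $233/year

share_under_40 = 0.75             # assumed age mix, for illustration
blended = share_under_40 * under_40 + (1 - share_under_40) * over_40
print(f"${blended:.0f} per reservist per year")
```

Varying share_under_40 shows how sensitive the blended figure is to the age mix; under this assumed mix the result lands near the report's $140.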
While data are not available to estimate these benefits, the benefit associated with reducing the number of reservists the Army needs to maintain for any given objective could be large enough to more than offset the cost of the examinations and treatments. The proportion of reservists whom the Army maintains but who cannot be deployed because of their health may be significant. For instance, according to a 1998 U.S. Army Medical Command study, a “significant number” of Army reservists could not be deployed for medical reasons during mobilization for the Persian Gulf War (1990-1991). Further, according to a study by the Tri-Service Center for Oral Health Studies at the Uniformed Services University of the Health Sciences, an estimated 25 percent of Army reservists who were mobilized in response to the events of September 11, 2001, were in dental class 3 and were thus undeployable. In fact, our analysis of the available current dental examinations at the seven early-deploying units showed a similar percentage of reservists—22 percent—who were in dental class 3. With each undeployable reservist, the Army loses, at least temporarily, a significant investment that is large compared to the cost of examining and treating these reservists. The annual salary for an Army early-deploying reservist in fiscal year 2001 ranged from $2,200 to $19,000. The Army spends additional amounts to train and equip each reservist and, in some cases, provides allowances for subsistence and housing. Additionally, for each reservist it mobilizes, the Army spends about $800. If it does not examine all of its early-deploying reservists, the Army risks losing its investment because it will train, support, and mobilize reservists who might not be deployed because of their health. The Army has not consistently carried out the requirements that early-deploying reservists undergo 5- or 2-year physical examinations and the required dental examination. 
In addition, the Army has not required early-deploying reservists to complete the annual medical certificate of their health condition, which provides the basis for the required annual medical screening. Accordingly, the Army does not have current health information on early-deploying reservists. Furthermore, the Army does not have the ability to maintain information from medical and dental records and annual medical certificates at the aggregate or individual level, and therefore does not know the overall health status of its early-deploying reservists. We found that the Army has not consistently met the statutory requirements to provide early-deploying reservists physical examinations at 5- or 2-year intervals. At the seven Army early-deploying reserve units we visited, about 66 percent of the medical records were available for our review. Based on our review of these records, 13 percent of the reservists did not have a current 5-year physical examination on file. Further, the Army is also required to provide physical examinations every 2 years for Army early-deploying reservists over the age of 40. However, our review of the available records found that approximately 68 percent of early-deploying reservists over age 40 did not have a record of a current biennial examination. Army early-deploying reservists are required by statute to complete an annual medical certificate of their health status, and regulations require the Army to review the form to satisfy the annual screening requirement. In performing our review of the records on hand, we found that none of the units we visited required that their reservists complete the annual medical certificate, and consequently, no certificates were available for review. Furthermore, Army officials stated that reservists at most other units have not filled out the certification form and that enforcement of this requirement was poor. 
The Army is also statutorily required to provide early-deploying reservists with an annual dental examination to establish whether reservists meet the dental standards for deployment. At the seven early-deploying units that we visited, we found that about 49 percent of the reservists whose records were available for review did not have a record of a current dental examination.

The Army's two automated information systems for monitoring reservists' health do not maintain important medical and dental information for early-deploying reservists—including information on the early-deploying reservists' overall health status, information from the annual medical certificate form, dental classifications, and the date of dental examinations. In one system, the Regional Level Application Software, the records provide information on the dates of the 5-year physical examination and the physical profile ratings. In the other system, the Medical Occupational Database System, the records provide information on HIV status, immunizations, and DNA specimens. Neither system allows the Army to review medical and dental information for entire units at an aggregate level. The Army is aware of the information shortcomings of these systems and acknowledges that having sufficient, accurate, and current information on the health status of reservists is critical for monitoring combat readiness. According to Army officials, in 2003 the Army plans to expand the Medical Occupational Database System to provide access to current, accurate, and relevant medical and dental information at the aggregate and individual levels for all of its reservists—including early-deploying reservists. According to Army officials, this information will be readily available to the U.S. Army Reserve Command. Once available, the Army can use this information to determine which early-deploying reservists meet the Army's health care standards and are ready for deployment.
Army reservists have been increasingly called upon to serve in a variety of operations, including peacekeeping missions and the current war on terrorism. Given this responsibility, periodic health examinations are important to help ensure that Army early-deploying reservists are fit for deployment and can be deployed rapidly to meet humanitarian and wartime needs. However, the Army has not fully complied with statutory requirements to assess and monitor the medical and dental status of early-deploying reservists. Consequently, the Army does not know how many of them can perform their assigned duties and are ready for deployment.

The Army will realize benefits by fully complying with the statutory requirements. The information gained from periodic physical and dental examinations, coupled with age-specific screenings and the information that early-deploying reservists provide annually in their medical certificates, will assist the Army in identifying potential duty-limiting medical and dental problems within its reserve forces. This information will help ensure that early-deploying reservists are ready for their deployment duties. Given the importance of maintaining a ready force, the relatively low annual cost of about $140 to conduct these examinations is small compared with the thousands of dollars in salary and training costs that are lost when an early-deploying reservist is not fit for duty. The Army's planned expansion, in 2003, of an automated health care information system is critical for capturing the key medical and dental information needed to monitor the health status of early-deploying reservists.
Once this information is collected, the Army will have additional information to conduct the research suggested by DOD's Offices of Health Affairs and Reserve Affairs to determine the most effective approach, which could include the frequency of physical examinations, for determining whether early-deploying reservists are healthy, can perform their assigned duties, and can be rapidly deployed.

To help ensure that early-deploying reservists are healthy enough to carry out their duties, we recommend that the Secretary of Defense direct the Secretary of the Army to comply with existing statutory requirements by ensuring that (1) the 5-year physical examinations for early-deploying reservists under 40 and the biennial physical examinations for early-deploying reservists over 40 are current and complete; (2) all early-deploying reservists complete their annual medical certificate of health status and the appropriate Army personnel review the certificate; and (3) the required dental examinations and treatments for all early-deploying reservists are complete.

The Department of Defense provided written comments on a draft of this report, which are found in appendix IV. DOD concurred with the report's recommendations. DOD raised some concerns about our evaluation. For example, DOD stated that the intermittent use of the terms "The Army," "Reserve Component," and "Army Reserve" would lead to a misunderstanding of the organization of Army components. While DOD did not offer specific examples, we reviewed the draft to ensure that the terms were used appropriately and did not make any changes. DOD also raised the concern that we used a very narrow subject group that may not reflect a valid representative sample and that the report's findings could be incorrectly applied to the Army National Guard. As we noted in our draft report, our work was conducted at seven early-deploying U.S.
Army Reserve units—geographically dispersed in the states of Georgia, Maryland, and Texas—and our analysis of the information collected at these units is not projectable. Finally, DOD stated that methods for annually certifying physical condition could also include completing the statement of physical condition that is preprinted on the Personnel Qualification Record, and that we did not consider whether such alternatives were used for certification. During our visits we reviewed the medical files at all locations, reviewed the personnel files at one location, and interviewed the military personnel who were responsible for maintaining the records of early-deploying reservists at all locations. We were unable to find a single annual medical certificate that had been reviewed by military personnel to meet the statutory requirement. In addition, some military personnel were not aware of the requirement.

We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. Copies will also be made available to others on request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7101. Another contact and major contributors are listed in appendix V.

We reviewed statutes and Army policies and regulations governing annual medical and dental screenings and periodic physical and dental examinations. We obtained data from the Office of the Chief, U.S. Army Reserve on the physical and dental examinations performed since 2001 on early-deploying reservists. We reviewed our past reports that addressed medical and dental examinations. We conducted site visits to seven U.S. Army Reserve units located in Georgia, Maryland, and Texas, where we obtained and reviewed all available medical and dental records. There were 504 early-deploying reservists assigned to the seven units we visited.
Medical records for 332 reservists were available for our review. Army administrators told us that the remaining files were in transit, with the reservist, or on file at another location. Our analysis of the information gathered at these units is not projectable. We did not review medical or dental records at Army National Guard units, but we obtained information from the Guard on its medical policies.

To calculate an average annual cost to provide physical and dental examinations for Army early-deploying reservists, we obtained estimates from the Army's Federal Strategic Healthcare Alliance (FEDS_HEAL) administrator on the costs of outsourcing the examinations. We calculated the annual cost for those reservists requiring a physical examination once every 5 years and for those requiring a physical examination once every 2 years. In developing the annual cost estimate, we used DOD information on the proportion of Army reservists who are under 40 (approximately 75 percent) and over 40 (approximately 25 percent). We also included the initial dental examination cost and subsequent yearly dental examination costs. All costs were averaged over one 5-year period. The average annual cost does not include allowances for inflation, dental treatment, or specialized laboratory fees such as those for pregnancy, phlebotomy, and tuberculosis. We also obtained estimates of the cost to perform dental treatments from the Army Office of the Surgeon General and the Army Dental Command.

We obtained studies and information concerning the advisability of periodic physical and dental examinations from DOD, HHS's Office of Public Health and Science, the Centers for Disease Control and Prevention, and medical and dental associations. From these organizations we also obtained published common practices and standards concerning periodic medical and dental examinations, age and risk factors, and the value and relevance of patients' self-reporting of symptoms.
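As a rough illustration of the cost-averaging method described above, the calculation can be sketched in a few lines of Python. The 75/25 age split and the 5-year averaging window come from the methodology; the per-examination prices are hypothetical placeholders, since the report does not publish the FEDS_HEAL unit costs.

```python
# Sketch of the 5-year cost-averaging method. The exam prices below are
# hypothetical placeholders, NOT the actual FEDS_HEAL figures.
PHYSICAL_EXAM = 110.0    # hypothetical cost of one physical examination
DENTAL_INITIAL = 65.0    # hypothetical initial dental examination
DENTAL_ANNUAL = 35.0     # hypothetical subsequent yearly dental examination

UNDER_40_SHARE = 0.75    # ~75 percent of Army reservists are under 40
OVER_40_SHARE = 0.25     # ~25 percent are over 40

def average_annual_cost():
    """Average per-reservist cost per year over one 5-year period."""
    # Under 40: one physical examination per 5-year period.
    physicals_under_40 = 1 * PHYSICAL_EXAM
    # Over 40: one physical every 2 years -> 2.5 examinations per 5 years.
    physicals_over_40 = 2.5 * PHYSICAL_EXAM
    # Dental: an initial examination, then annual examinations in years 2-5.
    dental = DENTAL_INITIAL + 4 * DENTAL_ANNUAL
    five_year_total = (UNDER_40_SHARE * physicals_under_40
                       + OVER_40_SHARE * physicals_over_40
                       + dental)
    return five_year_total / 5

print(round(average_annual_cost(), 2))
```

As in the methodology, the sketch excludes inflation, dental treatment, and specialized laboratory fees.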
Upper extremities: Strength, range of motion, and general efficiency of upper arm, shoulder girdle, and upper back, including cervical and thoracic vertebrae.
P1: No loss of digits or limitation of motion; no demonstrable abnormality; able to do hand-to-hand fighting.
P2: Slightly limited mobility of joints, muscular weakness, or other musculoskeletal defects that do not prevent hand-to-hand fighting and do not disqualify for prolonged effort.
P3: Defects or impairments that require significant restriction of use.
P4: Functional level below P3.

Lower extremities: Strength, range of movement, and efficiency of feet, legs, lower back, and pelvic girdle.
P1: No loss of digits or limitation of motion; no demonstrable abnormality; able to perform long marches, stand over long periods, and run.
P2: Slightly limited mobility of joints, muscular weakness, or other musculoskeletal defects that do not prevent moderate marching, climbing, timed walking, or prolonged effort.
P3: Defects or impairments that require significant restriction of use.
P4: Functional level below P3.

Hearing-ears: Auditory sensitivity and organic disease of the ears.
P1: Audiometer average level for each ear not more than 25 dB at 500, 1000, or 2000 Hz, with no individual level greater than 30 dB. Not over 45 dB at 4000 Hz.
P2: Audiometer average level for each ear at 500, 1000, or 2000 Hz, not more than 30 dB, with no individual level greater than 35 dB at these frequencies, and level not more than 55 dB at 4000 Hz; or audiometer level 30 dB at 500 Hz, 25 dB at 1000 and 2000 Hz, and 35 dB at 4000 Hz in better ear. (Poorer ear may be deaf.)
P3: Speech reception threshold in best ear not greater than 30 dB HL measured with or without hearing aid, or chronic ear disease.
P4: Functional level below P3.

Vision-eyes: Visual acuity and organic disease of the eyes and lids.
P1: Uncorrected visual acuity 20/200 correctable to 20/20 in each eye.
P2: Distant visual acuity correctable to not worse than 20/40 and 20/70, or 20/30 and 20/100, or 20/20 and 20/400.
P3: Uncorrected distant visual acuity of any degree that is correctable to not less than 20/40 in the better eye.
P4: Functional level below P3.

Psychiatric: Type, severity, and duration of the psychiatric symptoms or disorder existing at the time the profile is determined; amount of external precipitating stress; predisposition as determined by the basic personality makeup, intelligence, performance, and history of past psychiatric disorder; impairment of functional capacity.
P1: No psychiatric pathology; may have history of transient personality disorder.
P2: May have history of recovery from an acute psychotic reaction due to external or toxic causes unrelated to alcohol or drug addiction.
P3: Satisfactory remission from an acute psychotic or neurotic episode that permits utilization under specific conditions (assignment when outpatient psychiatric treatment is available or certain duties can be avoided).
P4: Functional level below P3.

The following staff members made key contributions to this report: Aditi S. Archer, Richard J. Wade, Krister P. Friday, Helen T. Desaulniers, and Mary W. Reich.

Military Personnel: Preliminary Observations Related to Income, Benefits, and Employer Support for Reservists During Mobilizations. GAO-03-549T. Washington, D.C.: March 19, 2003.
Defense Health Care: Most Reservists Have Civilian Health Coverage but More Assistance Is Needed When TRICARE Is Used. GAO-02-829. Washington, D.C.: September 6, 2002.
Reserve Forces: DOD Actions Needed to Better Manage Relations between Reservists and Their Employers. GAO-02-608. Washington, D.C.: June 13, 2002.
Department of Defense: Implications of Financial Management Issues. GAO/T-AIMD/NSIAD-00-264. Washington, D.C.: July 20, 2000.
Reserve Forces: Cost, Funding, and Use of Army Reserve Components in Peacekeeping Operations. GAO/NSIAD-98-190R. Washington, D.C.: May 15, 1998.
Defense Health Program: Future Costs Are Likely to Be Greater than Estimated. GAO/NSIAD-97-83BR. Washington, D.C.: February 21, 1997.
Wartime Medical Care: DOD Is Addressing Capability Shortfalls, but Challenges Remain. GAO/NSIAD-96-224. Washington, D.C.: September 25, 1996.
Reserve Forces: DOD Policies Do Not Ensure That Personnel Meet Medical and Physical Fitness Standards. GAO/NSIAD-94-36. Washington, D.C.: March 23, 1994.
Operation Desert Storm: Problems With Air Force Medical Readiness. GAO/NSIAD-94-58. Washington, D.C.: December 30, 1993.
Reserve Components: Factors Related to Personnel Attrition in the Selected Reserve. GAO/NSIAD-91-135. Washington, D.C.: April 8, 1991.
During the 1990-1991 Persian Gulf War, health problems prevented the deployment of a significant number of Army reservists. To help correct this problem, the Congress passed legislation that required reservists to undergo periodic physical and dental examinations. The National Defense Authorization Act for Fiscal Year 2002 directed GAO to review the value and advisability of providing these examinations. GAO also examined whether the Army is collecting and maintaining information on reservist health. GAO obtained expert opinion on the value of periodic examinations and visited seven Army reserve units to obtain information on the number of examinations that had been conducted.

Medical experts recommend periodic physical and dental examinations as an effective means of assessing health. Periodic physical and dental examinations for early-deploying reservists provide a means for the Army to determine their health status. Army early-deploying reservists need to be healthy to meet the specific demands of their occupations; examinations and other health screenings can be used to identify those who cannot perform their assigned duties. Without adequate examinations, the Army may train, support, and mobilize reservists who are unfit for duty.

The Army has not consistently carried out the statutory requirements for monitoring the health and dental status of Army early-deploying reservists. At the early-deploying units GAO visited, approximately 66 percent of the medical records were available for review. For example, GAO found that about 68 percent of the required 2-year physical examinations for those over age 40 had not been performed and that none of the annual medical certificates required of reservists had been completed by reservists and reviewed by the units. The Army's automated health care information system does not contain comprehensive physical and dental information on early-deploying reservists.
According to Army officials, in 2003 the Army plans to expand its system to maintain accurate and complete medical and dental information to monitor the health status of early-deploying reservists.
To meet the legislative requirements regarding independent management reviews, DOD issued guidance and instructions providing for a peer review process for services acquisitions. DOD's guidance generally addresses the requirements prescribed in the Act to develop a process to evaluate the specified contracting issues, but according to DOD officials, the department has not yet determined how it plans to disseminate lessons learned or track recommendations that result from the newly instituted reviews. DOD officials expect to further refine their processes, including developing a more formal means for disseminating lessons learned and tracking recommendations, as DOD assesses its initial experiences with peer reviews. Through the first year of implementation, the Office of Defense Procurement and Acquisition Policy (DPAP), which is responsible for conducting reviews of acquisitions over $1 billion, had conducted 29 peer reviews on 18 services acquisitions. Similarly, the military departments, which are responsible for conducting reviews of their acquisitions under $1 billion, issued guidance that provides for peer reviews at various levels within the departments based on dollar values. The military departments could not, however, determine the exact number of peer reviews conducted because of the absence of comprehensive reporting processes. Further, as peer review processes evolve, the military departments are considering ways to disseminate lessons learned and track recommendations.

DPAP issued a memorandum in September 2008 establishing a peer review process to fulfill the requirement for an independent management review of contracts for services. The requirement for a peer review process was subsequently incorporated into DOD Instruction 5000.02, Operation of the Defense Acquisition System, in December 2008.
The guidance states that these reviews are intended to ensure consistent and appropriate implementation of policy and regulations, improve the quality of contracting processes, and facilitate sharing best practices and lessons learned. According to DOD officials, peer reviews by design are a means of improving individual acquisitions and not necessarily a tool for strategically managing DOD’s services portfolio. Under DPAP’s guidance, peer reviews supplement its existing process to review and approve services acquisitions. Pursuant to congressional direction, DOD had previously established a management review process that was intended to ensure that DOD services acquisitions are based on clear, performance-based requirements with measurable outcomes and that acquisitions are planned and administered to achieve intended results. In these management reviews, DPAP assesses and approves the acquisition strategies submitted by the military departments or defense agencies for obtaining contractor-provided services estimated to be valued at $1 billion or more. Once the acquisition strategies are approved, DOD contracting offices may continue the acquisition process, including soliciting bids for proposed work and subsequently awarding contracts. DOD may award different contract types to acquire products and services, or issue task orders under existing contracts. In November 2009, we reported that the number of contracts and task orders issued after the acquisition strategies were approved was significant. For example, we reported that nearly 1,900 task orders were issued under the seven professional and management support services acquisitions we reviewed. DOD generally conducts peer reviews at three key points in the acquisition process prior to contract award—prior to issuance of the solicitation (phase 1), prior to request for final proposal revisions (phase 2), and prior to contract award (phase 3)—and is to conduct periodic post-award reviews (phase 4) (see fig. 1). 
In February 2009, DOD issued guidance that clarified the relationship between the management reviews and the peer reviews. For example, the guidance identifies specific issues to assess and the criteria for the reviewers to use during the management reviews or pre-award peer reviews. According to the guidance, some contracting issues identified in the Act, such as contract type and competition, are to be assessed during the management reviews. Conversely, other contracting issues identified in the Act, including requirements definition and the extent of the agency’s reliance on contractors to perform functions closely associated with inherently governmental functions, are to be assessed during pre-award peer reviews. The pre-award peer reviews also are to evaluate several elements of the source selection process that are not specified in the Act, such as the clarity and consistency of the documentation. Further, the guidance established review criteria for post-award reviews that address each of the contracting issues identified in the Act. For example, during post-award reviews, reviewers are to assess the extent to which the contracting office was able to achieve competition for orders and whether it was using appropriate contract types, well-defined requirements, and appropriate cost/pricing methods. According to DOD officials, in conducting these reviews, DPAP convenes a peer review team consisting of three to five members. Officials said that the teams are generally chaired by a deputy director within DPAP and include participation from senior contracting officials from the military departments and defense agencies as well as legal advisors from the Office of the Secretary of Defense’s General Counsel. The teams review acquisition documents prior to an on-site review and hold discussions with contracting officers over multiple days. Upon completion of the on-site review, peer review teams develop summary memoranda that include observations and recommendations. 
The February 2009 guidance indicated that DPAP is to review services acquisitions with an expected value of over $1 billion. In addition, DPAP may review acquisitions under that threshold that it has designated as special interest because of the nature or sensitivity of the services to be acquired. According to DOD officials, DPAP does not have a capability to independently identify acquisitions that will require its review, but rather relies on the military departments and defense agencies to notify DPAP of acquisitions that will exceed the threshold. DPAP officials noted that some reviews were not conducted because the military departments did not notify DPAP that a peer review was necessary. DPAP officials stated that they are currently focusing on the pre-award peer reviews and are phasing in post-award peer reviews. As of September 30, 2009, DPAP had conducted 29 peer reviews for 18 services acquisitions. Because the peer review process was only implemented in September 2008, no single acquisition has been subject to all phases of the peer review process and no acquisition has been peer reviewed in both the pre- and post-award phases. While most of the reviews have focused on proposed acquisitions for which the initial contract had not yet been awarded, DPAP has also conducted two phase 3 peer reviews for proposed task orders valued at over $1 billion that were to be issued under an existing contract that had previously been reviewed. DPAP has not yet determined if it will establish a policy for conducting peer reviews for all individual task orders over this amount in the future. For the 29 peer reviews of services acquisitions that DPAP conducted, figure 2 shows when each review occurred and the corresponding milestone. For example, DPAP conducted a phase 1 peer review prior to the issuance of the solicitation for 12 of the 18 services acquisitions. 
Our review of the summary memoranda of the pre-award peer reviews that DPAP conducted as of September 30, 2009, found that review teams generally documented their evaluation of the use of contracting mechanisms and, to a lesser extent, the use, management, and oversight of subcontractors. DPAP officials noted that other contracting issues may have been discussed during pre-award site visits and not included in the summary memorandum because the peer review team did not identify any concerns that warranted inclusion. Further, we found that review teams made several related recommendations, as illustrated in the following examples: One pre-award peer review team recommended that the contracting office reconsider the number of contracts it had proposed be awarded under an acquisition. In this case, the contracting office had proposed limiting the number of contracts to three before knowing what proposals and business arrangements industry would submit. The peer review team noted that this may unduly restrict the military department's flexibility. Further, the team was unsure whether the documentation supporting the limit on the number of contracts would be sufficient to withstand a bid protest from an unsuccessful offeror. Another pre-award peer review team recommended that the contracting office increase its use of subcontractors and encourage the prime contractors to establish mentor-protégé relationships with their subcontractors to bring more qualified contractors into an industry. Our review of the summary memoranda for the three post-award peer reviews conducted by DPAP found that, consistent with guidance, the review teams evaluated all the contracting issues identified in the Act.
All three summary memoranda listed the required contracting issues and then reported the peer review teams' observations and recommendations for the contracting offices to consider for the acquisition, as illustrated by the following examples: One post-award peer review team recommended that the contracting officer modify the contract to include provisions requiring the contractor to provide information on pass-through charges for all future task orders issued. At the time of the peer review, the contract did not contain a clause requiring the contractor to provide such information, and therefore the government was unable to determine the extent of pass-through charges and whether they were excessive. Another post-award team recommended that the contracting office reduce the use of time-and-materials task orders. In this case, the acquisition strategy envisioned that most of the work would be performed through fixed-price task orders; however, time-and-materials task orders accounted for 62 percent of the value of orders issued under the contract in the first 2 years of performance. While DPAP's guidance noted that the recommendations made during peer reviews are advisory in nature, it also states that contracting offices are to document in the contract file the disposition of all pre-award peer review recommendations prior to contract award. The guidance does not address recommendations made during post-award reviews. According to DOD officials, contracting offices generally accept recommendations provided by the peer review teams. DPAP officials said that if the contracting office decides not to accept a peer review team's recommendation, the contracting officer is expected to document the reason in the contract file and provide a copy to DPAP. In addition to providing recommendations to address potential issues in proposed acquisitions, the peer review teams have also identified some best practices.
For example, in one summary memorandum the team called attention to the contracting office's post-award performance plan for the acquisition, which specified how the office intended to evaluate and assess contract performance to maintain effective contract surveillance procedures. The team noted that the plan allowed real-time access to detailed cost performance data when combined with regular surveillance. However, according to officials, DOD has not yet issued guidance establishing procedures to systematically track the recommendations made by peer review teams or disseminate best practices as required by the Act. DOD officials noted that to date, sharing lessons learned from peer reviews has largely occurred through word of mouth or at conferences. For example, at a December 2009 conference for senior DOD contracting officials, DPAP presented an update on its peer review process that included a discussion of lessons learned. To identify methods to better disseminate trends, lessons learned, and best practices identified during peer reviews, DPAP established a subcommittee within the Panel on Contracting Integrity in August 2009. DPAP officials expect the subcommittee to report on its findings in 2010. Further, an official stated that DPAP plans to consider ways to track the implementation of recommendations made during peer reviews.

The September 2008 DPAP guidance required the military departments to establish their own procedures for conducting pre- and post-award peer reviews on acquisitions under $1 billion, but provided the flexibility for the services to tailor the process to best meet their needs. In response, the Air Force issued its guidance in January 2009, the Navy in March 2009, and the Army in April 2009. The military departments' policies varied in such areas as the frequency and timing of the reviews and the organizational levels delegated responsibility for conducting the reviews.
For example, the Air Force conducts up to five pre-award peer reviews whereas the Army conducts two (see fig. 3). The military departments plan to refine their policies as they gain experience with the peer review process. According to officials, both the Air Force and Army modified existing pre-award reviews to incorporate the peer review requirements. The existing reviews were mandatory steps in each department's contract award process and, as such, focused on the proposed acquisition's contracting approach, source selection process, and readiness to issue a contract solicitation. Air Force officials stated that the department previously had a post-award review process that focused on cost, schedule, and performance metrics, which was revised to incorporate peer review requirements. Army officials noted that the Army has focused its attention on implementing pre-award peer reviews, but has not yet established a post-award peer review process. These officials noted that the Army plans to issue guidance on conducting post-award reviews in 2010. In contrast, the Navy developed a new process, modeled on DPAP's process, to review proposed services acquisitions. Navy officials are considering making some refinements to this process. For example, at the time of our review the Navy had not yet determined the optimal timing of its post-award peer reviews. The department was trying to determine a point at which there had been enough contract performance to evaluate the contractor while still allowing the contracting officers sufficient time to implement any peer review team recommendations prior to exercising an option year. While DPAP was not required to approve the military departments' guidance, DPAP officials reported that the guidance issued by the military departments was consistent with the intent of the September 2008 guidance. There are differences, however, in how the military departments addressed certain issues.
For example, each of the military departments delegated responsibility for conducting peer reviews to commands and organizational units within their departments based on expected acquisition value. In that regard: The Air Force delegated responsibility for conducting peer reviews to its major commands for proposed services acquisitions valued from $50 million to $1 billion. The Army delegated responsibility to the head of the contracting activity within each of its commands for conducting peer reviews for services acquisitions valued from $250 million to $1 billion. Similarly, it identified the principal assistant responsible for contracting as being responsible for conducting peer reviews for acquisitions valued from $50 million to $250 million. The Navy delegated responsibility to the Deputy Assistant Secretary of the Navy – Acquisition and Logistics Management (DASN-A&LM) for conducting peer reviews for acquisitions valued from $250 million to $1 billion, while individual commands are responsible for conducting reviews of acquisitions valued from $50 million to $250 million. Further, the Air Force does not require peer reviews on noncompetitive acquisitions—in other words, on contracts awarded using other than full-and-open competition. Air Force officials explained that such contracts are already reviewed under a separate process and therefore believed that an additional peer review would be unnecessary. Similarly, both the Air Force and Army allow the offices responsible for conducting reviews to waive peer reviews under certain circumstances, whereas the Navy does not provide for a waiver process. Air Force guidance allows peer reviews to be waived based on acquisition/source selection history, such as for recurring acquisitions and where there is no history of bid protests. The Army also allows peer reviews to be waived but did not specify in its guidance which acquisitions could be waived.
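The tiered delegation described above can be summarized as a simple lookup. The dollar thresholds follow the departments' guidance as reported here, but the function itself and its return labels are only an illustration, and the treatment of exact boundary values (for example, exactly $1 billion) is an assumption.

```python
# Illustrative mapping of a services acquisition's expected dollar value
# to the office responsible for its peer review. Boundary handling at the
# thresholds is assumed, not specified by the guidance as reported here.
def review_authority(department, value):
    if value > 1_000_000_000:       # DPAP reviews acquisitions over $1 billion
        return "DPAP"
    if value < 50_000_000:          # below the departments' review floor
        return None
    if department == "Air Force":   # major commands: $50M to $1B
        return "major command"
    if department == "Army":        # HCA: $250M to $1B; PARC: $50M to $250M
        if value >= 250_000_000:
            return "head of the contracting activity"
        return "principal assistant responsible for contracting"
    if department == "Navy":        # DASN-A&LM: $250M to $1B; commands below
        if value >= 250_000_000:
            return "DASN-A&LM"
        return "individual command"
    raise ValueError("unknown department: " + department)

print(review_authority("Navy", 300_000_000))
```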
As of September 2009, the military departments reported conducting hundreds of peer reviews for services acquisitions, but the departments do not have comprehensive processes for determining the exact number of reviews conducted. Specifically: The Navy reported that it had conducted 257 peer reviews for services acquisitions, including 5 post-award reviews. The Navy could not identify how many of the reviews conducted by the commands occurred by September 30, 2009. DASN-A&LM conducted its first 4 peer reviews on September 22, 2009. Though the Air Force did not know the specific number of peer reviews conducted, officials noted that it had conducted up to five pre-award reviews on approximately 85 services acquisitions as of September 30, 2009. Army officials stated that though commands had conducted pre-award peer reviews, an exact number of reviews could not be identified because the Army does not have a reporting process. The Army also acknowledged that it did not conduct any post-award reviews because it has not yet established a post-award peer review process. As peer review processes evolve, the military departments are considering ways to disseminate lessons learned and track recommendations. For example, Navy officials said the department is waiting to see the results of initial reviews and will then develop additional guidance to address lessons learned identified during peer reviews. Army officials stated that the department plans to address recommendations and lessons learned in 2010 when it issues guidance on post-award reviews. Finally, Air Force policy requires commands to submit annual reports to the Secretary of the Air Force – Acquisition and Contracting Policy that are to include major issues identified during pre-award peer reviews and the resolutions taken. DOD’s guidance implementing a peer review process for major services acquisitions at the departmental level generally addresses the requirements prescribed by the Act. 
While DOD has derived benefits from these initial reviews, it has also recognized that there are issues that still need to be addressed, such as how to track recommendations and disseminate lessons learned. Further, DOD’s focus thus far has been on evaluating acquisition strategies and proposed contracts at the pre-award stage. DOD has conducted relatively few post-award reviews, in which DOD assesses how well it is managing the contractor’s actual performance. A key issue is whether and how to apply the peer review process to task orders through which DOD obtains much of its contractor-provided services. Few of these are large enough to reach the $1 billion DOD review threshold, but below the threshold they could be so numerous as to overtax the departments’ peer review processes. Addressing these issues, as well as those at the military department level, is important if DOD is to achieve its stated objectives for peer reviews—ensuring consistent and appropriate implementation of policy and regulations, improving the quality of contracting processes, and facilitating the sharing of best practices and lessons learned—on a more strategic or enterprisewide basis rather than limiting the peer reviews’ benefits to the individual acquisitions being reviewed. Although we are not making any recommendations because DOD plans to address these issues, resolving these concerns in a timely manner is essential if DOD is to maximize the benefits of the peer review process. DOD provided written comments on a draft of this report. In its comments, DOD stated that peer reviews had improved the quality of its significant business arrangements. DOD indicated that it will continue to refine its peer review process to better disseminate trends, lessons learned, and best practices that are identified during peer reviews. DOD provided a technical comment, which was incorporated into the report. DOD’s comments are reprinted in appendix II. 
We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and interested congressional committees. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Section 808 of the National Defense Authorization Act for Fiscal Year 2008 (the Act) directs GAO to report on the Department of Defense’s (DOD) implementation of its guidance and implementing instructions providing for periodic management reviews of contracts for services. In response to this mandate, we (1) assessed the extent to which DOD’s guidance addressed the Act’s requirements at the department level and how the guidance was implemented and (2) determined the status of actions taken by the military departments pursuant to DOD’s guidance. To do so, we reviewed DOD’s September 2008 and February 2009 guidance issued by the Under Secretary of Defense for Acquisition, Technology and Logistics’ Office of Defense Procurement and Acquisition Policy (DPAP). We compared the guidance and instructions to the requirements stipulated in Section 808 of the Act. The September 2008 guidance indicated that peer reviews were to be conducted for both supplies and services. As the Act’s requirements were specific to services acquisitions, we limited our analysis to services. We also obtained guidance and implementing instructions issued by the Departments of the Air Force, Army, and Navy. We interviewed officials from DPAP and the Departments of the Army, Navy, and Air Force to gain further insight into how each organization developed its guidance and instructions. 
DOD’s September 2008 memorandum also indicated that defense agencies were required to develop their own guidance. While these were outside the scope of our review, DPAP officials indicated that 13 of 17 defense agencies that DPAP believed would be required to develop guidance had done so at the time of this review. We obtained information on the number of peer reviews on services acquisitions that DPAP and the military departments reported they had conducted as of September 30, 2009. DPAP was able to identify the number of reviews that it had conducted. We determined this information to be sufficiently reliable for the purposes of our review. The Air Force provided an approximate number of acquisitions that had been reviewed but could not identify the number of individual peer reviews conducted. The Army did not provide any information on the specific number of reviews conducted. The Navy provided information on the number of reviews it had conducted but could not specify how many had been conducted as of September 30, 2009. We could not independently verify the information provided by the military departments because of the lack of available documentation. To determine the nature of the discussions and the issues addressed during peer reviews, we obtained the summary memoranda from each of the 29 peer reviews conducted by DPAP as of September 30, 2009. These 29 memoranda represented 18 unique acquisitions, as DPAP had reviewed some acquisitions more than once. Of the 29 memoranda, 26 were for pre-award peer reviews and 3 were for post-award reviews. We analyzed summary memoranda from each of the 29 peer reviews to determine the topics discussed in the memoranda, focusing specifically on the contracting issues identified in the Act. We also interviewed DPAP officials who chaired or participated in these reviews to obtain their views on the peer review process. 
We conducted this performance audit from October 2009 through January 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Timothy DiNapoli, Assistant Director; E. Brandon Booth; Morgan Delaney Ramaker; Christopher Mulkins; Thomas Twambly; and Alyssa Weir made key contributions to this report.
The Department of Defense (DOD) is the federal government's largest purchaser of contractor-provided services, obligating more than $207 billion on services contracts in fiscal year 2009. DOD contract management has been on GAO's high-risk list since 1992, in part because of continued weaknesses in DOD's management and oversight of contracts for services. The National Defense Authorization Act for Fiscal Year 2008 directed DOD to issue guidance providing for independent management reviews for services acquisitions. The Act required that the guidance provide a means to evaluate specific contracting issues and to address other issues, including identifying procedures for tracking recommendations and disseminating lessons learned. The Act also directed GAO to report on DOD's implementation of its guidance. GAO (1) assessed the extent to which DOD's guidance addressed the Act's requirements and how the guidance was implemented and (2) determined the status of actions taken by the military departments pursuant to DOD's guidance. GAO compared DOD's guidance with the Act's requirements; obtained data on the number of reviews conducted as of September 2009; and analyzed memoranda of 29 acquisitions valued at over $1 billion. In its written comments, DOD noted it planned to refine its processes to better share the lessons learned and best practices identified during peer reviews. To meet the legislative requirement regarding independent management reviews, DOD issued guidance in September 2008 and February 2009 providing for a peer review process for services acquisitions. DOD's guidance generally addresses requirements in the Act to issue guidance designed to evaluate specified contracting issues, but according to officials, DOD has not yet determined how it plans to disseminate lessons learned or track recommendations that result from the newly instituted reviews. 
Under this guidance, the Office of Defense Procurement and Acquisition Policy (DPAP) is responsible for conducting pre- and post-award peer reviews for services acquisitions with an estimated value of over $1 billion. Peer review teams include senior contracting officials from the military departments and defense agencies as well as legal advisors. As of September 30, 2009, DPAP had conducted 29 reviews of 18 services acquisitions, including 3 post-award reviews. DOD has also conducted peer reviews on two task orders but has not yet determined if it will do so on individual task orders in the future. The peer review teams made a number of recommendations and identified some best practices. DOD officials expect to refine their processes, including developing a more formal means for disseminating lessons learned and tracking recommendations, as DOD assesses its initial experiences with peer reviews. Each of the military departments has issued guidance establishing peer review processes for services acquisitions valued at less than $1 billion although the guidance is still evolving. The departments' guidance identifies the offices or commands tasked with conducting peer reviews based on various dollar thresholds. The military departments reported conducting hundreds of peer reviews for services acquisitions as of September 30, 2009, but could not provide exact numbers because of the lack of comprehensive reporting processes. Further, as peer review processes evolve, the military departments are considering ways to disseminate lessons learned and track recommendations.
Authorized in 1972 under title XVI of the Social Security Act, the SSI program is administered by SSA. Until recently, SSA was an agency within HHS. Effective March 31, 1995, it became an independent agency. SSI provides cash benefits to aged, blind, or disabled individuals whose income and resources are below certain levels. Individuals seeking SSI benefits on the basis of disability must meet financial eligibility requirements and disability criteria. SSI is federally funded, and most states provide recipients a supplement. SSA determines applicants’ financial eligibility; DDS offices, which are state agencies funded and overseen by SSA, make the initial determination of applicants’ medical eligibility. In 1994, more than 6 million SSI recipients received nearly $22 billion in federal benefits and $3 billion in state benefits. The maximum federal SSI monthly benefit in 1995 is $458 for an individual and $687 for a couple if both spouses are eligible. To be eligible for SSI, individuals must be U.S. citizens or legal immigrants. Also eligible for SSI benefits are certain other immigrants, classified by public assistance programs as permanently residing in the United States under color of law (PRUCOL). Under the SSI program, the PRUCOL category includes refugees, defined by INS as people who are outside their country of nationality and unable or unwilling to return to that country because of persecution or a well-founded fear of persecution. Refugees are eligible to become lawful permanent residents after 1 year of continuous presence in the United States, and most do. Most SSI recipients are also eligible for Medicaid and food stamps. Medicaid is a federal/state matching entitlement program administered by HHS’ Health Care Financing Administration (HCFA). 
Medicaid provides medical assistance to low-income aged, blind, or disabled individuals; members of families with dependent children who receive benefits from the Aid to Families With Dependent Children program; and certain other children and pregnant women. The Food Stamp program, administered by the Department of Agriculture’s Food and Nutrition Service, is a federally funded entitlement program that provides food stamp coupons to low-income families. To apply for SSI disability benefits, an individual must generally file a claim, in person, by telephone, or by mail, with an SSA field office. Usually, an SSA field office claims representative interviews the claimant in person or by telephone to determine whether the claimant’s income and resources meet SSI financial eligibility criteria and to obtain information about the claimant’s disability. In the case of a non-English-speaking claimant, if the claims representative does not speak the claimant’s language, an interpreter participates in the interview. The SSA claims representative is also available to help the claimant complete the application form. If the claimant is deemed financially eligible, the SSA field office refers his or her claim to the state DDS for a medical review. DDS decides whether a claimant’s physical or mental impairment meets SSI disability criteria. To be considered disabled, a claimant must be unable to engage in any substantial gainful activity because of a physical or mental impairment that is expected to last at least 12 months or to result in death. To make a determination, DDS obtains and reviews medical evidence from health care providers who have treated the claimant. If DDS finds the medical evidence insufficient or possibly fraudulent, it orders a medical consultative examination (CE). DDS is generally responsible for ensuring that there is no language barrier between the claimant and the CE provider. 
If the CE provider does not speak the claimant’s language, DDS can either arrange for an interpreter or allow the claimant to use his or her own interpreter. If the claim is denied, an appeals process is available. SSA conducts a redetermination on each case periodically to ensure that recipients continue to be eligible for SSI according to financial eligibility criteria. The frequency of redeterminations varies based on anticipated changes in income and other factors; however, a redetermination is performed on every case at least once every 6 years. SSA has also been authorized to conduct periodic continuing disability reviews (CDR) to ensure that people whose medical condition has improved and who are no longer disabled leave SSI’s rolls. We previously reported that SSA had conducted relatively few CDRs for several years. In 1994, to increase the number of CDRs that SSA conducted under the SSI program (only 11,000 were conducted in 1994), the Congress instituted a requirement that SSA conduct at least 100,000 CDRs on SSI cases each year for the next 3 years, beginning with fiscal year 1996. SSA is also required to conduct CDRs on at least one-third of disabled SSI recipients who turn 18 years old in each of the next 3 years. If an SSA or DDS office suspects that a claim is fraudulent and the CE does not refute that suspicion, that claim is referred to the OIG for investigation. Generally, the function of the OIG is to work with SSA to develop evidence to establish potential violations of the Social Security Act; decide whether suspected fraud cases meet federal, state, or county guidelines for criminal or civil prosecution; and formally prepare and present cases for prosecution to the U.S. Attorney or the District Attorney. The Congress is considering legislation that could have a significant impact on both immigrants already receiving SSI benefits and those applying for SSI benefits. The House of Representatives passed H.R. 
4 in 1995, which includes a provision that would generally bar legal immigrants, except for lawful permanent residents who are 75 years old or older and who have lived in the United States for at least 5 years and refugees in the country fewer than 6 years, from receiving certain welfare benefits, including SSI benefits. The Senate is considering a similar measure that would eliminate eligibility for all noncitizens except for legal immigrants who have worked in the United States long enough to qualify for Social Security disability benefits—at least 10 years—and recent refugees and veterans. Although some ineligible non-English-speaking immigrants obtain SSI benefits by using middlemen, the actual number of people who do so is unknown. During the past decade, the SSI immigrant caseload has grown dramatically, as compared with the U.S. citizen caseload. To serve those immigrants who do not speak English, interpreters were introduced to the SSI application process. By 1990, SSA was aware that some non-English-speaking applicants were using middlemen to defraud the SSI program and were collecting SSI benefits for which they were ineligible. Because SSI recipients generally remain on the rolls for a long time, the cost of a single mistake in determining eligibility is high: We estimate that one ineligible recipient could improperly receive a total of about $113,000 in federal benefits by the time he or she is 65 years old. As mentioned previously, little is known about the actual number of non-English-speaking immigrants receiving SSI as a result of fraudulent applications made with the assistance of middlemen. Most of the suspected cases of middleman fraud identified so far have been in California and Washington—about 6,500 cases. In both states, there has been a concerted effort to uncover fraudulent claims facilitated by middlemen. 
Washington’s intergovernmental task force on SSI middleman fraud, for instance, identified the following case: A Washington middleman who ran a business submitting fraudulent SSI claims was convicted of fraud. For a fee of between $2,000 and $3,000 from each applicant, he had provided inaccurate information on their SSA forms, coached them to feign mental impairments, and provided false translations at their medical examinations. At least 500 of the more than 1,000 immigrants he had coached qualified for benefits; as of November 1994, 95 of these recipients had received about $3.2 million in benefits. Three of the 500 have been convicted of fraud. The SSI claims of these 500 recipients, as well as other potentially fraudulent claims that have been identified, are subject to SSA reviews. SSA has begun implementing reviews of 460 suspected fraudulent claims in Washington. During the past decade, the immigrant portion of the SSI disability caseload rose much more rapidly than the U.S. citizen portion of the caseload. Between 1983 and 1993, the number of U.S. citizens receiving SSI disability benefits rose from approximately 2.3 million to 4.2 million—less than a twofold increase. In comparison, during the same period, the number of immigrants receiving SSI disability benefits rose from 45,000 to 267,000—approximately a sixfold increase. This increase is particularly dramatic when contrasted with the increase in the number of immigrants admitted annually to the United States in the past decade; that is, 628,132 were admitted in 1983, compared with 1,000,630 in 1993. The immigrant component of the SSI disability caseload is important because it is different from the rest of the caseload in one obvious, but significant, way: Many immigrants do not speak or understand English. As a result, when they apply for SSI benefits, they need someone to translate for them during their interactions with the English-language SSI system. 
SSA field offices often maintain interpreters on staff for the languages that are prevalent in their geographical areas, but sometimes field offices are unable to meet the need for interpreters. As a result, non-English-speaking applicants have been free to involve their own interpreters in the application process except where fraud is suspected. Many of the SSA and DDS offices we visited had recognized middleman fraud as a problem by 1990. Some middlemen were suspected of taking advantage of non-English-speaking claimants’ lack of sophistication and apprehensions about being in a new country, thus leading claimants to believe that middleman services were an essential support in navigating the SSI system. Middlemen were known to have coached claimants to feign forms of mental impairment, such as delayed stress syndrome or depression; controlled SSA interviews by answering all questions asked of claimants; prepared applications for numerous claimants using identical wording to describe the same mental impairments; and established relationships with unscrupulous doctors who helped them defraud the SSI program by submitting false medical evidence. In 1990, for example, SSA’s San Francisco regional office sent a memorandum to SSA headquarters, describing trends in disability claims involving suspected middleman fraud. The memorandum highlighted the following trends: claimants often alleged mental disorders; the same middleman represented many claimants at their SSA field office interviews and at their CEs; and the same physician provided essentially identical medical reports for many claimants. One California DDS branch office identified 176 claimants who had used the same middleman, who was suspected of routinely providing false information and coaching claimants, and the same treating doctor, who allegedly provided “interchangeable” medical reports. 
The result of such middleman involvement in the SSI application process is that some non-English-speaking immigrants collect SSI benefits to which they are not entitled. This situation is especially problematic because we estimate that each person collecting illegal SSI benefits costs the program thousands of dollars a year. Moreover, once claimants are accepted into the SSI program, it is likely that they will remain on the rolls for a long time. On the basis of a recent study of the duration of stay on SSI disability rolls, SSA reported that the expected mean lifetime disability stay of new SSI recipients before they reach age 65 is about 11 years. Thus, given the average federal monthly SSI benefit in December 1994 of $384, a recipient improperly admitted to the program could collect about $51,000 in SSI benefits to which he or she was not entitled. Furthermore, the cost to the government could be higher than just the SSI payments, because in most states, Medicaid benefits and food stamps are automatically provided to SSI recipients. As a result, the recipient could improperly receive total federal benefits worth about $113,000. There are various reasons why SSI is vulnerable to fraudulent applications when middlemen are involved. First, some SSA management practices permit middleman involvement. Second, SSA has a shortage of bilingual staff to handle non-English-speaking applicants. Third, unavailable documentation of applicants’ medical histories as well as translations provided by interpreters at applicants’ medical examinations make disability determinations difficult. Moreover, SSA’s monitoring of middlemen remains limited until SSA’s planned interpreter database is completed, and HHS OIG investigations of cases of suspected fraud involving middlemen were hampered by a lack of resources. 
In addition, SSA has no formalized procedures for regularly working with state Medicaid agencies—a type of coordination that could help SSA identify cases of suspected fraud. Finally, SSA needs a more effective programwide strategy for keeping ineligible SSI applicants off the rolls. Some of SSA’s current management practices—in particular, certain provisions of SSA guidance and procedures—enable non-English-speaking applicants to use middlemen. For example, SSA guidance states that if an applicant does not have an interpreter, SSA will provide one. This practice places secondary responsibility for providing translation services on SSA field offices. The result is that SSA field offices are not generally required to use their bilingual staff for translating in interviews unless an applicant does not provide his or her own interpreter. When the applicant does provide an interpreter, SSA will generally use the applicant’s interpreter as long as there is no reason to suspect that he or she is unreliable. SSA also allows applicants to use their relatives or friends as interpreters, even though unscrupulous middlemen sometimes pose as relatives or friends. Moreover, SSA’s broad definition of a qualified or reliable interpreter enables an applicant to use almost any interpreter he or she chooses. Finally, SSA procedures allow claimants to apply for SSI at any SSA field office, even though doing so enables them to abuse the system. When some middlemen or claimants learn that a certain SSA field office has staff who can speak the language of the claimant, they can go instead to a different field office, where no employees speak the language, thereby retaining control of the interview portion of the application process. SSA’s bilingual staffing problems exacerbate program vulnerabilities that arise because of some of SSA’s management practices. 
HHS OIG reported in 1990 that the number of bilingual SSA employees was insufficient to provide adequate service to non-English-speaking individuals. As a result, SSA has hired more bilingual staff. However, some SSA field offices remain without enough staff who can speak the languages needed. According to 1993 and 1994 SSA data, at least 45 field offices at which non-English-speaking individuals represented 10 percent or more of the workload needed additional bilingual staff. Furthermore, an SSA San Francisco regional office study of 1,198 cases from 1992 and 1993 found that when an interpreter was required, field office personnel were able to interpret in less than an estimated 5 percent of the cases when the language was other than Spanish. One California field office we visited had encountered 127 people speaking 19 languages in a single day. Because of the shortage of SSA staff who can speak the necessary languages, there may be more instances of SSI applicants using middlemen than would otherwise be necessary. SSI’s vulnerability to fraud when middlemen are used is heightened by difficulties in obtaining adequate medical information and other kinds of information useful to the disability determination process for non-English-speaking claimants. Documentation of the individual claimant’s medical history from the claimant’s home country may be limited or nonexistent. As a result, there is little longitudinal history of the claimant’s health before his or her arrival in the United States. Furthermore, when a claimant undergoes a medical examination in the United States with a provider who does not speak his or her language, the claimant needs an interpreter. When claimants are allowed to provide their own interpreters at medical examinations, SSI becomes more vulnerable to fraud. 
If a middleman provides a false translation of a claimant’s symptoms or coaches the claimant on how to behave during the examination, the provider could make an incorrect medical assessment and submit inaccurate medical evidence to the state DDS. Moreover, some middlemen bring claimants to dishonest providers who are willing to submit false medical evidence to DDS. Although DDS can order a CE if the applicant’s medical information is inconclusive, the middleman may be able to manipulate this exam if the provider does not speak the applicant’s language or have his or her own translator. In addition, DDS may be hindered in collecting essential information on the claimant’s education and work experience. Taken together, these information deficits can seriously impede the DDS as it attempts to accurately assess the claimant’s ability to work. Despite recent changes in some SSA procedures, SSA’s monitoring of middlemen is limited. Although data on interpreters are being collected, they are not currently being incorporated into a central database. Rather, hard copy data are being maintained in the case files of individual claimants. SSA is beginning to design an automated system for tracking middlemen. However, it may not be completed for several years, and SSA has no interim monitoring procedures in place. As a result of congressional hearings in February 1994 and the Social Security Independence and Program Improvements Act of 1994, SSA now requires all non-SSA interpreters to complete and sign a form containing their name, address, and relationship to the applicant. These forms are maintained in applicants’ files, providing a potentially valuable body of information. But because the data collected on these forms are not being entered into an automated database, no central file exists to help SSA identify and track middlemen suspected of fraud. 
Thus, when an SSA field office encounters a new interpreter, it has no easy means to determine his or her reliability or whether he or she has a record with other field offices. SSA recently began developing a nationwide database of interpreter information that will identify reliable interpreters and flag middlemen who are convicted or suspected of fraud. According to SSA, this database could be operational in 1996 or 1997. But we believe it could be some time after that before users will be able to retrieve comprehensive interpreter data from this database, because SSA will probably have to compile and input considerable information, such as the signed interpreter forms previously discussed. Furthermore, work to develop the interpreter database has been somewhat slow to date, according to one SSA official, because some SSA automated systems are still being modernized. In the interim, SSA has no formal procedures in place to monitor middlemen. Two of the California field offices we visited maintained their own lists of suspect middlemen, but these lists were not being regularly shared with other SSA offices. The California DDS also maintains a list of suspect middlemen that it has submitted to the SSA regional office, but that office has not distributed the list to SSA field offices. During the last several years that HHS OIG was responsible for investigating SSI middleman fraud, it investigated very few cases. In fact, SSA field offices said they had become hesitant to forward suspect claims because of what they perceived as a lack of interest by HHS OIG. According to HHS OIG, it had too few resources to perform more SSI investigations and was concentrating its resources on cases with a larger payoff. HHS OIG, which was responsible for investigating fraudulent SSI claims until March 31, 1995, completed 10 middleman fraud investigations between 1987 and April 1995. These investigations resulted in the conviction of five middlemen. 
HHS OIG also participated with other federal and state investigators in some joint investigations of middleman fraud. SSA field office staff told us they had become reluctant to refer suspect claims to HHS OIG because they expected that little or no action would be taken. According to results of an informal SSA survey, in February 1994, the San Francisco regional office had referred at least 600 claims involving suspected middleman fraud to the HHS OIG, and the Seattle regional office had referred between 200 and 300. These numbers represent referrals made since October 1992. The California claims were subject to selection for the CDRs currently being conducted on potentially fraudulent cases involving middlemen. The Washington claims will be examined by the intergovernmental task force. Between 1990 and 1994, HHS OIG investigative resources declined about 17 percent—from 469 staff to 390. In 1994, the HHS Inspector General reported that a lack of resources—specifically, limited federal investigative and prosecutive resources—posed an “obstacle” to the pursuit of middleman fraud. At that time, the HHS OIG was also responsible for investigating fraud in the much larger Medicare and Medicaid programs, as well as in the SSI program. Furthermore, some threats allegedly made by middlemen against SSA field staff may have contributed to a lower number of referrals to the HHS OIG for investigation of middleman fraud. Since March 31, 1995, SSA has had its own OIG solely dedicated to SSA programs. SSA is adding 50 positions in fiscal year 1996 to augment the staff who transferred from the HHS OIG. One way for SSA to extend its resources would be to work more regularly with state Medicaid agencies. When one state shared information during its Medicaid fraud investigations, SSA eventually identified nearly 2,000 possibly fraudulent claims associated with illegal middleman activity. But coordination between SSA and state Medicaid agencies is not a regular practice.
At the federal level, HCFA, within HHS, funds and oversees the Medicaid program. Federal law requires that a single state agency be charged with administration of the Medicaid program. Each state’s own Medicaid agency is variously situated in departments such as health, welfare, or human services. The state Medicaid agency may contract with other state entities to conduct some program functions. The state Medicaid agency is responsible for program integrity. In a case of health care provider abuse, the state Medicaid agency is authorized to take certain administrative actions. Where provider fraud is suspected, the state Medicaid agency in most states refers cases for investigation to Medicaid Fraud Control Units (MFCU). MFCUs investigate selected providers suspected of overbilling Medicaid for the services they provide to eligible patients or for billing for services that they never provided. States report the names of prosecuted or sanctioned providers to the HHS OIG so that the OIG can take appropriate action to exclude these providers from participation in other federal health programs, such as Medicare. In the course of their investigations of providers, it is possible for states to obtain information that could be useful to SSA, such as the lists of patients maintained by suspect providers, some of whom are associated with middlemen. In California, for example, an investigation initiated by the state and assisted by the HHS OIG yielded information that, when passed on to SSA, led to SSA’s identification of 1,981 SSI recipients associated with potentially fraudulent claims involving middlemen. Routine coordination of efforts with state Medicaid agencies could enhance SSA’s ability to identify potentially fraudulent SSI claims. For example, state investigative information could be helpful to SSA in meeting the 1994 congressional requirement that SSA conduct at least 100,000 SSI CDRs each year for the next 3 years, beginning in 1996. 
SSA could use state investigative information to help it identify high-priority cases for these CDRs. To date, however, coordination between SSA and state Medicaid agencies has been ad hoc. When SSA was part of HHS, according to SSA officials, SSA generally did not contact state Medicaid agencies on a regular basis because Medicaid fell under the administrative jurisdiction of HCFA. Consequently, SSA did not establish—and has not yet established since it became an independent agency in March 1995—formal coordination procedures for obtaining potentially helpful information from state Medicaid agencies. SSA has tried a few approaches for handling some of the individual factors that contribute to SSI’s vulnerability to fraud, but needs to develop and implement a more comprehensive, programwide strategy for ensuring that only eligible applicants receive SSI benefits. For example, one SSA approach for limiting the extent to which non-English-speaking applicants could use middlemen was to disseminate its definition of a qualified interpreter to all field staff. Furthermore, SSA disseminated a program circular in May 1995 to clarify procedures for conducting interviews with non-English-speaking claimants. In addition, SSA’s approach to the bilingual staffing shortage has been to encourage field offices to hire more staff, although, according to SSA, this has been difficult for field offices to do because of recent constraints on hiring. Moreover, SSA’s plan for tracking fraudulent middlemen may not be fully implemented for several years; its OIG needs more resources to perform investigations; and SSA does not routinely use state investigative information to help identify fraudulent SSI applications. 
A more comprehensive, programwide strategy for ensuring that only eligible people receive SSI benefits could include, for example, requiring that SSA’s own bilingual staff or contractors conduct interviews with non-English-speaking applicants and exploring the use of videoconferencing technology, which would maximize the use of SSA bilingual staff, if SSA determines that the benefits outweigh the costs. The Congress, SSA, and several states have initiated various efforts to prevent or detect fraudulent SSI claims involving middlemen. Some of the efforts, such as passage of new legislation, have been completed; others are in progress. A discussion of some of these initiatives follows. (See app. I for a detailed list of initiatives.) The legislation that established SSA as an independent agency, the Social Security Independence and Program Improvements Act of 1994, contained provisions for expanding SSA’s authority to prevent, detect, and terminate fraudulent claims for SSI benefits. Some of the law’s provisions did the following: changed the federal crime of SSI fraud from a misdemeanor to a felony; gave SSA the authority to impose civil penalties against any person or organization determined to have knowingly caused a false statement to be made in connection with an SSI claim; and gave SSA the authority to request immigrant medical data and other information from INS and the Centers for Disease Control for use in eligibility determinations. The provisions of the law that relate to SSI reflect legislative recommendations that were made by the Subcommittee on Oversight and the Subcommittee on Human Resources, House Committee on Ways and Means, in May 1994. The Subcommittees also made several administrative recommendations to SSA. SSA established a task force in April 1993 to combat middleman fraud. In large part as a result of the work of the task force, SSA has initiated various efforts to detect and prevent middleman fraud. 
Because many of these initiatives are in the planning stages or the early stages of implementation, however, it is too soon to evaluate their effectiveness. One effort under way, as mentioned earlier, is the development of a nationwide database to help SSA and DDS offices monitor middlemen. The database is expected to be useful in identifying reliable interpreters and in identifying and tracking middlemen whose activities are questionable. Because all SSA and DDS offices are expected to have access to the database, an office that encounters a new interpreter will be able to determine from the database if other offices have had experience with the same person. A second task force initiative, which resulted largely from February 1994 hearings on middleman fraud, implements one of the provisions in the legislation that established SSA as an independent agency. As of March 1994, SSA requires that all non-SSA interpreters fill out a form on which they provide their name, address, and relationship to the applicant and sign a statement that they are providing an accurate translation. These forms are being maintained in each claimant’s file, providing a potentially valuable body of information. SSA officials said that these files may eventually be incorporated into the database. Another task force effort has resulted in SSA plans to review possibly fraudulent cases involving middlemen for which benefits are already being paid. In California, SSA identified many potentially fraudulent cases as a result of an ad hoc cooperative venture between the state and SSA. (See following section on state initiatives.) SSA plans to conduct 600 CDRs in California. As of April 26, 1995, 386 CDRs had been completed in California, resulting in 207 initial benefit terminations. These terminations are subject to appeal, and thus far about 60 percent have been appealed. In Washington, potentially fraudulent cases were identified as a result of an intergovernmental task force effort. 
(See following section on state initiatives.) SSA has begun 460 reviews in Washington, but none have yet been completed. SSA also reported that its ultimate goal is to dramatically reduce reliance on middlemen in developing the claims of non-English-speaking applicants. SSA is trying several approaches to reduce the use of middlemen as interpreters. First, SSA continues to encourage bilingual hiring in its field offices to improve service delivery to the non-English-speaking public. SSA reported that in fiscal year 1993, 266 of 533 permanent field office hires (50 percent) were bilingual; in fiscal year 1994, 481 of 1,099 such hires (44 percent) were bilingual. In addition, in February 1995, SSA officials reported that a statement of work was being prepared for a pilot contract to test the feasibility of using contract interpreter services to supplement SSA’s own interpreter staff. But the funding for the pilot has been reduced to $100,000, so only a limited number of SSA offices will receive contract services under the pilot. SSA officials doubt that a national contract for interpreter services is feasible, given anticipated costs. Furthermore, in 1994, SSA expanded upon efforts of at least two regional offices by asking all 10 regional offices to establish directories of bilingual employees who were available to help other field offices by interpreting during telephone interviews. Many of the 13 SSA field offices we visited expressed a need for more bilingual staff; only one reported having used a bilingual SSA employee from another field office to interpret by telephone. Finally, individual field offices have also looked to external sources, such as local advocacy groups, professional translation and interpreter services, and community service centers, for interpreting assistance. At least two field offices have made arrangements with universities and institutes for students to earn credits or serve internships for performing interpreter services.
Several states have been active in seeking more effective fraud prevention and detection approaches. Again, many of these initiatives are in the early stages of implementation, so it is too soon to evaluate their success. One initiative involves the use of independent or state-certified interpreters at CEs, a practice currently employed in Pennsylvania and Minnesota. In California, if fraud is suspected or if there is reason to believe that the claimant’s interpreter is not objective or qualified, the state DDS pays for an independent interpreter for the CE or uses someone from a community assistance group or other reliable source. Massachusetts and Connecticut DDS offices use paid interpreters as much as possible and encourage CE providers to require positive identification from the person being examined. In addition, California has initiated a pilot project to establish a fraud investigation unit in one of its DDS offices. With SSA approval and assistance, the state plans to hire and train investigators to pursue fraudulent SSI disability claims. Investigations will be based on suspected fraud referrals from DDS staff. Also in California, an ad hoc cooperative venture between the state Medicaid agency and SSA yielded useful information. When the state requested assistance from the HHS OIG on some of its Medicaid fraud investigations, SSA had the opportunity to obtain the names of patients of providers who had been arrested or convicted of Medicaid fraud, as well as the names of clients of middlemen who used these medical providers. SSA then compared these names to those in its database of current SSI claimants to flag claimants who might have been collecting benefits fraudulently. Since July 1992, 6,062 potentially fraudulent claimants have been identified in California, many as a result of the cooperation between the state, the HHS OIG, and SSA.
Furthermore, during 1993 and 1994, California reported 22 arrests or convictions of providers, middlemen, and their assistants. Finally, Washington State formed an intergovernmental task force in 1992 in one county to investigate middlemen and others suspected of fraud. Under the direction of the U.S. Attorney, the task force has identified 460 suspected fraudulent claims involving middlemen. In 1994, three middlemen, three SSI recipients, and several others were arrested or convicted. SSA has awarded SSI benefits to unknown numbers of non-English-speaking immigrants who are actually ineligible for SSI benefits. These awards are very costly to the government, accounting in each case for thousands of dollars in improper payments over the years. Although individual SSA field offices have been creative in developing their own approaches to dealing with the problem, SSA’s programwide efforts to ensure that only people who are eligible for SSI benefits receive them have been limited. SSA’s responses to SSI fraud have included publishing guidance for SSA interviews. If the interviewer believes that the interpreter may be providing inaccurate information, the interview should be terminated until an interpreter who meets SSA criteria for a qualified interpreter can be provided. SSA also plans to improve communication with and outreach efforts to the non-English-speaking community, and it plans to develop a quality assurance program for interpretations. A more effective programwide strategy for ensuring that only eligible people obtain SSI benefits would require consistent, programwide practices for obtaining more accurate applicant information, maintaining and sharing information on interpreters and middlemen among field offices, and using the work of other government agencies to help identify potentially fraudulent cases.
A comprehensive strategy should consider cost-benefit analyses of SSA’s alternatives for addressing the problem, SSA’s limited resources, and applicants’ need for timely service. Such a strategy could involve, for example, SSA requiring that its own bilingual staff or contractors conduct interviews with non-English-speaking applicants and exploring the use of videoconferencing technology, which, as mentioned earlier, could take best advantage of SSA bilingual staff. These components of a programwide strategy would further prevent claimants from using middlemen to manipulate the system. We recommend that the Commissioner of Social Security develop a more aggressive, programwide strategy for improving the quality of information obtained from applicants, maintaining and sharing data collected on interpreters and middlemen among field offices, and using information that results from the work of other government agencies—local, state, and federal—to pursue cases in which fraud is suspected. Such a strategy should include developing improved ways to more effectively manage SSA’s resources to further facilitate communications with applicants, possibly by requiring that SSA bilingual staff or SSA contracted staff conduct the interviews and by exploring videoconferencing technology. This strategy should also include instituting procedures for sharing, among field offices, the information SSA has already collected about interpreters and middlemen from its required forms and other sources, until the automated interpreter database is established, and establishing a mechanism to facilitate regular sharing of all state Medicaid agencies’ investigative results with SSA. SSA agreed with the intent of our recommendations and stated that it is exploring these recommendations as it continues its efforts to minimize fraud in cases involving middlemen. 
For example, SSA cited a pilot currently under way in California wherein state investigators are reviewing cases referred from DDS for possible prosecution under state and local laws. SSA also suggested the following change to our report concerning whether SSA’s practices permit non-English-speaking applicants to use middlemen: “SSA officials explained that SSA is attempting to address the fraud problem within the framework of its efforts to provide all non-English-speaking claimants convenient, accessible, and timely service in an environment of limited bilingual staff and funding. Experience suggests that the vast majority of non-English-speaking claimants are not involved in fraudulent activity. Therefore, to meet customer service needs and save resources, SSA does allow the non-English-speaking claimant the option of providing his or her own interpreter as long as the interpreter agrees to provide an exact interpretation of the claimant’s response and can function as a capable interpreter. However, if, during the course of the interview, the interviewer believes that the interpreter is not acting in the claimant’s best interest or is not providing accurate information, the interview is terminated. The interview is then rescheduled for a later date when another interpreter can be provided by SSA.” We believe that despite its staffing and funding constraints, concerns with claim processing times, and current efforts to address fraud, SSA can do more to reduce the SSI program’s vulnerability to fraudulent applications involving middlemen. Given that each person collecting illegal SSI benefits costs the program thousands of dollars a year, SSA must aggressively pursue any available opportunity such as those we have recommended to further minimize unwarranted outlays of federal monies so that it can increase the public’s confidence in this important program. The agency also made other technical comments that we incorporated throughout the report as appropriate. 
(See app. II.) As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the appropriate congressional committees and federal agencies. Copies also will be available to others on request. If you or your staff have any questions concerning this report, please call me on (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix III.
The federal crime of SSI fraud has been elevated from a misdemeanor to a felony.
SSA now has the authority to impose civil penalties against any person or organization determined to have knowingly caused a false statement to be made in connection with an SSI claim.
Third-party translators are now required to certify under oath the accuracy of the translation provided and the relationship between the translator and the SSI applicant or recipient.
SSA now has enhanced authority to redetermine eligibility and give less weight to evidence of disability in those cases where SSA has a reason to believe that fraud was involved and to expeditiously terminate benefits in those cases where there is insufficient reliable evidence of disability or other basis for eligibility.
SSA now has the authority to request medical data and other information from the Immigration and Naturalization Service and the Centers for Disease Control for use in disability determination.
The cognizant Office of Inspector General (OIG) is required to make SSI recipient identifying information available to SSA as soon as OIG has reason to believe that fraud is involved and an active investigation will not be compromised.
SSA is required to report annually to the House Committee on Ways and Means and the Senate Committee on Finance the extent to which it has used its authority to conduct reviews of SSI cases, including the extent to which these cases involved probable fraud.
SSA plans to develop a nationwide database to help SSA and disability determination services (DDS) offices monitor middlemen.
SSA now requires that all non-SSA interpreters provide their name, address, and relationship to the claimant and certify that they are providing an accurate translation.
SSA has implemented plans to conduct reviews of suspected fraudulent claims of identified SSI recipients. About 400 continuing disability reviews have been completed in California, and 460 reviews are being started in Washington. Additional reviews will be started as resources permit.
Efforts to improve the availability of reliable interpreters include encouraging the field offices to hire more bilingual staff, testing the feasibility of contract interpreter services, developing alternative sources of community interpreters, and establishing regional directories of bilingual staff.
SSA has published new guidance that includes criteria for identifying qualified or reliable interpreters and terminating interviews with suspect middlemen.
SSA plans to develop a quality assurance program for interpretations, to develop a better procedure for processing fraud referrals, and to improve communication with and outreach efforts to the non-English-speaking community.
DDS offices in California, Washington, Pennsylvania, Minnesota, Massachusetts, and Connecticut have begun to use independent or state-certified interpreters at consultative exams (CE).
California has instituted a pilot project, funded by SSA, that established an SSI fraud investigation unit in one of its DDS offices.
California shared information about some of its fraud investigations of medical providers with SSA, which has used the information to identify potentially fraudulent SSI claimants.
Washington has created an intergovernmental task force to investigate middlemen suspected of fraud.
Massachusetts and Connecticut DDS offices encourage CE medical providers to require positive identification from claimants.
The Texas DDS tries to use bilingual CE providers. In addition to those named above, the following individuals also made important contributions to this report: Elizabeth A. Olivarez, Clarence Tull, Zachary R. White, and Michael J. Ross, Evaluators; Eli Kuo, Intern; Nancy L. Crothers and Jonathan M. Silverman, Communications Analysts; James P. Wright, Assistant Director (Study Design and Data Analysis); and Stephen R. Myerson, Assistant Director (Investigations).
Supplemental Security Income: Growth and Changes in Recipient Population Call for Reexamining Program (GAO/HEHS-95-137, July 7, 1995).
SSI Disability Issues (GAO/HEHS-95-154R, May 11, 1995).
Social Security: Federal Disability Programs Face Major Issues (GAO/T-HEHS-95-97, Mar. 2, 1995).
Welfare Reform: Implications of Proposals on Legal Immigrants’ Benefits (GAO/HEHS-95-58, Feb. 2, 1995).
Supplemental Security Income: Recent Growth in the Rolls Raises Fundamental Program Concerns (GAO/T-HEHS-95-67, Jan. 27, 1995).
Pursuant to a congressional request, GAO reviewed fraudulent claims for disability benefits under the Supplemental Security Income (SSI) program, focusing on: (1) the extent of fraudulent applications submitted by non-English-speaking immigrants using middlemen; (2) factors that contribute to SSI vulnerability to such fraudulent applications; and (3) government initiatives to combat such fraudulent activities. GAO found that: (1) although the Social Security Administration (SSA) has been aware of allegations of SSI fraud related to the use of middlemen since 1990, the number of applicants who have obtained SSI benefits illegally through the use of middlemen is unknown; (2) the number of immigrants receiving SSI disability benefits rose from 45,000 in 1983 to 267,000 in 1993; (3) in California, about 6,000 potentially fraudulent applications have been identified, of which about 30 percent represent SSI claims being paid; (4) ineligible SSI recipients can receive about $113,000 in SSI, Medicaid, and Food Stamp benefits by the time they are 65 years old; and (5) SSA has established a task force in California to combat fraudulent applications involving middlemen and has terminated benefits for 207 recipients, as of April 1995. In addition, GAO found that SSI is vulnerable to fraudulent applications involving middlemen because SSA: (1) has management practices and bilingual staff shortages that enable applicants to use middlemen; (2) performs only limited monitoring of middlemen; (3) has limited funds for investigations; (4) has not coordinated its efforts to monitor middlemen with state Medicaid agencies; and (5) needs a better strategy to keep ineligible applicants from ever being accepted on SSI rolls.
Through a number of legislative actions, Congress has indicated its desire that agencies create telework programs to accomplish a number of positive outcomes. These actions have included recognizing the need for program leadership within the agencies; encouraging agencies to think broadly in setting eligibility requirements; requiring that employees be allowed, if eligible, to participate in telework; and requiring tracking and reporting of program results. Some legislative actions have provided funding to assist agencies in implementing programs, while other appropriations acts withheld appropriated funds until the covered agencies certified that telecommuting opportunities were made available to 100 percent of each agency’s eligible workforce. The most significant congressional action related to telework was the enactment of Sec. 359 of Pub. L. No. 106-346 in October 2000, which provides the current mandate for telework in the executive branch of the federal government. In this law, Congress required each executive branch agency to establish a telework policy under which eligible employees of the agency may participate in telework to the maximum extent possible without diminishing employee performance. The conference report language further explained that an eligible employee is any satisfactorily performing employee of the agency whose job may typically be performed at least 1 day per week by teleworking. In addition, the conference report required the Office of Personnel Management (OPM) to evaluate the effectiveness of the program and report to Congress.
The legislative framework has provided both the General Services Administration (GSA) and OPM with lead roles for the governmentwide telework initiative—to provide services and resources to support and encourage telework, including providing guidance to agencies in developing their program procedures. In addition, Congress required certain agencies to designate a telework coordinator to be responsible for overseeing the implementation of telework programs and to serve as a point of contact on such programs for the Committees on Appropriations. GSA and OPM provide services and resources to support the governmentwide telework implementation. OPM publishes telework guidance, which it recently updated, and works with the agency telework coordinators to guide implementation of the programs and annually report the results achieved. GSA offers a variety of services to support telework, including developing policy concerning alternative workplaces, managing the federal telework centers, maintaining the mailing list server for telework coordinators, and offering technical support, consultation, research, and development to its customers. Jointly, OPM and GSA manage the federal Web site for telework, which was designed to provide information and guidance. The site provides access for employees, managers, and telework coordinators to a range of information related to telework, including announcements, guides, laws, and available training. Although agency telework policies meet common requirements and often share some common characteristics, each agency is responsible for developing its own policy to fit its mission and culture. According to OPM, most agencies have specified occupations that are eligible for telework and most apply employee performance-related criteria in considering authorizing telework participation. In addition, OPM guidance states that eligible employees should sign an employee telework agreement and be approved to participate by their managers.
The particular considerations concerning these requirements and procedures will differ among agencies. In our 2003 study of telework in the federal government, we identified 25 key practices that federal agencies should implement in developing their telework programs. Among those were several practices closely aligned with managing for program results including developing a business case for implementing a telework program; establishing measurable telework program goals; establishing processes, procedures, or a tracking system to collect data to evaluate the telework program; and identifying problems or issues with the telework program and making appropriate adjustments. Yet, in our assessment of the extent to which four agencies—the Department of Education, GSA, OPM, and the Department of Veterans Affairs—followed the 25 key practices, we found these four practices to be among the least employed. None of the four agencies we reviewed had effectively developed a business case analysis for implementing their telework programs. In discussing the business case key practice in our 2003 study, we cited the International Telework Association and Council, which had stated that successful and supported telework programs exist in organizations that understand why telework is important to them and what specific advantages can be gained through implementation of a telework program. According to OPM, telework is of particular interest for its advantages in the following areas: Recruiting and retaining the best possible workforce—particularly newer workers who have high expectations of a technologically forward-thinking workplace and any worker who values work/life balance. Helping employees manage long commutes and other work/life issues that, if not addressed, can reduce their effectiveness or lead to employees leaving federal employment. Reducing traffic congestion, emissions, and infrastructure effect in urban areas, thereby improving the environment. 
Saving taxpayer dollars by decreasing government real estate costs.

Ensuring continuity of essential government functions in the event of national or local emergencies.

In addition, some federal agency telework policies suggest other potential advantages. For example, the Department of Defense’s telework policy includes enhancing the department’s efforts to employ and accommodate people with disabilities as a purpose of its program. The Department of State’s policy notes that programs may be used to increase productivity. As another example, the U.S. Department of Agriculture credits telework with having a positive effect on sick leave usage and workers’ compensation. A business case analysis of telework can ensure that an agency’s telework program is closely aligned with its own strategic objectives and goals. Such an approach can be effective in engaging management on the benefits of telework to the organization. Making a business case for telework can help organizations understand why they support telework, address relevant issues, minimize business risk, and make the investment when it supports their objectives. Through business case analysis, organizations have been able to identify cost reductions in the telework office environment that offset additional costs incurred in implementing telework, as well as to identify the most attractive approach to telework implementation. We have recently noted instances where agency officials cited their telework programs as yielding some of the benefits listed above. For example, in a 2007 report on the U.S. Patent and Trademark Office (USPTO), we reported that, according to USPTO management officials, one of the three most effective retention incentives and flexibilities is the opportunity to work from remote locations.
In fiscal year 2006, approximately 20 percent of patent examiners participated in the agency’s telework program, which allows patent examiners to conduct some or all of their work away from their official duty station 1 or more days per week. In addition, USPTO reported in June 2007 that approximately 910 patent examiners relinquished their office space to work from home 4 days per week. The agency believes its decision to incorporate telework as a corporate business strategy and for human capital flexibility will help recruitment and retention of its workforce, reduce traffic congestion in the national capital region, and, in a very competitive job market, enable the USPTO to hire approximately 6,000 new patent examiners over the next 5 years. As another example, in a 2007 report on the Nuclear Regulatory Commission (NRC), we noted that most NRC managers we interviewed and surveyed considered telework and flexible work schedule arrangements to be very to extremely valuable in recruiting, hiring, and retaining NRC personnel and would be at least as valuable in the next few years. With regard to the second key practice aligned with managing for results, none of the four agencies had established measurable telework program goals. As we noted in our report, OPM’s May 2003 telework guide discussed the importance of establishing program goals and objectives for telework that could be used in conducting program evaluations for telework in such areas as productivity, operating costs, employee morale, recruitment, and retention. However, even where measurement data are collected, they are incomplete or inconsistent among agencies, making comparisons meaningless. For example, in our 2005 report of telework programs in five agencies—the Departments of State, Justice, and Commerce; the Small Business Administration; and the Securities and Exchange Commission—measuring eligibility was problematic. 
Three of the agencies excluded employees in certain types of positions (e.g., those in positions where they handle classified information) when counting and reporting the number of eligible employees, while two of the agencies included all employees in any type of position when counting and reporting the number of eligible employees, even those otherwise precluded from participating. With regard to the third key practice—establishing processes, procedures, or a tracking system to collect data to evaluate the telework program—in our 2003 review we found that none of the four agencies studied were doing a survey specifically related to telework or had a tracking system that provided accurate participation rates and other information about teleworkers and the program. At that time, we observed that lack of such information not only impeded the agencies in identifying problems or issues related to their programs but also prevented them from providing OPM and Congress with complete and accurate data. In addition, in our 2005 study at five agencies, we found that four of the five agencies measured telework participation based on employees’ potential to telework rather than their actual usage. The fifth agency reported the number of participants based on a survey of supervisors who were expected to track teleworkers. According to OPM, most agencies report participation based on telework agreements, which can include both those for employees teleworking on a continuing basis as well as those for episodic telework. None of the five agencies we looked at had the capability to track who was actually teleworking or how frequently, despite the fact that the Fiscal Year 2005 Consolidated Appropriations Act covering those agencies required each of them to provide quarterly reports to Congress on the status of its telework program, including the number of federal employees participating in its program.
At that time, two of the five agencies said they were in the process of implementing time and attendance systems that could track telework participation, but had not yet fully implemented them. The other three agencies said that they did not have time and attendance systems with the capacity to track telework. The conference report accompanying the act stated: “The conferees are troubled that many of the agencies’ telework programs do not even have a standardized manner in which to report participation. The conferees expect each of these agencies to implement time and attendance systems that will allow more accurate reporting.” Despite this language, four of the five agencies have not yet developed such systems and are still measuring participation as they did in 2005. In the fifth agency—the Department of Justice—an official told us that the department has now implemented a Web-based time and attendance system in most bureaus and that this system allows the department to track actual telework participation in those bureaus. The Federal Bureau of Investigation (FBI) was the major exception. This fiscal year, however, the FBI began a pilot of a time and attendance application that will also have the ability to track telework. The official said that, upon completion of the pilot, all of the Department of Justice bureaus would have the ability to track telework. As for the fourth key practice closely related to managing for program results—identifying problems or issues with the telework program and making appropriate adjustments—none of the four agencies we reviewed for our 2003 study had fully implemented this practice, and one of the four had taken no steps to do so, despite the importance of using data to evaluate and improve their telework programs. An OPM official told us, for example, that she did not use the telework data she collected to identify issues with the program; instead, she relied on employees to bring problems to her attention.
To help agencies better manage for results through telework programs, in our 2005 study we said that Congress should determine ways to promote more consistent definitions and measures related to telework. In particular, we suggested that Congress might want to have OPM, working through the Chief Human Capital Officers (CHCO) Council, develop a set of terms, definitions, and measures that would allow for a more meaningful assessment of progress in agency telework programs. Program management and oversight could be improved by more consistent definitions, such as the definition of eligibility. Some information, such as data on actual usage of telework, may take additional effort to collect. Other valuable information may already be available through existing sources. The Federal Human Capital Survey, for example—which is administered biennially—asks federal employees about their satisfaction with telework, among other things. In the latest survey, only 22 percent indicated they were satisfied or very satisfied, while 44 percent indicated they had no basis to judge—certainly, there seems to be room for improvement there. In any case, OPM and the agency CHCO Council are well situated to sort through these issues and consider what information would be most useful. The CHCO Council and OPM could also work together on strategies for agencies to use the information for program improvements, including benchmarking. In conclusion, telework is a key strategy to accomplish a variety of federal goals. Telework is an investment in both an organization’s people and the agency’s capacity to perform its mission. We continue to believe that more fully implementing the practices related to managing for program results will significantly contribute to improving the success of federal telework programs. Mr. Chairman and members of the subcommittee, this completes my statement. I would be pleased to respond to any questions that you may have.
For further information on this testimony, please contact Bernice Steinhardt, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include William J. Doherty, Assistant Director; Joyce D. Corry; and Judith C. Kordahl. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Telework continues to receive attention within Congress and federal agencies as a human capital strategy that offers various flexibilities to both employers and employees. Increasingly recognized as an important means to achieving a number of federal goals, telework offers greater capability to continue operations during emergency events, as well as affording environmental, energy, and other benefits to society. This statement highlights some of GAO's prior work on federal telework programs, including key practices for successful implementation of telework initiatives, identified in a 2003 GAO report and a 2005 GAO analysis of telework program definitions and methods in five federal agencies. It also notes more recent work where agency officials cite their telework programs as yielding benefits. As GAO has previously recommended, Congress should determine ways to promote more consistent telework definitions and measures. In particular, Congress might want to have the Office of Personnel Management (OPM) and the Chief Human Capital Officers Council develop definitions and measures that would allow for a more meaningful assessment of progress in agency telework programs. Through a number of legislative actions, Congress has indicated its desire that agencies create telework programs to accomplish a number of positive outcomes. Many of the current federal programs were developed in response to a 2000 law that required each executive branch agency to establish a telework policy under which eligible employees may participate in telecommuting to the maximum extent possible without diminishing employee performance. The legislative framework has provided OPM and the General Services Administration with lead roles for the governmentwide telework initiative--providing services and resources to support and encourage telework. 
Although agency telework policies meet common requirements and often share characteristics, each agency is responsible for developing its own policy to fit its mission and culture. In a 2003 report, GAO identified a number of key practices that federal agencies should implement in developing their telework programs. Four of these were closely aligned with managing for program results: (1) developing a business case for telework, (2) establishing measurable telework program goals, (3) establishing systems to collect data for telework program evaluation, and (4) identifying problems and making appropriate adjustments. None of the four agencies we reviewed, however, had effectively implemented any of these practices. In a related review of five other agencies in 2005, GAO reported that none of the agencies had the capacity to track who was actually teleworking or how frequently, relying mostly on the number of telework agreements as the measure of program participation. Consistent definitions and measures related to telework would help agencies better manage for results through their telework programs. For example, program management and oversight could be improved by more consistent definitions, such as eligibility. Some information may take additional efforts to collect, for example, on actual usage of telework rather than employees' potential to telework. However, other valuable information may already be available through existing sources, such as the Federal Human Capital Survey. The survey--which is administered biennially--asks federal employees about their satisfaction with telework, among other things. OPM and the Chief Human Capital Officers Council are well-situated to sort through these issues and consider what information would be most useful. The council and OPM could also work together on strategies for agencies to use the information for program improvements, including benchmarking.
This section provides information on avian influenza viruses; avian influenza transmission in and between humans and animals; commercial and noncommercial poultry production in the United States; our prior work on avian influenza; and the responsibilities that USDA, HHS, and Interior have with respect to avian influenza response, research, surveillance, and other related activities. Avian influenza is caused by a “Type A” influenza virus (influenza A). Avian-origin influenza viruses are broadly categorized based on a combination of two groups of proteins on the surface of the influenza A virus: hemagglutinin or “H” proteins, of which there are 16 (H1-H16), and neuraminidase or “N” proteins, of which there are 9 (N1-N9). Many different combinations of “H” and “N” proteins are possible. Each H and N combination is considered a different subtype, and related viruses within a subtype may be referred to as a lineage. Avian influenza viruses can be divided into two groups based on the specific genetic features and severity of the disease they cause in 4- to 8-week-old chickens in a laboratory setting: low pathogenic and the more severe highly pathogenic. Influenza A has the potential to cause human pandemics, regardless of its pathogenicity in poultry. Wild aquatic birds—such as waterfowl, gulls, and shorebirds—are the natural hosts for influenza A viruses. Direct or indirect contact with infected wild birds can expose poultry to avian influenza viruses. Similarly, infected poultry may spread avian influenza into wild bird populations. Avian influenza viruses can also be moved from place to place—including between farms—by people, equipment, vehicles, feed, insects, rodents and other animals, water, and wind-blown dust, as shown in figure 1. Poultry producers may implement biosecurity measures to reduce the risk that diseases such as avian influenza will be transmitted to their flocks.
For example, producers may disinfect vehicles arriving at and leaving a farm or direct employees to disinfect boots and hands before entering a poultry barn. During an outbreak in poultry, additional biosecurity measures may be used to prevent the disease from further spreading. For example, USDA personnel and contractors working to control an outbreak would be expected to restrict their movements among locations to prevent carrying the virus to an uninfected site. One form of highly pathogenic avian influenza has become endemic in several countries, including China, Indonesia, and Vietnam; this means that the virus has become entrenched in poultry populations in those countries. USDA considers highly pathogenic avian influenza a “foreign animal disease” in the United States, meaning U.S. poultry are normally free from the disease. The United States, as a member of the World Organisation for Animal Health, has agreed (through USDA), along with other member countries, to notify the organization and its members of any detection of highly pathogenic avian influenza. Member countries also agree to report cases of low pathogenic H5 or H7 avian influenza found in poultry or other birds because these viruses have the potential to mutate to a highly pathogenic form in poultry and may infect other species. When a country’s poultry tests positive for “notifiable” avian influenza, its international trading partners may restrict trade with that country until the partners believe the virus is eradicated—an outcome that can take many months to achieve. Therefore, when a flock is infected with notifiable H5 or H7 avian influenza, the goal of the poultry industry and government agencies is to control and eradicate the virus as rapidly as possible in order to prevent its spread and regain the confidence of trading partners that any future imports of poultry or poultry products will be virus free. 
To this end, USDA and other federal, state, and industry partners aim to act quickly in the affected area to, among other things: (1) quarantine susceptible animals; (2) implement biosecurity measures; (3) depopulate infected and exposed birds; (4) dispose of contaminated and potentially contaminated materials, including animal carcasses; and (5) clean and disinfect the infected premises. Once the virus is eradicated, USDA, states, and the poultry industry resume routine surveillance for notifiable avian influenza. According to USDA’s Economic Research Service, the U.S. poultry industry is the world’s largest producer and second-largest exporter of poultry. The most recent Census of Agriculture reported 233,770 poultry farms in the United States in 2012, but the U.S. poultry industry consists, in large part, of a relatively small number of large companies that own all aspects of the production process—from the hatchery to the processing facility. The most common types of poultry raised commercially are chickens for consumption (broilers) and chickens that lay eggs (layers), as well as turkeys. There are also poultry that are genetic breeding stock and whose main function is to produce offspring that facilitate mass production and are economical to raise. Additionally, there are poultry raised specifically for producing eggs to make human vaccines. Commercial poultry operations typically raise tens of thousands of birds in confined poultry houses. Such operations can include multiple houses located close to each other. Because of the environment in which commercial birds are raised, if one bird becomes infected with a notifiable avian influenza, hundreds of thousands of birds can be exposed and will need to be depopulated. In addition to poultry raised commercially, numerous types of birds are raised in backyards, with flocks of up to 1,000 birds. 
These “backyard birds” are typically chickens used for personal egg production and consumption; they also can include game birds, such as quail and pheasant. These birds may roam free or be confined to a poultry house. In addition, there are birds in live bird markets—facilities that sell live poultry, typically slaughtered on-site, to the general public—and some are sold at auctions and swap meets. In a June 2007 report, we found that USDA had made important strides to prepare for highly pathogenic avian influenza outbreaks but that incomplete planning and other unresolved issues could slow a response. There were several unresolved issues at the time that, absent advance consideration, could hinder response. For example, we found that disposal of carcasses and materials infected with highly pathogenic avian influenza could be problematic because operators of landfills were reluctant to accept materials infected with even low pathogenic avian influenza because of the perceived human health risk. To increase the likelihood of rapidly containing a highly pathogenic avian influenza outbreak, we made seven recommendations to USDA, including that the agency develop a response plan that identifies critical tasks for responding to an outbreak and address concerns about antiviral medication for humans. USDA generally agreed with our recommendations and took action to implement all seven recommendations. (A list of prior related GAO work is included at the end of this report.) Multiple organizations within USDA support its animal health mission. When notifiable avian influenza outbreaks occur, APHIS is the lead agency within USDA for preventing and responding to animal disease outbreaks. USDA derives its authority to carry out operations and measures to prevent, detect, control, and eradicate notifiable avian influenza, among other diseases, from the Animal Health Protection Act. 
The act authorizes the Secretary of Agriculture to hold, seize, quarantine, treat, destroy, or dispose of any animal, means of conveyance, or object that can harbor the disease, or to restrict their movement in interstate commerce. The act also authorizes the Secretary to transfer necessary funds from other USDA appropriations or available funds to manage an emergency in which a disease of livestock threatens any segment of agricultural production in the United States, in order to arrest, control, eradicate, or prevent the spread of the disease. USDA’s Wildlife Services, a program unit within APHIS, conducts research on wildlife diseases, such as avian influenza, that may affect agriculture and human health and safety. USDA’s Agricultural Research Service conducts research on, among other things, poultry diseases and vaccines for those diseases. For example, the agency published a report in 2014 on experts’ analyses of gaps in knowledge about influenzas in poultry and other animals and about effective countermeasures to control and mitigate outbreaks of disease. HHS is responsible for, among other things, research on human disease, disease surveillance, and vaccine production and distribution. Within HHS, the Influenza Division of CDC’s National Center for Immunization and Respiratory Diseases conducts surveillance of influenza in humans, including human infections caused by viruses with animal origins; the division also conducts laboratory studies on influenza viruses of concern to characterize them and assess their risks to humans. HHS’s Food and Drug Administration (FDA) is responsible for protecting the public health by ensuring the safety and efficacy of veterinary drugs and medical devices and by licensing biological products that are safe, pure, and potent, including vaccines for pandemic influenza. In addition, FDA is responsible for ensuring the safety and proper labeling of more than 80 percent of the U.S. food supply. 
In cooperation with USDA’s Wildlife Services program and state agencies, Interior participates in the federal government’s surveillance of wild migratory birds for the presence of avian influenza and provides leadership and support in the area of wildlife disease research and diagnostics. Interior’s U.S. Geological Survey maintains the National Wildlife Health Center, which identifies, controls, and prevents wildlife losses from diseases; conducts research to understand the impact of diseases on wildlife populations; and devises methods to more effectively manage these disease threats. (See app. I for more detail on the roles of federal departments and their component agencies as related to avian influenza.) Avian influenza viruses have harmed global human and animal health and the U.S. economy. These viruses rarely infect humans, but some viruses may have high rates of mortality when they do. Avian influenza outbreaks have led to the deaths of hundreds of millions of domesticated poultry in dozens of countries, either directly or through depopulation to prevent spread of the disease. The 2014 and 2016 outbreaks among U.S. poultry led to costs to the federal government of about $930 million and additional costs to the U.S. economy of an estimated $1 billion or more. As of March 2017, two lineages of avian influenza—Asian H5N1, which emerged in 1997, and a new strain of H7N9, which emerged in 2013— have together infected more than 2,100 humans and killed more than 900, primarily in Asia and Africa. Neither lineage has developed the capacity to be easily transmissible from birds to humans or from person to person. However, there have been other instances in which influenza A viruses of avian origin have become more easily transmissible and have caused global pandemics that led to large numbers of fatalities in the United States and around the world. 
Table 1 summarizes occurrences of fatal influenza A infections in humans that are known to have or are suspected of having an avian origin. The likelihood that an influenza A virus of avian origin will evolve into a form easily transmissible among humans is small, according to officials from HHS, but if such a change occurs, it could lead to serious disease among humans and possibly another pandemic. For example, the World Health Organization has expressed concern that the Asian lineage H5N1 and H7N9 viruses that have sporadically infected humans in Asia, Northern Africa, and the Middle East could evolve to become more easily transmissible to or between humans and lead to serious disease or another pandemic. According to CDC’s website, of the novel influenza A viruses that are of special concern to public health, the agency rates the Asian lineage H7N9 virus as having the greatest potential to cause a pandemic, as well as potentially posing the greatest risk to severely impact public health. Avian influenza outbreaks—both highly pathogenic and low pathogenic— have led to the deaths of hundreds of millions of domesticated poultry in dozens of countries, either directly or through depopulation to prevent spread of the disease. For example, the H5N1 highly pathogenic avian influenza outbreak that led to human fatalities in China in 1997 also led to the deaths of an estimated 220 million birds in China and Hong Kong. In the United States, outbreaks of highly pathogenic avian influenza have led to the deaths of more than 67 million birds since 1983, with the most recent outbreaks beginning in December 2014 and ending in June 2015 and, in unrelated incidents, in January 2016 and March 2017. (See table 2 for details of known outbreaks of highly pathogenic avian influenza in commercial U.S. poultry.) USDA identified the first U.S. 
cases of the 2014 outbreak of highly pathogenic avian influenza H5 viruses in captive wild birds or backyard flocks in Washington and Oregon in December 2014 and in Idaho the following month. Also in December 2014, USDA identified another subtype, H5N8, in Washington and Oregon. By the time USDA and its state and industry partners eradicated the diseases in June 2015, the related H5N2 and H5N8 viruses had infected poultry flocks on 232 farms in 15 states, with the largest number of affected farms being in Minnesota (110 farms) and Iowa (77 farms). (See fig. 2 for a map showing the 15 states and the approximate number of birds killed or depopulated as a result of the outbreak that began in 2014.) Avian influenza is an extremely infectious and, in some circumstances, fatal disease in poultry, including chickens and turkeys. Avian influenza viruses are classified as either “low pathogenic” or “highly pathogenic” based on their genetic features and the severity of the disease they cause in poultry. Beginning in early December 2014, the Canadian Food Inspection Agency (CFIA), which leads responses to avian influenza outbreaks in Canada, learned of highly pathogenic avian influenza on 13 poultry farms in British Columbia; these included turkey and chicken farms. To eradicate the virus, CFIA depopulated 240,000 birds. Wild birds migrating along the Pacific Flyway were the most likely cause of the outbreak, according to CFIA. In April 2015, CFIA identified highly pathogenic avian influenza in 1 chicken farm and 2 turkey farms in Ontario. The agency controlled the virus by depopulating 79,700 birds. According to a CFIA official, two characteristics of the Canadian poultry industry that facilitate the adoption of biosecurity measures in poultry farms helped limit the size of the 2014 and 2015 Canadian outbreaks.
First, poultry farms in Canada are relatively small compared with those in the United States, which reduces the number of birds infected and the chance that influenza will replicate and spread. Second, Canadian poultry companies are not heavily integrated; therefore, there is little movement of birds, feed, equipment, and people that could carry the virus from one farm to another.

[Table: reported number and type of birds depopulated, by state: 4.7 million turkeys, including breeders, and chicken broilers, breeders, and layers; layers; turkey breeders; 100,000 layers; 30,000 layers; 84,000 broilers; 328,000 broilers; 51,000 breeder chickens; 145,000 turkeys; 54,000 turkeys; 25,600 turkeys; 30,300 game birds; 16,000 breeder chickens; 500,000 turkeys; 20,000 broiler breeders; 29,000 turkeys; 3,000 turkeys; 9,800 broiler breeders; 95,000 quail and 21,000 Peking ducks; 352,114 turkeys and layers; 39,000 turkeys; 84,000 turkeys held in quarantine; 100,585 broiler breeders and backyard birds; and 24,700 broiler breeders and backyard birds.] The Indiana outbreak was restricted in size and scope to a single county and 12 premises. This table reports on poultry at sites associated with low pathogenic avian influenza. In all, more than 414,000 birds were affected.

According to the Centers for Disease Control and Prevention (CDC), in November 2016, a low pathogenic avian influenza A (H7N2) virus infected cats in New York City animal shelters. Some affected cats showed mild flu-like symptoms such as sneezing or runny noses, and 450 were quarantined until they no longer showed symptoms of infection. A veterinarian collecting respiratory samples from exposed cats contracted the virus and subsequently recovered. According to CDC’s website, known human infections with H7N2 are uncommon and have not led to deaths.
However, the agency noted that finding avian influenza virus in an unexpected animal, such as a cat, is always concerning because it means the virus has changed in a way that may pose a new health threat. The effect of avian influenza on the health of other animal species varies. Avian influenza generally causes few signs of illness and is rarely fatal when it circulates in waterfowl and shorebirds. Because wild birds are rarely sickened by the virus, they are able to move it efficiently along migratory flyways. Interior officials told us, however, that incidents in which wild birds have been killed by highly pathogenic avian influenza have become more common; these officials noted in particular incidents in Asia involving the H7N9 virus. According to the World Health Organization, some mammal species, including swine, can be infected with avian influenza but may show few, if any, observable symptoms, and others, such as ferrets, may experience high morbidity and mortality. Infections in waterfowl and swine are of concern because they can spread the virus to poultry and humans, according to the World Organisation for Animal Health. Swine can also serve as “mixing vessels” in which different influenza viruses come into contact, exchange genetic material, and possibly produce a new virus that is more easily transmissible to or between humans. The H1N1 virus that emerged in 2009 contained gene segments from swine, avian, and human influenza viruses. According to the World Health Organization, the virus caused a global pandemic with up to 550,000 human deaths worldwide from April 2009 to April 2010; the CDC estimates that up to 18,000 of those human deaths occurred in the United States. In addition, a 1976 outbreak of H1N1 swine influenza at Fort Dix, New Jersey, infected up to 230 humans and killed 1 person.
More recently, according to CDC documents, more than 360 people in the United States were infected with influenza A (H3N2) variant influenza from August 2011 through September 2016, with one fatality. Also according to CDC, these infections have mostly been associated with prolonged exposure to pigs at agricultural fairs. Outbreaks of avian influenza in poultry in the United States in 2014 and 2016 cost the federal government about $930 million, according to USDA documents, and the 2014 outbreak cost the economy from $1 billion to $3.3 billion, according to two studies by USDA and a private firm. According to USDA budget documents for fiscal years 2015 and 2016, the agency obligated a total of about $869 million for the responses to the 2014 outbreak in 15 states, the January 2016 outbreak in Indiana, and a May 2016 outbreak of low pathogenic avian influenza in Missouri. As shown in table 4, the largest portion of these obligations was for response operations, including depopulation, disposal, composting, and cleaning and disinfection. Indemnity payments to poultry producers were another large category of obligations. Nearly all of the funds were transferred from the Commodity Credit Corporation. In addition, USDA obligated about $60 million in funds transferred from the Commodity Credit Corporation in fiscal year 2015 on fixed costs— such as salaries, benefits, and supplies—and other activities related to preparing for the possible return of the virus in the fall of 2015, such as wild bird surveillance and vaccine research. With respect to the U.S. economy, two separate analyses have examined and produced national estimates of the economic impacts from the highly pathogenic avian influenza outbreak that began in 2014. A national analysis conducted by USDA economists measured the 2014 outbreak’s impact to U.S. livestock and feed sectors, including poultry and poultry products, at $1 billion. 
The estimates in the analysis take into consideration producer and consumer behavior as prices and production changed in response to the reduction in production and the trade embargoes linked to the outbreak. The effects were measured throughout the course of the outbreak, allowing for estimates based on changes over time. According to this analysis, U.S. turkey producers lost an estimated $214 million in sales (a decline of 6.8 percent from 2014 levels), and broiler producers lost $276 million (a decline of 1.5 percent from 2014). While broilers were only negligibly affected by the virus, as separately reported by APHIS, the sector still suffered losses because of large decreases in demand from countries that extended full or partial bans on poultry and poultry products, including broilers, from the United States. In addition, because crops (e.g., corn and soybeans) are essential to the poultry sector, those commodities also experienced losses estimated at $373 million because of the reduction in number of birds fed. On the other hand, the reduced egg supply caused by the outbreak raised the price of eggs for consumers and, according to the analysis, led to an increase of $53 million in sales for U.S. egg and layer producers (an increase of 26.7 percent from 2015 levels). The second national analysis contained a preliminary estimate of $3.3 billion in total economy-wide losses through June 29, 2015, from the 2014 outbreak of highly pathogenic avian influenza. This included direct losses to the turkey and egg processing sectors of $1.6 billion, with losses of $530 million for turkeys and $1.04 billion for laying hens. The $3.3 billion estimate included macroeconomic impacts due to losses to other indirect sectors, such as retail and foodservice, but did not include activities such as clean-up, restocking, or future lost production while the producer prepares to resume production at a pre-disease level. 
Also, unlike the first analysis noted above, this analysis did not include consumer or producer responses to changes in prices or production, such as increases in egg prices due to production losses. Starting from the beginning of the avian influenza outbreak in December 2014, 18 trading partner nations imposed bans on all shipments of U.S. poultry and products, and 38 trading partners imposed partial, or regional, bans on shipments from states or parts of states experiencing outbreaks. According to USDA officials, as of January 2017, China, Kyrgyzstan, Russia, and Thailand continued to impose national bans on U.S. poultry imports that were attributed to concerns about highly pathogenic avian influenza, and Jamaica had imposed a state ban on U.S. poultry imports from several Midwestern states. Total U.S. poultry and product exports declined in value from about $6.4 billion in 2014 to $4.9 billion in 2015. The largest of these declines was from the U.S. broiler meat industry, which fell from $4.1 billion to $3.0 billion over that period. While a USDA report attributed part of this decline in exports to a strong U.S. dollar, the report also noted that the avian influenza outbreak that began in 2014 caused the poultry industry to lose market share to other poultry exporters such as Brazil. According to a September 2016 USDA report, export levels of broiler chickens—the largest poultry export sector—were modestly rebounding in 2016 from the levels that followed the end of the highly pathogenic avian influenza outbreak in 2015, although these 2016 levels were still at their lowest since 2011. In addition, according to the USDA report, turkey exports remained weak compared to the pre-avian influenza trends, and egg exports in July 2016 were 6 percent lower than the previous year. USDA noted that some major importing countries had lifted trade bans since the 2014 outbreak and that other factors, such as the strength of the dollar, have also affected exports. 
USDA officials involved in the response also said that the negative effect on U.S. poultry exports was partially mitigated by the fact that some countries imposed regional, rather than national, bans on U.S. poultry products. In addition, the agency’s implementation of secure food supply plans allowed poultry producers to move non-infected products during the outbreaks. A goal of these plans is to continue business operations from locations that are not infected with disease. After the March 2017 detections of highly pathogenic H7N9 avian influenza in two commercial flocks in Lincoln County, Tennessee, numerous countries imposed trade restrictions on U.S. poultry exports. For example, the Republic of Congo (Brazzaville) imposed restrictions on poultry imports from the entire United States. Some countries, such as South Africa, Taiwan, and Uruguay, placed restrictions on poultry imports from the entire state of Tennessee while others, such as Jamaica and the European Union, imposed restrictions on certain geographic areas or counties. Similarly, after detections of low pathogenic avian influenza in March 2017, countries placed restrictions on imported poultry from all or parts of Alabama, Georgia, Kentucky, and Wisconsin. USDA identified lessons learned from its responses to the 2014 and 2016 highly pathogenic avian influenza outbreaks and has taken numerous corrective actions to address them. However, USDA does not have plans for evaluating the extent to which the corrective actions have helped resolve the problems that they were intended to address. After the outbreaks of highly pathogenic avian influenza in 2014 and 2016, USDA identified lessons learned related to its response activities and has taken numerous corrective actions to address those lessons learned. 
To identify the lessons learned from both the widespread outbreak that began in 2014 and the limited 2016 outbreak in Indiana, USDA proactively collected feedback about its performance during and after the outbreaks from federal and state animal health officials and from industry representatives involved in the responses. The agency then summarized this feedback into after action reports that included observations about strengths and weaknesses in the responses. The identified lessons learned covered a wide range of response areas, including the depopulation of infected birds, disposal of bird carcasses, and surveillance of flocks for infection. For example, according to USDA documents, rapid depopulation is critical to help prevent or mitigate the spread of the disease by eliminating infected, exposed, or potentially exposed animals. However, USDA noted that during the 2014 outbreak, there were substantial delays in completing depopulation, with producers reporting that it took as long as 11 days to begin depopulation on many premises. USDA developed a corrective action program to identify, prioritize, and implement corrective actions that are intended to address the root causes of the lessons learned. The agency identified 308 corrective actions across 15 response areas and created a corrective action database to track the actions (see table 5 for the list of 15 response areas and examples of lessons learned and corrective actions associated with each area). USDA prioritized the corrective actions according to their implications for future outbreaks and, for the highest priority actions, their time frame for completion.
Specifically, USDA defined priority 1 corrective actions as those that would have immediate, critical implications for a future outbreak and that could be completed in less than 1 year; priority 2 actions as those that would have positive implications for a future outbreak or would have immediate, important implications but that may not be completed within 1 year; and priority 3 actions as those that are under consideration or that would have less critical implications for a future outbreak. According to our review and summary of USDA’s corrective action database, the agency has marked as completed about 70 percent of its corrective actions, including about 86 percent of the priority 1 actions (see table 6). As of January 2017, USDA did not have time frames for completing about 82 percent of the uncompleted priority 2 and priority 3 corrective actions. For example, USDA has not established a time frame for completing a priority 2 corrective action related to depopulation that calls for the agency to develop training materials for contracted responders to help ensure there are enough skilled personnel available for depopulation. This action is marked as “in progress” in the database. When we raised this issue during the course of our review, USDA officials responsible for the database said they are working with the groups in charge of taking corrective actions to identify time frames for the remaining priority 2 and priority 3 actions, but they said that it is complex and difficult to do so in light of other agency disease response activities. For example, they said that responding to an outbreak of New World screwworm in Florida in fall 2016 had caused the agency to pause some of its efforts to address corrective actions from the highly pathogenic avian influenza outbreaks. Nonetheless, agency officials acknowledged that time frames are important and said they will continue to develop them. 
USDA has taken steps to implement corrective actions, but it does not have plans to evaluate the extent to which completed corrective actions have effectively helped to resolve the problems the agency identified in its responses to the recent outbreaks. We have previously found that agencies may use evaluations to ascertain the success of corrective actions, and that a well-developed plan for conducting evaluations can help ensure that agencies obtain the information necessary to make effective program and policy decisions. An evaluation plan should include, among other things, evaluative criteria or comparisons, or how or on what basis program performance will be judged or evaluated. We also found that one approach agencies can use to evaluate changes in events that occur infrequently and unpredictably, such as disease outbreaks, is to conduct simulations or exercises to assess how well an agency’s plans anticipate the nature of its threats and vulnerabilities. Homeland Security Exercise and Evaluation Program guidance, which USDA officials told us they used in developing the corrective action program, states that agencies should put in place a system to test and validate corrective actions that have been implemented. This guidance states that agencies can identify the corrective actions that require validation and then conduct exercises to test whether those corrective actions have led to improvements. In our review of a nongeneralizable sample of 10 completed corrective actions designated as priority 1, it was unclear to what extent such actions were effective because, while USDA marked in its database that it had completed the corrective actions, it had not evaluated the extent to which these actions achieved the desired outcome. For example, one lesson learned that USDA identified was that many producers lack a strong culture of biosecurity. 
However, although USDA completed corrective actions associated with that lesson—creating a joint biosecurity website with the U.S. Poultry and Egg Association and putting greater emphasis on biosecurity in conferences with producers—it did not evaluate to what extent taking these actions created a strong culture of biosecurity among producers. In another lesson learned, USDA identified that states and producers encountered impediments in transporting bird carcasses to landfills, such as federal and state rules restricting the movement of bird carcasses along transportation routes in close proximity to other producers. USDA completed corrective actions associated with that lesson—providing guidance, training, and encouragement to states and producers to develop disposal plans—but did not evaluate to what extent taking these actions helped overcome the impediments observed. In addition, depopulation experts we interviewed raised concerns about whether USDA’s planned and completed corrective actions will effectively address the challenges with depopulation experienced during the 2014 and 2016 outbreaks. For example, these experts questioned whether a sufficient number of federal employees and contracted responders have been trained in using depopulation equipment to address a lesson learned that there were not enough skilled personnel available for depopulation during recent outbreaks. USDA documents state that the 2016 outbreak provided an opportunity to see that some of the corrective actions taken following the 2014 outbreak resulted in an improved response. For example, according to USDA’s after action report on the 2016 outbreak, the changes that USDA made to the process for compensating poultry producers for losses after the 2014 outbreak resulted in a faster and more efficient process during the 2016 outbreak. 
Nonetheless, USDA officials acknowledged that they are not certain whether completing other corrective actions will be sufficient to address the lessons learned from both outbreaks. They acknowledged the importance of evaluating corrective actions to determine whether additional steps are needed but said that the agency does not yet have plans to do so. Agency officials also told us that evaluating the effectiveness of these corrective actions will need to be a continuous process and should be considered within the broader context of USDA’s emergency preparedness for disease response. For example, USDA officials told us they intend to incorporate lessons learned and corrective actions from the agency’s response to the 2016 New World screwworm outbreak into the corrective action database for highly pathogenic avian influenza, so that the database becomes a broader tool that the agency can use to track corrective actions related to its overall disease response efforts. By developing a plan for evaluating completed corrective actions and, as part of this plan, considering whether any completed corrective actions require validation through simulations or exercises, USDA could better determine the effectiveness of these actions. On the basis of stakeholders’ views and our analysis of federal efforts to respond to outbreaks, we identified ongoing challenges and associated issues that federal agencies face in mitigating the potential harmful effects of avian influenza. These challenges are in protecting domesticated poultry from the threat of avian influenza that circulates naturally in wild birds and in relying on voluntary actions by a wide range of poultry producers to prevent poultry flocks from becoming infected. 
Federal agencies also face other issues associated with mitigating the potential harmful effects of avian influenza: the virus could infect poultry needed to produce eggs used in manufacturing critical human vaccines against pandemic influenza, and federal funding will soon be exhausted for a voluntary surveillance program that gathers information about the presence of influenza viruses in swine that could pose a threat to human health. We identified two ongoing challenges that federal agencies face in mitigating the potential harmful effects of avian influenza. First, federal agencies are challenged in protecting domesticated poultry from avian influenza because the disease naturally circulates in migratory birds, which may spread the disease. Second, federal efforts to prevent poultry flocks from infection are challenged because these efforts rely on voluntary biosecurity measures by poultry producers. Federal agencies face an ongoing challenge in protecting domesticated poultry from avian influenza because the disease naturally circulates in migratory birds, such as ducks and geese, which are hard to control and which may come into contact with poultry. Because of their migratory behavior, wild birds infected with avian influenza can spread the disease across long distances, including from as far away as Asia. Federal agencies and others are authorized under the Migratory Bird Treaty Act to sample ducks, geese, and other migratory birds to confirm the presence of an infectious disease, including influenza. According to Interior officials, the Act also provides agencies the authority to control migratory birds infected with avian influenza, but the officials noted that experience has shown that such efforts are ineffective. As reported in a text on avian influenza, humans have had and will continue to have minimal impact on control of low pathogenic avian influenza viruses in wild bird populations. 
Use of Vaccines to Eradicate Avian Influenza According to the U.S. Department of Agriculture (USDA), poultry vaccination has been part of control or eradication programs for avian influenza viruses in a number of countries. Effective vaccination can decrease transmission between animals by decreasing their susceptibility to infection and reducing the amount of virus an infected animal may shed. Vaccination has been used in some successful eradication campaigns for low pathogenic avian influenza outbreaks in the United States but never for highly pathogenic avian influenza outbreaks such as those that occurred in 2014 and 2016, according to USDA. Stakeholders we interviewed characterized the decision to use poultry vaccines to control and eradicate the 2014 and 2016 outbreaks as having both scientific and economic components. From a scientific perspective, a vaccine needs to be able to protect against a specific influenza virus to be effective and merit use. In June 2015, USDA announced that the vaccines available at the time were not well matched to the virus that was infecting poultry in numerous states. As an example of the economic implications of vaccines, USDA also announced in June 2015 that significant trading partners had indicated that, if USDA began vaccinating, they would ban all U.S. poultry and egg exports until they could complete a risk assessment. For these and other reasons, USDA decided against using vaccines. USDA’s Agricultural Research Service continues to develop enhanced vaccines for use in poultry against avian influenza. A 2014 USDA report concluded the poultry industry needs highly effective vaccines that can prevent transmission and that can be mass-delivered in water, in eggs, or in feed. Although federal agencies are unlikely to control avian influenza viruses in wild birds, they can monitor the viruses circulating in this population. Specifically, USDA, Interior’s U.S. Geological Survey and U.S.
Fish and Wildlife Service, and state and tribal agencies collaborated on a national program for wild bird surveillance that sampled more than 283,000 wild birds from April 2006 through March 2011, when the program ended. The federal effort resumed in December 2014 in response to the outbreaks of highly pathogenic avian influenza on the West Coast of North America. In response to the outbreaks, personnel from USDA and Interior re-convened the Interagency Wild Bird Avian Influenza Steering Committee in January 2015. The Steering Committee developed a wild bird surveillance plan for avian influenzas that may pose a threat to human health or domestic poultry. The plan encourages federal and state agencies and others to use a variety of sampling methods to test live and dead wild birds for avian influenza. According to data recorded as of March 24, 2017, the surveillance program had collected test results from more than 88,000 wild birds since December 2014. The data for that time period show that the program detected 102 cases (about 0.12 percent) of highly pathogenic avian influenza from the same lineage that caused the 2014 and 2016 outbreaks in the United States (see app. II for details on the results of this surveillance program). On the other hand, monthly detection rates for low pathogenic avian influenza A viruses in wild birds were often above 10 percent for those tested. According to USDA and Interior officials, continued monitoring of wild birds will help identify the presence of avian influenza subtypes and help agencies to mitigate the persistent challenge that wild birds pose to domesticated poultry. The state veterinarians we interviewed from California, Indiana, Iowa, Minnesota, North Carolina, and Ohio generally agreed with the need for wildlife surveillance. 
Federal efforts to ensure routine biosecurity and prevent poultry flocks from becoming infected with avian influenza face an ongoing challenge because these efforts depend on voluntary actions by a wide range of poultry producers. While USDA’s approach to addressing this challenge varies for different types of poultry producers, such as those who manage large commercial operations and those who manage small backyard flocks, the approach primarily relies on using incentives and education to promote voluntary actions. According to USDA officials, state stakeholders, and poultry industry representatives we interviewed, sound biosecurity practices are important for all types of poultry facilities. This is also evident from the 2014 and 2016 outbreaks of highly pathogenic avian influenza, which affected large commercial and small backyard flocks, including turkeys, laying hens, and ducks. USDA found that lapses in routine preventative biosecurity allowed the initial introduction of disease and enabled it to spread from farm to farm. To gather information on biosecurity practices, USDA analyzed self-assessments completed by 850 poultry producers on the status of their biosecurity practices. While large producers generally indicated more frequently than small producers that they had certain practices in place, the nongeneralizable data showed that important practices were not consistently in place. For example, less than 60 percent of respondents had biosecurity officers or training in place. According to USDA’s self-assessment document, biosecurity officers and training could help reduce the threat of infection by improving biosecurity practices. Similarly, less than 60 percent of respondents had delineated lines of separation in their facilities to reduce the risk of contamination. Lines of separation are intended to reduce the risk that contaminated materials come into contact with poultry.
In addition, less than 60 percent of respondents said that they had practices in place for personnel to shower or change into clean clothes immediately prior to arriving at a poultry site, or upon arrival, to reduce the risk of introducing an avian influenza virus. While USDA can impose biosecurity measures during its response to an emergency, the agency does not have the authority to require producers to routinely employ preventative biosecurity measures. Instead, USDA relies on producers to take voluntary action to prevent the introduction of avian influenza and other diseases. Toward that end, USDA recently initiated two interrelated efforts—independent of the corrective action program described above—that may help overcome this challenge among commercial farms. In addition, USDA has continued its efforts, through public education and outreach, to encourage backyard poultry farmers to practice biosecurity. USDA’s first initiative to improve biosecurity involves linking producers’ eligibility for indemnity payments to a biosecurity plan. Specifically, USDA issued an interim rule in February 2016 requiring large poultry producers seeking indemnity payments in the future to provide a statement that, at the time highly pathogenic avian influenza was detected in their facilities, they had in place and were following a written biosecurity plan to address the potential spread of the virus. According to USDA officials, this regulatory change provides a strong incentive to members of the poultry industry to have a biosecurity plan in place. As of February 2017, USDA continues to operate under the interim rule issued in February 2016. The second and related initiative concerns changes to the National Poultry Improvement Plan. According to USDA officials, poultry industry representatives who commented on the interim indemnity rulemaking suggested that the agency use the National Poultry Improvement Plan to promote biosecurity. 
The improvement plan is a voluntary program administered by USDA under which participating commercial poultry flocks are tested to ensure they are free from diseases, including H5 and H7 subtypes of avian influenza. If a flock tests negative for avian influenza, USDA certifies to trading partners and others that the flock is free of the disease. In September 2016, delegates to the program—who included poultry industry representatives—gave interim approval to add a set of 14 biosecurity principles to the plan’s national program standards. The biosecurity principles call for, among other things, training poultry producers about biosecurity; taking steps to protect against infection from wild birds, rodents, and insects; cleaning vehicles and equipment to reduce risk; and managing manure and litter to prevent the exposure of susceptible poultry to disease agents. Those principles would apply to the poultry producers who participate in the program; according to USDA officials, most commercial poultry producers participate. According to USDA officials, these initiatives will encourage commercial producers to adopt preventative biosecurity measures. Commercial poultry flocks may also be raised outdoors and thus are at greater risk of contact with wild birds infected with avian influenza. For example, turkeys and chickens must have access to outdoor space to be certified by USDA as organically raised. Organically raised poultry are a rapidly growing segment of the industry, according to USDA documents. Stakeholders told us that they were concerned that producers of organically raised poultry do not have to follow the same biosecurity principle—namely, keeping birds indoors—that producers of conventional poultry are encouraged to follow. USDA has acknowledged that organically raised birds are at a greater risk than birds raised indoors. 
USDA’s policy is that if it is determined that temporary confinement of birds is needed to protect the health, safety, and welfare of organic flocks, then producers and certifiers may work together to determine an appropriate method and duration of confinement of such flocks without a loss of organic certification. Stakeholders we interviewed told us backyard poultry flocks are a concern for contracting and spreading avian influenzas to commercial poultry because these flocks are raised outdoors and are more likely to come into contact with wild birds. According to USDA’s website, raising backyard poultry is a growing trend across the United States. USDA manages the “Biosecurity for Birds” campaign to help raise awareness among backyard, hobby, and pet bird owners about the risks of avian influenza. The biosecurity principles that USDA promotes to backyard poultry producers include separating the domesticated flock from other birds, including game birds and wild waterfowl, because the latter can carry disease. According to an agency document, USDA works cooperatively with state animal health officials and the poultry industry to look for disease in breeding flocks, in backyard poultry, and at live bird markets, livestock auctions, poultry dealer locations, and small bird sales, fairs, and shows. We identified two other issues that federal agencies face associated with mitigating the potential harmful effects of avian influenza. First, outbreaks of the disease threaten the poultry that produce the eggs used in the production of human pandemic influenza vaccine. Second, funding for a voluntary surveillance program that gathers data on influenza A viruses in swine that could pose a threat to human health will be exhausted in fiscal year 2017. Protecting the chickens that lay the eggs needed to produce human pandemic influenza vaccines is an issue for federal agencies because these birds, like others, are susceptible to avian influenza. 
HHS has an obligation under the National Strategy for Pandemic Influenza to promote capabilities that assure a pandemic vaccine can be produced at a U.S.-licensed influenza vaccine facility at any time of the year, without limitations imposed by the availability of essential supplies. Pandemic influenza vaccines may be manufactured using several technologies. To date, the most commonly used technology has relied on fertilized eggs as a raw material. According to HHS officials, 90 to 95 percent of the current national stockpile of pandemic influenza vaccines is derived from eggs. According to an HHS official, the agency has a stockpile of egg-based and cell-based pre-pandemic influenza vaccines supplied by four companies. Of the four companies, however, only one has an egg-based vaccine manufacturing facility in the United States. If an influenza pandemic is declared, according to this official, the U.S. government may not be able to rely on foreign countries to allow exports of pandemic vaccine because each country will likely prioritize those vaccines for its own population. Therefore, the U.S. government considers the one U.S.-based company as the only dependable manufacturer for producing egg-based vaccine for rapid pandemic mitigation. This company contracts with suppliers to provide it with the necessary egg supply. HHS officials and company representatives told us that the company has an egg production network that includes flocks located on numerous farms. According to company officials, protecting the company’s current network of egg suppliers is critical because the company cannot rely on other suppliers for eggs if its own network is compromised; the officials told us the company would not be able to make vaccine with eggs raised outside its control. According to HHS officials, the agency recognizes that avian influenza poses a risk to the production of pandemic influenza vaccines.
To address that risk, HHS has contracted with the company to protect the egg supply chain and ensure a year-round supply of vaccine-quality fertilized eggs for the company to use in its vaccine manufacturing process. HHS awarded the current 3-year, $42 million contract for a year-round supply of eggs in September 2014. The contract requires that the company have a risk management plan; the company’s plan contains both a physical security program and a biosecurity program to provide protection against man-made and natural threats. HHS officials said they are confident that the company’s biosecurity program is sound. According to company representatives, the company mitigates risks by limiting the density of the birds on each farm and by using farms that are not in close proximity. In addition, company employees routinely audit the flocks and incubation facilities, and the company periodically tests the flocks of layer hens for avian influenza using USDA’s National Poultry Improvement Plan testing procedures. Furthermore, according to HHS officials, the agency conducts annual security audits of a portion of the facilities in the company’s network. According to company representatives, the company has standard operating procedures for biosecurity in its network of egg suppliers that are based on state department of agriculture guidelines. Company representatives said that because the company contracts with its suppliers and can require specific conditions, it has more control over what is done on the farms and in the incubation facilities than it does with farms that only comply with either USDA or state agriculture department requirements. While the 2014 and 2016 outbreaks did not affect this egg supply, a previous outbreak of highly pathogenic avian influenza caused the deaths of laying hens and reduced the supply of eggs used to produce human vaccines by about 50 percent. HHS has sought to diversify vaccine production through technologies that are not egg-based. 
Specifically, HHS has promoted cell-based and recombinant technologies to produce vaccine. According to agency officials, these technologies will help offset the risk avian influenza poses to vaccine production. We have reported separately on federal efforts to diversify the pandemic vaccine supply. According to HHS’s website, three Centers for Innovation in Advanced Development and Manufacturing (CIADM)—in Maryland, North Carolina, and Texas—will provide significant domestic infrastructure capable of producing medical countermeasures to protect Americans from the health impacts of bioterrorism as well as pandemic influenza and other diseases. However, the centers are not yet able to manufacture the contracted quantity of pandemic influenza vaccine. According to HHS’s Office of the Assistant Secretary for Preparedness and Response, as of February 2017, it was yet to be determined when the three CIADMs would be fully operational, but contractor officials indicated that one of the three is expected to become fully operational in 2017. USDA and HHS have collaborated to monitor swine for influenza A viruses because swine may act as a “mixing vessel” in which influenza viruses recombine to pose new threats to human health. However, the agencies face the issue that funding for a voluntary surveillance program will be exhausted in fiscal year 2017. According to HHS officials, this surveillance program is the only federal source of data for understanding the types of influenza circulating in swine. Because influenza is endemic in swine worldwide, swine producers are not required to report the disease to USDA, and USDA is not required to report swine influenza to the World Organisation for Animal Health. However, since 2009, when H1N1 swine-origin influenza caused a global human pandemic, USDA has used funding from HHS to collect voluntary data from the U.S. swine industry on the incidence of swine influenza. 
As we reported in May 2013, there are limitations in the reliability of the data collected by this voluntary program; in particular, the data may not accurately represent all of the influenza virus strains circulating across the country. Nevertheless, this program has provided useful data on the presence of various subtypes of influenza virus in swine herds, according to HHS and USDA officials. Moreover, representatives from the pork industry we interviewed stated the surveillance data are beneficial to both public and animal health. However, according to USDA officials, funding for the swine surveillance program is expected to be fully expended in fiscal year 2017. USDA officials said that the agency will once again seek additional funding for the program in fiscal year 2018 and beyond, through appropriated funding, but that funding beyond fiscal year 2017 is uncertain. In addition, the U.S. Animal Health Association provided support for the program’s continuation through a 2016 resolution asking Congress to appropriate funding for the swine surveillance program; furthermore, according to its representative, the National Pork Producers Council has advocated for continued funding for the program. According to HHS officials, the agency will continue to be supportive of USDA’s efforts to continue the program. It is too early to say whether USDA will continue to gather data on influenza in swine beyond fiscal year 2017. The federal government has taken important steps to mitigate the significant risks posed by avian influenza to the health of humans, animals, and the economy. However, experience in the United States and around the world has shown that it is challenging to protect domesticated poultry from infection and control the disease when it does strike. USDA proactively identified numerous lessons learned, across a wide range of response areas, from the 2014 and 2016 outbreaks of avian influenza, and it identified more than 300 associated corrective actions. 
USDA has marked as completed about 70 percent of these actions, but it does not have plans for evaluating the extent to which its completed corrective actions have effectively helped to resolve the problems the agency identified in its responses to the 2014 and 2016 outbreaks. By developing a plan for evaluating completed corrective actions and, as part of this plan, considering whether any completed corrective actions require validation through simulations or exercises, USDA could better assess the effectiveness of these actions. This is particularly important in light of new outbreaks among commercial poultry in 2017 that continue to challenge the nation’s efforts to control this devastating disease. We recommend that the Secretary of Agriculture direct the Administrator of the Animal and Plant Health Inspection Service to develop a plan for evaluating completed corrective actions to determine their effectiveness and, as appropriate, consider whether any completed corrective actions require validation through simulations or exercises. We provided a draft of this report to USDA, HHS, and Interior for review and comment. USDA provided written comments on the draft, which are presented in appendix III, and provided technical comments, which we incorporated as appropriate. USDA agreed with our recommendation. HHS and Interior did not provide written comments but provided technical comments, which we incorporated as appropriate. In its written comments, USDA said that APHIS agreed with our recommendation to develop a plan for evaluating completed corrective actions to determine their effectiveness. Further, USDA said that APHIS will incorporate simulations and exercises in its plan and that, in the event of an actual outbreak, APHIS will evaluate the effectiveness of the response through an after action report. 
Finally, USDA said that APHIS will continually review the criteria and hierarchy of corrective actions, both completed and ongoing, with respect to avian influenza policies, emergency management activities, and critical communications with states, tribes, poultry producers, and poultry industry partners. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Health and Human Services, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Numerous federal agencies have responsibilities related to reducing the risks posed by avian influenza to human health, animal health, and the economy. Table 7 provides a summary of those responsibilities for agencies within the U.S. Department of Agriculture, the Department of Health and Human Services, and the Department of the Interior. In response to the outbreak of highly pathogenic avian influenza in December 2014, personnel from the U.S. Department of Agriculture (USDA) and the Department of the Interior re-convened the Interagency Wild Bird Avian Influenza Steering Committee in January 2015. The steering committee developed a wild bird surveillance plan for avian influenzas that may pose a threat to human health or domestic poultry. 
The plan encourages federal and state agencies and others to use a variety of sampling methods to test live and dead wild birds to detect both low pathogenic and highly pathogenic avian influenza. According to data recorded as of March 24, 2017, the surveillance program had collected test results from more than 88,000 wild birds since December 2014. The number of monthly highly pathogenic avian influenza detections was highest during the period from December 2014 through June 2015 before declining during the period from July 2015 through March 24, 2017, despite an increase in testing over this period of time. In total, the program detected highly pathogenic avian influenza in 102 birds, or 0.12 percent of those tested. (See table 8 for a summary of the monitoring data.) The surveillance program also detected low pathogenic influenza A virus in some wild birds; program data show that the percentage of wild duck samples that tested positive for low pathogenic influenza A virus in each month ranged from about 7 percent to about 30 percent in 2015 and 2016 (data not shown in table 8). The state veterinarians we interviewed from six states (California, Indiana, Iowa, Minnesota, North Carolina, and Ohio) generally agreed with the need for wildlife surveillance. At the same time, while monitoring can serve as an early warning system to alert poultry owners and public health agencies, among others, of the presence of influenza A viruses in wild birds, it cannot eliminate wild birds as potential sources of the virus. In addition to the individual named above, Mary Denigan-Macauley (Assistant Director), Kevin Bray, Ross Campbell, Barbara El Osta, Kevin R. Fish, Katherine Killebrew, Erik Kjeldgaard, Cynthia Norris, and Amber Sinclair made key contributions to this report. Ashley Grant, Sara Sullivan, Kiki Theodoropoulos, and Rajneesh Kumar Verma also made important contributions to this report. 
Biodefense: The Nation Faces Multiple Challenges in Building and Maintaining Biodefense and Biosurveillance. GAO-16-547T. Washington, D.C.: April 14, 2016.
Emerging Animal Diseases: Actions Needed to Better Position USDA to Address Future Risks. GAO-16-132. Washington, D.C.: December 15, 2015.
Biosurveillance: Challenges and Options for the National Biosurveillance Integration Center. GAO-15-793. Washington, D.C.: September 24, 2015.
National Preparedness: HHS Has Funded Flexible Manufacturing Activities for Medical Countermeasures, but It Is Too Soon to Assess Their Effect. GAO-14-329. Washington, D.C.: March 31, 2014.
National Preparedness: HHS Is Monitoring the Progress of Its Medical Countermeasure Efforts but Has Not Provided Previously Recommended Spending Estimates. GAO-14-90. Washington, D.C.: December 27, 2013.
Homeland Security: An Overall Strategy Is Needed to Strengthen Disease Surveillance in Livestock and Poultry. GAO-13-424. Washington, D.C.: May 21, 2013.
Influenza: Progress Made in Responding to Seasonal and Pandemic Outbreaks. GAO-13-374T. Washington, D.C.: February 13, 2013.
National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011.
Influenza Pandemic: Lessons from the H1N1 Pandemic Should Be Incorporated into Future Planning. GAO-11-632. Washington, D.C.: June 27, 2011.
Influenza Vaccine: Federal Investments in Alternative Technologies and Challenges to Development and Licensure. GAO-11-435. Washington, D.C.: June 27, 2011.
National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011.
Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011.
Influenza Pandemic: Monitoring and Assessing the Status of the National Pandemic Implementation Plan Needs Improvement. GAO-10-73. Washington, D.C.: November 24, 2009.
Influenza Pandemic: Sustaining Focus on the Nation’s Planning and Preparedness Efforts. GAO-09-334. Washington, D.C.: February 26, 2009.
Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.
Avian influenza is an extremely infectious and potentially fatal disease in poultry. In 2014 and 2016, two outbreaks of avian influenza led to the deaths of millions of poultry in 15 states and prompted emergency spending to control the disease. While the health risk to humans is low, humans have been infected with these viruses, sometimes fatally. A spike in fatal human infections in Asia began in late 2016. GAO was asked to review several issues related to avian influenza. This report examines (1) how outbreaks of avian influenza have affected human health, animal health, and the U.S. economy, (2) the extent to which USDA has taken actions to address any lessons learned from its responses to the outbreaks in 2014 and 2016, and how it plans to evaluate the actions' effectiveness, and (3) ongoing challenges and associated issues, if any, federal agencies face in their efforts to mitigate the potential harmful effects of avian influenza. GAO reviewed global and domestic data on the effects of avian influenza and USDA reports and corrective action data associated with its responses to the recent outbreaks, and interviewed federal officials and stakeholders from state agencies and the poultry industry. When avian influenza outbreaks occur, they can have significant effects on human and animal health and the U.S. economy. With regard to human health, avian influenza rarely affects humans, but the World Health Organization estimates that two particular types of the virus have caused more than 2,100 human infections and more than 800 deaths since 1997, primarily in Asia and the Middle East. With regard to animal health, avian influenza outbreaks can lead to large numbers of poultry deaths as a result of efforts to control and prevent the spread of the disease. For example, from December 2014 to June 2015, more than 50 million birds were destroyed in the largest outbreak in U.S. history. The effect of avian influenza on the health of other animal species varies. 
Swine are susceptible to both avian and human influenza viruses that, if mixed, could create a new virus to which humans are vulnerable. An outbreak can also have significant economic consequences; for example, the economic impacts of the 2014 outbreak in the United States have been estimated to range from $1.0 to $3.3 billion. USDA identified 15 areas with lessons learned from its responses to the 2014 and 2016 outbreaks of avian influenza and 308 associated corrective actions. For example, one lesson learned in the area of depopulation (mass culling of flocks) is that there were not enough skilled personnel available for depopulating infected poultry, leading to delays and possibly increasing the spread of disease. USDA has identified as completed about 70 percent of the 308 corrective actions to address all of the lessons learned. However, the agency has not evaluated the extent to which completed corrective actions—such as encouraging states to form depopulation teams—have helped resolve the problems identified, and it does not have plans for doing so. GAO has previously found that agencies may use evaluations to ascertain the success of corrective actions, and that a well-developed plan for conducting evaluations can help ensure that agencies obtain the information necessary to make effective program and policy decisions. Such a plan would help USDA ascertain the effectiveness of the actions it took to resolve problems identified during recent outbreaks. On the basis of GAO's analysis of federal efforts to respond to outbreaks and of stakeholders' views, GAO identified ongoing challenges and associated issues that federal agencies face in mitigating the potential harmful effects of avian influenza. For example: One challenge is that federal efforts to protect poultry from avian influenza rely on voluntary actions by a wide range of poultry producers to take routine preventative measures—known as biosecurity— to protect their flocks from disease. 
USDA has two major initiatives under way to encourage improvements to biosecurity. An associated issue that federal agencies face is that the chickens used to produce the eggs needed to manufacture critical human influenza vaccine are susceptible to influenza outbreaks. The Department of Health and Human Services is supporting the development of new vaccine manufacturing technologies to reduce reliance on eggs. GAO recommends that USDA develop a plan for evaluating the effectiveness of the corrective actions it has taken. USDA agreed with GAO's recommendation.
In 1947, the United Nations (U.N.) created the Trust Territory of the Pacific Islands. The United States entered into a trusteeship with the U.N. Security Council and became the administering authority of the current islands of the FSM and the RMI. The United States administered the islands under this trusteeship until 1986, when it entered into a Compact of Free Association with the FSM and the RMI, both of which are located in the Pacific Ocean. The original Compact represented both a continuation of U.S. rights and obligations first embodied in the U.N. trusteeship agreement and a new phase in the unique and special relationship that had existed between the United States and these island nations. It also provided a framework for the United States to work toward achieving its three main goals of (1) securing self-government for the FSM and the RMI, (2) assisting the FSM and the RMI in their efforts to advance economic development and self-sufficiency, and (3) ensuring certain national security rights for all of the parties. The Department of the Interior’s Office of Insular Affairs was responsible for disbursing and monitoring Compact funds. For the 15-year period from 1987 through 2001, it provided funding at levels that decreased every 5 years. For 2002 and 2003, while negotiations to renew the expiring Compact provisions were ongoing, funding levels increased to equal an average of the funding provided during the previous 15 years. For 1987 through 2003, total U.S. assistance to the FSM and the RMI to support economic development is estimated, based on Interior data, to be about $2.1 billion. In addition, the Compact identified several services that U.S. agencies would supply to the FSM and the RMI and further stated that these agencies could provide direct program assistance as authorized by the Congress. This assistance included grants, loans, and technical assistance that, for fiscal years 1987 through 2001, totaled about $700 million from 19 U.S. agencies. 
The Department of the Interior was responsible for supervising, coordinating, and monitoring program assistance, while the Department of State was responsible for directing and coordinating all U.S. government employees in foreign countries, except those under the command of U.S. area military commanders. In 2000, we reported that one tool that should be used for ensuring accountability over Compact assistance was the annual audits required by the Compact. The fiscal procedures agreements (FPAs) for implementing the Compact required that financial and compliance audits be conducted in accordance with the provisions of the Single Audit Act. This act is intended to, among other things, promote sound financial management, including effective internal controls, with respect to the use of federal awards. Entities that expend $300,000 or more in federal awards in a year are required to comply with the act’s requirements. Further, the act requires entities to (1) maintain internal control over federal programs, (2) comply with laws, regulations, and the provisions of contracts or grant agreements, (3) prepare appropriate financial statements, including a Schedule of Expenditures of Federal Awards, (4) ensure that the required audits are properly performed and submitted when due, and (5) follow up and take corrective actions on audit findings. Deloitte Touche Tohmatsu, an independent public accounting firm, conducted the 30 single audits that we reviewed for the FSM; the 4 FSM states of Chuuk, Kosrae, Pohnpei, and Yap; and the RMI. Our objective was to review possible FSM and RMI misuse of Compact funds. One source of this type of information is the annual single audits that the fiscal procedures agreement for the implementation of the Compact requires the FSM and the RMI to obtain. 
We obtained the single audit reports for the years 1996 through 2000, the most recent single audit reports available at the time of our review, for the national government of the FSM; the FSM state governments of Chuuk, Kosrae, Pohnpei and Yap; and the national government of the RMI. In total, this amounted to 30 single audit reports representing 5 years, a period that we considered sufficient for identifying misuse of funds and common or persistent compliance and financial management problems involving Compact funds. While these reports did not specifically identify any findings as instances of misuse of Compact funds, they did identify problems that could leave Compact funds susceptible to misuse, including poor control over cash and equipment. We reviewed each report to identify and categorize the audit findings relevant to the Compact, paying particular attention to those involving assets or other financial accounts (i.e., cash and equipment) that we considered particularly susceptible to misuse. (We did not independently assess the quality of these audits or the reliability of the audit finding information. However, based on the fact that the audited entities developed corrective action plans for about 93 percent of the findings contained in the audit reports, we concluded that the audit findings provide an accurate representation of the problems reported.) We also reviewed the reports to identify auditee responses to the audit findings and their corrective action plans. These plans indicate auditee agreement or disagreement with the audit findings and the actions they planned to take or had taken to fix the findings. In addition, we reviewed the audit findings to determine if they recurred in successive single audits over the 5-year period. We completed our review of each single audit report by identifying and categorizing the auditor’s opinions on the financial statements and the Schedules of Expenditures of Federal Awards. 
In responding to our previous review of the Compact program, Interior officials expressed concerns about the U.S. government’s limited ability to enforce accountability over Compact funds due to certain provisions of the original Compact and the related FPA. In light of these concerns, we reviewed the amended Compacts and related FPAs to determine if they included measures that could increase accountability over Compact funds. In addition, we supplemented our review of these documents with a discussion about the amended Compacts with Interior officials to determine if the new provisions addressed their prior concerns about limited actions available to them for holding the FSM and the RMI accountable. Interior’s Compact-related expenditures represented about 80 percent of the total expenditures of U.S. assistance made by the FSM, the 4 FSM states, and the RMI during the 5-year period. Because of the relatively small amount of funding from other federal agencies at these recipients, we did not discuss finding resolution with representatives of those agencies. We conducted our audit from August 2002 through May 2003 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the governments of the FSM and the RMI and the Secretary of the Interior. Their comments are discussed in the section entitled Government and Agency Comments and Our Evaluation and are reprinted in appendixes I, II, and III. Further, we considered all comments and made changes to the report, as appropriate. Single audits of the FSM, the four FSM states, and the RMI identified pervasive audit findings involving noncompliance with Compact requirements and financial statement problems in areas that we consider highly susceptible to misuse. 
In addition, the independent auditor performing the single audits issued qualified opinions or disclaimers of opinion on the financial statements in all 30 single audit reports reviewed and for 60 percent of the Schedules of Expenditures of Federal Awards. Taken together, these findings and opinions demonstrate that the FSM, the four FSM states, and the RMI did not provide reasonable accountability over Compact funds and assurance that these funds were used for their intended purposes. The 30 single audit reports that we examined contained about 90 audit findings for each year of the 5-year period covered by our review. In total, they contained 458 audit findings relevant to Compact funds and significant numbers of findings for each of the auditees for which we reviewed single audit reports. Further, successive single audits during the 5-year period contained recurring audit findings despite corrective action time frames established by the auditees and our conclusion that few of the findings involved significant issues, such as implementing an accounting system, that could be expected to require more than 2 years to correct. Figure 1 shows the number of audit findings reported annually from 1996 through 2000. It demonstrates that the auditors performing the 30 single audits in our review identified a significant number of audit findings both in total and in each year of the 5-year period of our review. In addition, the 30 audit reports identified a significant number of audit findings for each of the auditees. Figure 2 shows the percentages of the 458 audit findings related to Compact funds for each auditee. Office of Management and Budget (OMB) Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, establishes policies for federal agency use in implementing the Single Audit Act, as amended, and provides an administrative foundation for consistent and uniform audit requirements for nonfederal entities that administer federal awards. 
In part, the circular requires the auditee to follow up and take corrective actions on audit findings identified by the single audits. It clarifies this requirement by stating that, at the completion of the single audit, the auditee shall prepare a corrective action plan (CAP) to address each audit finding included in the current year auditor’s report. If the auditee does not agree with the audit findings or believes corrective action is not required, the CAP is to include an explanation of and justification for this position. Based on our review of the audit reports, the FSM, the four FSM states, and the RMI generally fulfilled their responsibility to either prepare a CAP or indicate their disagreement with the audit finding and provide reasons for their disagreement. As figure 3 shows, they prepared CAPs for 93 percent of the audit findings identified by the single audits in our review and indicated their disagreement and reasons for this disagreement for 5 percent of the findings. Our review of these CAPs showed that about 33 percent (138) included anticipated completion dates, and, of these plans, only 4 percent (16) indicated that the planned corrective actions would require more than 2 years to complete. Based on a review of the 287 CAPs that did not include anticipated completion dates, we concluded that, with a few exceptions, the problems addressed by these plans could be corrected within a year. For example, Financial Status Reports submitted to the grantor agencies for fiscal year 2000 were not available during the single audit of the RMI. The auditors recommended that an adequate filing system, including the maintenance of Financial Status Reports, be maintained for all federal awards. The CAP called for the Ministry of Finance to ensure that an adequate filing system was in place and to review status reports periodically. 
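The counts in this discussion fit together arithmetically; as a quick illustrative check (using the figures exactly as reported above, with the prepared CAPs, that is, the 138 plans with completion dates plus the 287 without, as the denominator for the percentages):

```python
# Counts taken from the single audit analysis above.
total_findings = 458      # Compact-related audit findings, 1996-2000
caps_with_dates = 138     # CAPs that included anticipated completion dates
caps_without_dates = 287  # CAPs that did not
long_term_caps = 16       # CAPs projecting more than 2 years to complete

caps_prepared = caps_with_dates + caps_without_dates
print(caps_prepared)                                # 425 CAPs prepared
print(round(caps_prepared / total_findings * 100))  # 93 (percent of findings with a CAP)
print(round(long_term_caps / caps_prepared * 100))  # 4 (percent projecting more than 2 years)
```

Note that 138 of the 425 prepared CAPs is roughly a third, consistent with the "about 33 percent" figure reported above.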
Further analysis of the findings revealed that successive single audits identified recurring audit findings over the 5-year period despite the time frames identified in the auditee-prepared CAPs or our estimate of the amount of time corrective action should take. As figure 4 shows, many audit findings that were identified in more than one single audit report recurred in 3 or more years over the 5-year period. The percentage of each auditee’s single audit findings that recurred 3 or more years over the 5-year period of our review ranged from RMI’s high of 69 percent to a low of 17 percent for the FSM. The auditors categorized the audit findings related to the Compact into three areas—federal award findings, local findings, and financial statement findings. Upon further review, we determined that 117 audit findings that the auditors categorized as federal award findings or local findings discussed problems related to compliance with Compact requirements, and the remaining 341 discussed financial statement problems. The auditors who performed these single audits qualified or disclaimed their opinion on all of the financial statements and about 60 percent of the Schedules of Expenditures of Federal Awards generally because the auditees did not provide them with all needed financial statements or documentation to support transactions recorded in their books. Taken together, the compliance and financial statement findings and audit opinions demonstrate poor accountability over Compact funds and an inability on the part of the entities involved to provide assurances that all program funds are used as intended. They highlight the need for a stronger control environment and greater efforts to implement control activities that strengthen accountability and help ensure that Compact funds are used for program purposes. 
Compliance requirements for federal assistance set forth what is to be done, who is to do it, the purpose to be achieved, the population to be served, and how much can be spent in certain areas. OMB’s Single Audit Act guidance includes 15 compliance categories used by auditors to report on compliance-related findings. Our analysis of the compliance categories the auditors cited for the Compact-related audit findings showed that over half of the audit findings related to two categories—allowable costs/cost principles and equipment and real property management. The first category, allowable costs/cost principles, specifies the allowability of costs under federal awards. For example, expenditures for 17 types of projects or activities were allowable under the original Compact capital account, including construction or major repair of capital infrastructure, public and private sector projects, training activities, and debt service. The second category, equipment and real property management, specifies how federal award recipients should use, manage, and dispose of equipment and real property. The following examples illustrate the types of audit findings that the auditors categorized into the 15 areas. Kosrae advanced $93,000 in Compact Health and Medical Program funds to off-island health providers for medical referrals. The advances were immediately expensed without reference to the specific medical expenses actually incurred. This is an example of a compliance finding related to allowable costs/cost principles. Kosrae incurred over $274,000 in expenditures of Compact Capital funds that lacked proper supporting vendors’ invoices. This is an example of a compliance finding related to allowable costs/cost principles. Chuuk transferred about $169,000 in Compact Capital funds to entities (subrecipients) that have not been audited or reviewed for compliance with Compact requirements. This is an example of a compliance finding related to subrecipient monitoring. 
As mentioned earlier, the auditors performing the single audits also categorized findings as financial statement findings. The audit findings for this category related to the reliability of financial reporting and involved recording, processing, summarizing, and reporting financial data. Unlike the findings that related to compliance with Compact requirements, the auditors did not tie the financial statement findings to the categories contained in the Single Audit Act guidance. Our review of these findings identified 101 financial statement findings involving problems with assets or accounts that we consider susceptible to misuse. The following examples illustrate financial statement findings related to assets or accounts that we consider susceptible to misuse.

- Yap’s three major bank accounts (general checking, savings, and payroll) were not reconciled to bank records at the end of fiscal year 1999. Differences between the amounts shown for these cash accounts in Yap’s books and the bank records amounted to over $150,000. The auditors identified this lack of bank reconciliations as an internal control weakness in Yap’s single audit reports for the years 1995 through 1999. A record being out of balance is a risk factor auditors use to identify the possibility of fraud. This is an example of a cash problem.
- The RMI had not conducted a physical inventory or updated property records for equipment and real property. As of September 30, 2000, the RMI reported that its equipment was worth about $11 million, but the auditor could not substantiate this amount due to inadequate records. The auditor identified a lack of updated property records for the General Fixed Asset Group in single audit reports for the years 1988 through 2000. Missing documents, such as the property records for equipment in this example, are a risk factor used by auditors to identify the possibility of fraud. This is an example of an equipment problem.
The 30 single audit reports included auditor opinions or disclaimers of opinion on the financial statements and Schedules of Expenditures of Federal Awards for the FSM, the four FSM states, and the RMI. The financial statements reflect a federal award recipient’s financial position, results of operations or changes in net assets, and, where appropriate, cash flows for the year. The Schedules of Expenditures of Federal Awards show the amount of expenditures for each federal award program during the year. If the auditors are not able to perform all of the procedures necessary to complete an audit, they consider the audit scope to be limited or restricted. Scope limitations may result from the timing of the audit work, the inability to obtain sufficient evidence, or inadequate accounting records. If the audit scope is limited, the auditors must make a professional judgment about whether to qualify or disclaim an opinion. A qualified opinion states that, except for the matter to which the qualification relates, the financial statements are fairly presented in accordance with generally accepted accounting principles. In a disclaimer of opinion, the scope limitation is serious enough that the auditor does not express an opinion. The auditor’s opinions on the financial statements and Schedules of Expenditures of Federal Awards for the 30 single audits in our review reveal overall poor financial management. The auditors performing these single audits qualified or disclaimed their opinions on all of the financial statements and about 60 percent of the Schedules of Expenditures of Federal Awards generally because they were unable to obtain sufficient evidence or adequate accounting records. For example, the auditor qualified its opinion on the FSM’s financial statements for the year 2000 because of the auditor’s inability to ensure the propriety of receivables from other governments and missing financial statements for a component unit. 
In another example, the auditor did not express an opinion on Chuuk’s financial statements for the year 1999 because of inadequacies in the accounting records and internal controls, incomplete financial statements for component units, and its inability to obtain audited financial statements supporting investments. The significant number of audit findings involving FSM and RMI noncompliance with Compact requirements and weaknesses in their financial management systems, along with auditor qualified opinions or disclaimers of opinion on financial statements, echo the control and accountability issues that we identified in our earlier reports on Compact assistance. Further, the pervasive and recurring nature of the compliance and financial statement problems highlights (1) the need for stronger control environments that will help ensure that Compact funds are used for program purposes and (2) the limited progress made during the 5-year period of our review in establishing accountability in the FSM, the four FSM states, and the RMI that would provide reasonable assurance that Compact funds are used for their intended purposes. In responding to our previous reviews of the original Compact program, Interior officials expressed concerns about the U.S. government’s limited ability to enforce accountability over Compact funds due to certain provisions of the original Compact and the related FPA. According to these officials, administrators have been reluctant to commit oversight resources to the Compact when no enforcement mechanisms exist due to these provisions. The United States and the FSM signed an amended Compact in May 2003. The United States and the RMI signed an amended Compact in April 2003. These amended Compacts are awaiting legislative approval in the United States, the FSM, and the RMI. They contain strengthened reporting and monitoring measures over the original Compact that could improve accountability over Compact assistance, if diligently implemented. 
According to Interior officials, the FPA in effect during the period of our review created a financial management regimen unique in federal practice. They explained that it was negotiated to give the FSM and the RMI governments clear control over Compact funding and to limit the U.S. government’s authority to intervene in spending decisions and, most important, to withhold payments if the terms and conditions of funding were violated. More specifically, these officials explained that the expiring FPAs lacked basic elements of federal grant management practice similar to those in OMB Circular A-102, Grants and Cooperative Agreements with State and Local Governments, which requires standard procurement practices and cost principles. They elaborated that, when coupled with the full faith and credit provisions of the Compact, this lack of standards limited the U.S. government’s response to mismanagement. In summing up, they stated that while additional personnel and funding could have been committed to Compact oversight, the United States would still have had almost no ability to influence fiscal decisions made by the FSM or the RMI. The amended Compacts could potentially cost the U.S. government about $6.6 billion in new assistance. Of this amount, $3.5 billion would cover payments over a 20-year period (2004-23), while $3.1 billion represents payments for U.S. military access to the Kwajalein Atoll in the RMI for the years 2024 through 2086. The amended Compacts contain strengthened reporting and monitoring measures that could improve accountability over Compact assistance, if diligently implemented. In addition, the Department of the Interior has taken actions to increase resources dedicated to monitoring and oversight of Compact funds. The following are amended Compact and related FPA measures that represent changes from the prior Compact and FPAs. 
In 2000, we reported that Compact funds were placed in a general government fund and commingled with other revenues and, therefore, could not be further tracked. In addition, some Compact assistance was only traced at a high level with few details readily available regarding final use. The amended Compacts and FPAs include requirements that should address these accountability concerns. Specifically, they require fiscal control and accounting procedures sufficient to permit (1) preparation of required reports and (2) tracing of funds to a level of expenditures adequate to establish that such funds have been used in compliance with applicable requirements. Further, the amended Compacts specify standards for the financial management systems used by the FSM and the RMI. For example, these systems should maintain effective controls to safeguard assets and ensure that they are used solely for authorized purposes. The new FPAs would establish a joint economic management committee for the FSM and the RMI that would meet at least once a year. The committee would be composed of three U.S. appointed members, including the chairman, and two members appointed, as appropriate, by either the FSM or the RMI. The committee’s duties would include (1) reviewing planning documents and evaluating island government progress to foster economic advancement and budgetary self-reliance, (2) consulting with program and service providers and other bilateral and multilateral partners to coordinate or monitor the use of development assistance, (3) reviewing audits, (4) reviewing performance outcomes in relation to the previous year’s grant funding level, terms, and conditions, and (5) reviewing and approving grant allocations (which would be binding) and performance objectives for the upcoming year. Grant conditions normally applicable to U.S. state and local governments would apply to each grant. 
General terms and conditions for the grants would include conformance to plans, strategies, budgets, project specifications, architectural and engineering specifications, and performance standards. Specific postaward requirements address financial administration by establishing, for example, (1) improved financial reporting, accounting records, internal controls, and budget controls, (2) appropriate use of real property and equipment, and (3) competitive and well-documented procurement. The United States could withhold payments if either the FSM or the RMI fails to comply with grant terms and conditions. The amount withheld would be proportional to the breach of the term or condition. In addition, funds could be withheld if the FSM or RMI governments do not cooperate in U.S. investigations of whether Compact funds have been used for purposes other than those set forth in the amended Compacts. The new FPAs include numerous reporting requirements for the two countries. For example, each country must prepare strategic planning documents that are updated regularly, annual budgets that propose sector expenditures and performance measures, annual reports to the U.S. President regarding the use of assistance, quarterly and annual financial reports, and quarterly grant performance reports. The successful implementation of the new accountability provisions will require a sustained commitment by the three governments to fulfilling their new roles and responsibilities. Appropriate resources from the United States, the FSM, and the RMI represent one form of this commitment. While the amended Compacts do not address staffing issues, officials from Interior’s Office of Insular Affairs have informed us that they intend to post six staff in a new Honolulu office: a health grant specialist, an education grant specialist, an accountant, an economist, an auditor, and an office assistant. 
Interior can also contract with the Army Corps of Engineers for engineering assistance, when necessary. These Honolulu-based staff may spend about half of their time in the FSM and the RMI. Further, an Interior official noted that his office has brought one new staff member on board in Washington, D.C., and intends to post one person to work in the RMI (one staff member already works in the FSM). We have not conducted an assessment of Interior’s staffing plan and rationale and cannot comment on the adequacy of the plan or whether it represents sufficient resources in the right locations. The 30 single audit reports demonstrate poor accountability, and in some cases a lack of accountability, over U.S. Compact assistance that has totaled an estimated $2.1 billion since 1987. The large number and recurring nature of the findings involving noncompliance with Compact requirements or financial management weaknesses, along with the preponderance of auditors’ qualified opinions or disclaimers of opinion on FSM and RMI financial statements, clearly indicate the need for improved FSM and RMI management of U.S. assistance and greater U.S. oversight and monitoring of the use of this assistance. Changes are especially needed given that the amended Compacts with these nations could cost the U.S. government about $3.5 billion in new assistance over the next 20 years. Under the original Compact, the Department of the Interior was responsible for supervising, coordinating, and monitoring the program assistance provided. Interior officials expressed frustration with the lack of tools available to them to administer or track this assistance in a manner that could reasonably ensure that such assistance was having its intended effect. The amended Compacts strengthen reporting and monitoring measures that could improve accountability over assistance, if diligently implemented.
These measures include strengthened fiscal control and accounting procedures requirements, expanded annual reporting and consultation requirements, and the ability to withhold funds for noncompliance with grant terms and conditions. The successful implementation of the new accountability provisions will require appropriate resources and sustained commitment from the United States, the FSM, and the RMI. The joint economic committees called for in the Compact with each nation and Interior’s planned increase in staff associated with Compact oversight and monitoring functions should play key roles in improving accountability over Compact funds. To help promote compliance with Compact requirements and sound financial management, the Secretary of the Interior should delegate responsibility to the Office of Insular Affairs and hold appropriate officials in that office accountable for

- ensuring the adequacy of staff dedicated to Compact oversight and monitoring FSM and RMI progress in addressing Compact-related single audit report findings,
- reporting on the FSM and RMI actions to correct Compact-related compliance and financial management findings identified in single audit reports to the Secretary of the Interior or other appropriate high-level Interior official,
- initiating appropriate actions when the FSM or the RMI do not undertake adequate actions to address Compact-related single audit findings in a timely manner, and
- investigating single audit findings that indicate possible violations of grant conditions or misuse of funds and taking appropriate actions when such problems are verified.

In commenting on this report, the Office of Insular Affairs of the Department of the Interior, the FSM, and the RMI agreed with our findings or conclusions and recommendations. They also cited the amended Compacts as mechanisms that should result in improved financial management over Compact assistance.
The FSM and RMI also provided technical comments and information on current actions to address financial management issues. We considered all comments and made changes to the report, as appropriate. The FSM comments noted that it found the report constructive and useful as it continues to prepare for the implementation of the amended Compact and its related agreements. The comments (reprinted in app. I) recognized that, although FSM has worked hard to develop a consistent approach to satisfy the Compact and FPA requirements, significant work remains to be done to improve and strengthen accountability in all aspects throughout the nation. Further, FSM agreed that it must continue to improve internal financial control through upgrading the current financial management system, providing for capacity building, and retaining its most productive and experienced employees. Finally, it noted that the amended Compact and related fiscal procedures agreement include requirements that will address all of the accountability concerns expressed in the report. RMI’s comments (reprinted in app. II) stated that it concurred with the report’s findings and noted that the report will be useful since it gives a summary of the financial and management situation of the RMI between 1996 and 2000. RMI noted that its problems stem partly from the fact that it has not had a global system for following up on audits that would apply throughout all ministries of the government as well as other entities that receive Compact grant assistance. RMI stated that it has made progress recently by upgrading its information system and strengthening its internal control procedures and noted that it will add personnel to the budget, procurement, and supply areas. In its comments (reprinted in app. III), the Office of Insular Affairs of the Department of the Interior agreed with the conclusions and recommendations in the report. 
The Office also noted that it looks forward to discharging its responsibilities under the amended Compacts and that it is confident that it will now have the tools needed to properly protect the American taxpayer’s investment in the freely associated states. As agreed with your offices, unless you publicly announce its contents earlier, we will not distribute this report until 30 days after its date. At that time, we will send copies to the Secretary of the Interior, the President of the Federated States of Micronesia, the President of the Republic of the Marshall Islands, and appropriate congressional committees. Copies will also be made available to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. For future contacts regarding this report, please call McCoy Williams at (202) 512-6906 or Susan S. Westin at (202) 512-4128. Staff contacts and other key contributors to this report are listed in appendix IV. In addition to the contacts named above, Perry Datwyler and Leslie Holen made key contributions to this report.
In 1986, the United States entered into a Compact of Free Association (Compact) that provided about $2.1 billion in U.S. assistance from 1987 through 2003 to the Pacific Island nations of the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI). GAO has issued a number of reports raising concerns about the effectiveness of this assistance. GAO was asked to review possible FSM and RMI misuse of Compact funds. We reviewed single audits for 1996 through 2000, and this report summarizes the audit results. GAO’s review of 30 single audit reports for the FSM, the 4 FSM states, and the RMI for the years 1996 through 2000 identified pervasive and persistent noncompliance with Compact requirements and financial statement-related audit findings. These single audit reports identified 458 audit findings relevant to the Compact. Significant numbers of these audit findings occurred during each year of the 5-year period and at each of the auditees. In addition, successive single audits identified recurring audit findings over the 5-year period despite corrective action plans prepared by the auditees. While none of the audit findings specifically discussed misuse of Compact funds, they did describe noncompliance with Compact requirements and financial management problems in areas that GAO considers highly susceptible to misuse, such as poor control over cash and equipment. When considered in conjunction with the qualified opinions or disclaimers of opinion on the financial statements in all 30 reports and on 60 percent of the Schedules of Expenditures of Federal Awards required by the Single Audit Act, the audit findings reveal overall poor accountability over Compact funds. In responding to GAO’s previous reviews of the original Compact, Interior officials expressed concerns about the U.S. 
government's limited ability to enforce accountability over Compact funds due to certain provisions of the Compact and the related fiscal procedures agreement (FPA). Recently, an Interior official noted that departmental officials have been frustrated with the lack of tools to administer or track federal assistance in a manner that could reasonably ensure that such assistance is having its intended effect. GAO found that the amended Compacts and related FPAs, which are scheduled to become effective upon legislative approval in the three countries, include many strengthened reporting and monitoring measures that could improve accountability, if diligently implemented. For example, funds could be withheld for noncompliance with Compact terms and conditions. In addition, joint economic committees and an Interior oversight team will focus on monitoring and overseeing Compact funds.
Asylum, a form of humanitarian protection, is an immigration benefit that enables certain noncitizens to remain in the United States and apply for lawful permanent residence. Asylum provides refuge for certain individuals who have been persecuted in the past or fear persecution on the basis of race, religion, nationality, membership in a particular social group, or political opinion. Congress and the executive branch have acted to strengthen the U.S. asylum system against the possibility of asylum fraud and limit its vulnerability to terrorists using it as a vehicle for remaining in the United States. For example, in the mid-1990s, the Asylum Division implemented major reforms which, among other things, decoupled employment authorization from asylum requests to discourage applicants with fraudulent asylum claims from applying for asylum solely to obtain a work authorization, and established a goal of completing asylum adjudications within 180 days. To account for such circumstances, the Asylum Division established a national goal to complete 75 percent of the cases that are interviewed at local Asylum Offices and referred to the immigration courts within 60 days of the application date; EOIR established a goal that 90 percent of all asylum cases be completed within 180 days from the application date. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996 generally codified some of these reforms into law and required administrative adjudication of asylum applications, not including administrative appeals, within 180 days, absent exceptional circumstances. It also incorporated certain security provisions, including a requirement that the identity of all asylum applicants be checked against certain records or databases maintained by the federal government to determine if an applicant is ineligible to apply for or be granted asylum. 
More recently, the REAL ID Act of 2005 codified existing DOJ precedent that (1) the burden is on the applicant to establish past persecution or a well-founded fear of persecution and (2) asylum adjudicators have the discretion to require documentary support for asylum claims. Specifically, if an adjudicator determines that the applicant should provide evidence to corroborate otherwise credible testimony, such evidence is to be provided unless the applicant does not have and cannot reasonably obtain the evidence. The act also laid out the criteria to be considered in making a credibility determination, stating that adjudicators must consider the totality of the circumstances and all relevant factors. An adjudicator may base a credibility determination on inconsistencies, inaccuracies, or falsehoods without regard to whether an inconsistency, inaccuracy, or falsehood goes to the heart of the applicant’s claim, as long as it is relevant to the evaluation in light of the totality of the circumstances. It also clarified the wording of the terrorist-related grounds of ineligibility for a grant. Responsibility for the U.S. Asylum System is shared between USCIS in DHS and EOIR in DOJ, with asylum officers and immigration judges adjudicating asylum cases as well as other types of cases. In addition to asylum cases, the Asylum Division’s and EOIR’s caseloads also include certain applications for relief under section 203 of the Nicaraguan Adjustment and Central American Relief Act (NACARA) and credible and reasonable fear cases. NACARA cases involve certain individuals from Guatemala and El Salvador and former Soviet Bloc countries who can have their removal cancelled. Credible fear cases involve individuals subject to expedited removal who express an intention to apply for asylum or state that they have a fear of persecution or torture. 
Reasonable fear cases involve individuals subject to administrative removal or reinstated orders of removal who have expressed a fear of persecution or torture if removed. In addition to these cases, immigration judges also hear other types of immigration cases. The Asylum Division and its eight Asylum Offices—Arlington, Chicago, Houston, Los Angeles, Miami, New York, Newark, and San Francisco—reside within USCIS. Asylum officers are assigned to these eight offices and periodically travel to other locations to conduct interviews when applicants live outside the general geographic area of these offices. In fiscal year 2008, the Asylum Division received about $61 million, all from USCIS fee-based funding, although no fee is charged to apply for asylum. Within EOIR, immigration judges are positioned organizationally under the Office of the Chief Immigration Judge, which is responsible for 54 administrative immigration courts. The Board of Immigration Appeals (BIA) also resides within EOIR and is responsible for hearing appeals of immigration judges’ asylum decisions, among other kinds of appeals. In fiscal year 2008, EOIR received about $238 million to fund all of its activities. DHS’s asylum adjudication process involves affirmative asylum claims—that is, claims that are made at the initiative of the alien who is in the country either legally or illegally and filed directly with USCIS. The affirmative asylum process is nonadversarial in that no government official argues in opposition to the asylum applicant, and the asylum officer is to be a neutral decision maker. Figure 1 provides an overview of the steps typically involved in DHS’s asylum process. For more detailed information on the asylum process, see appendix V. The Asylum Division’s Affirmative Asylum Procedures Manual and basic training materials for asylum officers identify the various tasks asylum officers are to perform in adjudicating an asylum case. 
Asylum officers are required to conduct an asylum interview, prior to which they must, among other things, review the applicant’s file and check databases to (1) determine who is included on the application, (2) determine when the applicant claims to have entered the United States and when he or she filed the asylum application, (3) become familiar with the applicant’s background and claim, and (4) identify issues to cover during the interview. In addition, if the asylum officer is unfamiliar with country conditions relevant to the applicant’s claim, the officer should research conditions in that country. During the interview, in addition to hearing the applicant’s testimony, the asylum officer must also explain the process, verify basic and biographical information provided on his or her application, and place the applicant, the applicant’s interpreter, and the interpreter monitor under oath. After the interview, the asylum officer must update the Refugees, Asylum, and Parole System (RAPS); write a decision that includes a legal analysis and in most cases citations to country conditions; and prepare a decision letter. In making a decision, an asylum officer must make a determination of the credibility of the applicant and consider if any false submission of information is relevant to the claim. On average, asylum officers have about 4 hours to complete these tasks for each case. The 4-hour average is based on the productivity standard that requires management to assign asylum officers work equivalent to 18 asylum cases in a 2-week period and allows for 4 hours of training each week. In addition, the Asylum Division generally requires that asylum officers submit their written decisions to their supervisor within 4 days of conducting an applicant interview. Affirmative asylum applicants are almost never detained while their asylum application is pending. Applicants are free to live in the United States pending the completion of their asylum processing. 
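The roughly 4-hour-per-case average implied by the productivity standard can be checked with simple arithmetic. The sketch below assumes a standard 40-hour work week (80 working hours per 2-week assignment period), which the standard described above does not state explicitly:

```python
# Back-of-the-envelope check of the ~4-hour-per-case average.
# Assumption (not stated in the productivity standard): a 40-hour
# work week, i.e., 80 working hours per 2-week assignment period.
working_hours = 2 * 40   # assumed 2-week work period
training_hours = 2 * 4   # 4 hours of training allowed each week
cases_assigned = 18      # productivity standard: 18 cases per 2 weeks

hours_per_case = (working_hours - training_hours) / cases_assigned
print(hours_per_case)  # 4.0
```

Under that assumption, 72 working hours spread over 18 cases yields exactly 4 hours per case, consistent with the average cited above.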
DHS’s affirmative asylum process can result in asylum officers making one of the following decisions regarding the applicant and qualifying dependents:

- Grant of asylum. The asylum officer grants asylum when he or she determines that the applicant is eligible for asylum. The asylees can remain in the United States indefinitely unless their asylum is terminated. Asylees are eligible for certain benefits, such as an Employment Authorization Document, an unrestricted Social Security card, and medical and employment assistance. Within 2 years of being granted asylum, asylees can petition for a spouse or child who was not included in the original grant of asylum to also obtain asylum. In addition, they may also apply for lawful permanent residency 1 year after being granted asylum and, ultimately, United States citizenship.
- Recommended approval of asylum. The asylum officer issues a recommended approval of asylum when he or she determines that the applicant is eligible for asylum, but USCIS has not received the results of a mandatory FBI name check. The decision to change a recommended approval to an asylum grant is contingent on a favorable result from background, identity, and security checks (referred to throughout this report as identity and security checks). An applicant who receives a recommended approval may apply for an Employment Authorization Document, but not for other benefits.
- Referral to immigration court. The asylum officer makes a referral to the immigration court when the applicant is in the United States illegally and the officer determines that the applicant is ineligible for asylum. The asylum officer prepares a Notice to Appear before an immigration judge. A referral is not a denial of asylum; rather, the applicant and any of the applicant’s dependents also in the United States illegally are placed in removal proceedings, where an immigration judge reviews the asylum case de novo.
- Denial of asylum. 
The asylum officer denies asylum when the applicant is in the United States legally and the officer determines that the applicant is ineligible for asylum. The asylum officer prepares a Notice of Intent to Deny, and the applicant is given 16 days to rebut the finding. If the applicant submits a rebuttal, the asylum officer reviews it and then approves or denies the claim. If the applicant does not rebut the finding or the rebuttal fails to overcome the grounds for denial, the applicant is denied asylum but may stay in the United States as long as the applicant remains in legal status.

Not all cases result in an asylum decision. For example, USCIS administratively closes a case if the applicant withdraws his or her asylum application. From fiscal years 2002 through 2007, the asylum grant rate for affirmative asylum applications ranged from 30 percent to 36 percent for asylum cases that resulted in a decision. During the same period, the Asylum Division received about 400,000 new or re-opened asylum, NACARA, and credible and reasonable fear cases and completed approximately 750,000 cases. Authorized staffing levels for asylum officers over this period ranged from a high of 332 officers in 2004 to a low of 291 officers in 2007. See appendix VI for more information on the Asylum Division’s caseload and staffing levels.

In contrast to DHS’s process, DOJ’s asylum adjudication process is adversarial in that individuals appear in removal proceedings before EOIR immigration judges to defend themselves against removal from the United States. Immigration judges hear both affirmative asylum claims that have been referred to them by an asylum officer and defensive asylum claims. A defensive claim is made by an alien who first requests asylum while in removal proceedings.
An alien making a defensive claim may have been placed in removal proceedings after having been stopped at the border without proper documentation, identified as present in the United States illegally, or identified as deportable on one or more grounds, such as certain kinds of criminal convictions. Applicants who filed for asylum affirmatively with USCIS, but were referred to an immigration court and placed in removal proceedings, continue to be considered “affirmative” asylum applicants. Affirmative and defensive claims follow the same procedures in removal proceedings. During immigration court proceedings, immigration judges hear witness testimony and cross-examinations and review evidence. ICE Assistant Chief Counsels, also known as ICE trial attorneys, represent DHS in these proceedings. ICE trial attorneys are also responsible for ensuring identity and security checks are completed. An applicant in immigration proceedings may be represented by an attorney of his or her choosing at no cost to the government. Figure 2 provides an overview of the steps typically involved in EOIR’s asylum process. For more detailed information on the asylum process, see appendix V.

EOIR’s asylum adjudication process can result in immigration judges making one of the following decisions regarding the applicant and qualifying dependents:

Grant of asylum. The immigration judge grants asylum when he or she determines that the applicant is eligible for asylum. The asylees can remain in the United States indefinitely, unless DOJ terminates asylum. A grant of asylum from an immigration judge confers the same benefits on an asylee as a grant of asylum from an asylum officer, which are discussed earlier in this report.

Denial of asylum.
The immigration judge denies asylum when he or she determines that the applicant is ineligible for asylum and may order the applicant to be removed from the United States unless the immigration judge grants the applicant another form of relief from removal. EOIR may also close a case without making a decision for such reasons as a request to move a case from one court to another or the applicant withdrawing or abandoning his or her application for asylum. From fiscal years 2002 through 2005, the asylum grant rate in the immigration courts remained fairly consistent at around 37 percent, then increased to 45 percent in 2006 and 46 percent in 2007. From fiscal years 2002 through 2007, the immigration courts received about 1.9 million newly filed or reopened immigration cases and completed about the same number of cases. During this same time, the number of authorized immigration judges increased from 216 in fiscal year 2002 to 251 in fiscal year 2007. See appendix VI for more detailed information on EOIR’s caseload and staffing.

To help ensure quality in adjudications, the Asylum Division has designed training programs and quality reviews, and EOIR has designed training programs, but some of these efforts can be improved. The Asylum Division has designed a framework for training asylum officers and their supervisors. However, despite general satisfaction with the initial training that officers receive, many asylum officers and supervisors agreed that asylum officers needed additional training in a number of areas—such as identifying fraud, conducting identity and security checks, and assessing credibility—to improve their ability to carry out their responsibilities. Also, 88 percent of asylum officers expressed the view that observing skilled interviewers would help improve their interviewing skills, yet 53 percent of asylum officers reported they had not had the opportunity to do so.
Furthermore, the Asylum Division does not have a framework in place to solicit asylum officers’ or supervisors’ views on training needs in a structured and consistent manner. The Asylum Division has designed a framework for quality reviews, including those conducted by supervisors and other local and headquarters personnel. Although supervisors review all asylum officer decisions and headquarters personnel review certain cases, other quality reviews had not occurred in three of the eight Asylum Offices. With respect to EOIR, although the majority of immigration judges reported the training they received enhanced their ability to adjudicate asylum cases, the majority reported needing additional training in several areas, including identifying fraud. EOIR expanded its training program primarily for new immigration judges in 2006 and annually solicits input from immigration judges on their training needs. According to EOIR, BIA reviews of appealed cases provide another means of quality assurance.

The Asylum Division provides training to asylum officers and supervisory asylum officers (i.e., centralized training) and directs local Asylum Offices to provide weekly training (i.e., decentralized training), but most asylum officers and supervisors who responded to our survey reported that better or more training was needed, particularly in the area of fraud, to improve asylum officers’ ability to adjudicate asylum cases. The mix of centralized and decentralized training that officers receive reflects elements of a strategic approach to training that we have described in previous work. Centralized training consists of a 5-week Asylum Officer Basic Training Course (AOBTC) that addresses most facets of the asylum adjudication process and is usually offered about twice each year at the Federal Law Enforcement Training Center (FLETC) to recently hired asylum officers.
To adjudicate asylum claims, asylum officers must either complete this training program or be certified at their respective Asylum Office. In addition, a 5-week basic training course on immigration law for USCIS adjudications officers and asylum officers is generally provided at FLETC or another USCIS facility within an asylum officer’s first year. The Asylum Division also periodically offers a 2-week supervisory training program in Washington, D.C., for supervisory asylum officers and Asylum Office Directors and Deputy Directors that concentrates primarily on advanced asylum law issues and the review of asylum officer decisions. Additionally, USCIS said that the Asylum Division created a new management position—a Chief of Training—to take actions to improve the training of asylum officers. The first Chief of Training came on board on August 31, 2008. Most asylum officers and supervisory asylum officers who responded to our survey reported that the centralized training they received generally prepared them for their roles. Specifically, 75 percent of asylum officers reported that AOBTC prepared them moderately or very well to adjudicate asylum cases, a positive view of AOBTC that held regardless of their length of experience as asylum officers. However, more than 75 percent of asylum officers also believed that AOBTC needed improvement to better prepare them to identify possible fraud and conduct identity and security checks. Among the 26 supervisory asylum officers who had attended the supervisory training program prior to completing our survey, 14 said that, overall, the training prepared them moderately or very well to review asylum officer decisions. 
Nevertheless, 13 of those who had attended the training reported that the supervisory training program needed to be improved to help them better provide feedback on asylum officer written decisions, while 12 reported improvements were needed to help them better understand and contribute to the Asylum Division’s efforts to combat asylum fraud, and 12 to help them better analyze credibility.

In addition to the centralized training programs, the Asylum Division requires that asylum officers and supervisory asylum officers participate in ongoing decentralized training at local Asylum Offices, where local Quality Assurance and Training Coordinators (QA/T) are responsible for developing training to meet the needs of each office. Asylum Offices are to allocate 4 hours each week for formal or informal training to asylum officers and their supervisors. The training can range from classroom instruction by the local QA/T to individual study time that officers can use for such learning activities as staying current with case law, researching conditions around the world affecting asylees and refugees, and reading new procedures issued by headquarters. Although QA/Ts, in consultation with local office management, have significant discretion in deciding what training to provide, officials from the Training, Research, and Quality Branch said they review each office’s quarterly training reports to ascertain what training has been provided and is planned. In addition, the Asylum Division provides training materials to all the offices to ensure a national, consistent training approach when warranted. For example, after the REAL ID Act was passed, the Asylum Division distributed explanatory PowerPoint presentations and descriptions of the statutory changes. Notwithstanding the training asylum officers and supervisors receive, most identified fraud-related topics, as well as other areas, in which they said asylum officers needed additional training.
As figure 3 shows, at least 75 percent of asylum officers or 75 percent of supervisory asylum officers who responded to our survey identified 15 specific topics (from about 25 training topics about which we inquired) as areas in which asylum officers needed more training to improve their ability to adjudicate asylum claims. Furthermore, supervisory asylum officers consistently viewed asylum officers’ need for additional training as greater than asylum officers’ perceptions of their own needs. We do not know why, in some instances, there were sizable differences between asylum officers’ and supervisors’ views of asylum officers’ training needs. However, supervisors have a broader perspective since multiple asylum officers report to them. The training topics identified were in areas related to fraud, identity and security checks, interviewing and assessing credibility, relevant statutes, time management, and the Asylum Virtual Library—the Asylum Division’s online library.

Fraud-detection training. USCIS’s Strategic Plan calls for training adjudications staff, which includes asylum officers, to proactively identify fraud when considering applications for immigration benefits. However, 77 percent of asylum officers who responded to our survey reported that AOBTC’s training needed to be improved with respect to identifying possible fraud. Asylum Division officials told us that despite a number of improvements, they also saw fraud-detection training as one of the areas that required further refinement in the AOBTC curriculum but had not updated AOBTC’s written lesson plan on fraud since 2002 because (1) USCIS’s fraud-prevention program had been evolving since the creation of its Office of Fraud Detection and National Security (FDNS) in 2004 and placement of FDNS immigration officers (FDNS-IO) at Asylum Offices and (2) revising other lesson plans, such as those related to national security issues, had taken priority.
However, they explained that despite not having updated the written lesson plan, fraud-prevention training at AOBTC had undergone significant changes since 2002 and continues to undergo change. For example, since 2002, the Asylum Division added training sessions at AOBTC on the role of FDNS-IOs in Asylum Offices, a workshop on how to make fraud referrals, a “hands on” session on identifying features of fraudulent documents, and a fraud-prevention computer lab session emphasizing the electronic resources available to asylum officers. According to Asylum Division officials, since 2006, AOBTC has dedicated 6 or more hours to fraud-detection training. Although the Asylum Division has made changes to improve fraud-detection training at AOBTC, our 2007 survey found that 70 percent (19 of 27) of new asylum officer respondents (who would have attended AOBTC since 2006) reported that fraud detection-training at AOBTC needed to be improved to better prepare them to identify fraud. Those with longer tenure (who would have attended AOBTC prior to 2006) also held this view. Eighty-one percent (79 of 98) of respondents who attended AOBTC and had been asylum officers for at least 1 year but less than 10 years, and 94 percent (30 of 32) of those who attended AOBTC and had been asylum officers for 10 years or more, reported that AOBTC fraud-detection training needed improvement. Asylum officers’ views that more fraud-related training was needed to improve their ability to adjudicate asylum claims extended beyond the AOBTC curriculum to the ongoing training sessions that take place at their respective Asylum Offices. Overall, at least 75 percent of both asylum officer and supervisory asylum officer survey respondents reported that asylum officers needed additional training on identifying fraudulent documents, current trends in fraud, and identifying fraud in the claim. 
In addition, the majority of asylum officers and supervisors reported, overall, that asylum officers also needed additional training on identifying preparer fraud (i.e., fraud perpetrated by an individual who prepared the applicant’s application), attorney fraud (i.e., fraud perpetrated by an attorney representing the applicant), and interpreter fraud (i.e., fraud perpetrated by an individual who translates during the applicant’s interview). These results were generally consistent across Asylum Offices—that is, at least half of the supervisors in all eight Asylum Offices reported that asylum officers needed additional training in these fraud-related topics, and the majority of asylum officers reported this need in seven of the eight offices. According to Asylum Division officials, since 2006, the Asylum Division has required that each Asylum Office have at least two of its staff trained by ICE’s Forensic Document Laboratory (FDL). These staff attend a 3-day training that focuses on methods for detecting fraudulent features in documents as well as security features in genuine documents. Staff who have completed this training are responsible for training other asylum officers and serving as in-house resources on document issues. According to Asylum Division officials, as of June 2008, the Asylum Division had 34 staff who had received this FDL training.

Identity and security checks. Checking asylum applicants’ identity against appropriate records and databases is required by asylum law and Asylum Division procedures and is a tool to help determine whether certain applicants may be ineligible for asylum protection. The results of required checks may uncover, for example, that an applicant may be barred due to national security concerns.
Despite their importance to the integrity of the asylum process, 63 percent of asylum officers and 85 percent of supervisory asylum officers (34 of 40) who responded to our survey reported that asylum officers moderately or greatly needed additional training on conducting identity and security checks. AOBTC provides a general exposure to the topic of databases and identity and security checks, and, according to 76 percent of asylum officer respondents, this was an area that needed to be improved at AOBTC to better prepare them to make asylum decisions. The latest AOBTC Validation Study, which was completed in 2003, noted that completing identity and security checks was among the critical tasks asylum officers perform and recommended that asylum officers receive basic orientation to this task at AOBTC, followed by on-the-job training at their local Asylum Office. Officials from the Asylum Division’s Training, Research, and Quality Branch explained that new asylum officers should receive local training on databases because the databases asylum officers are required to check are not all accessible at AOBTC. Further, the expertise for teaching how to conduct and interpret the results of identity and security checks resides among staff at local Asylum Offices. Most asylum officers who responded to our survey reported that they understood the information in the databases they check, yet the majority of asylum officer and supervisor respondents said that asylum officers still needed more training on identity and security checks. Eighty-eight percent of asylum officer respondents reported that they moderately or greatly understood the type of information contained in the various databases or systems they check as well as the results they receive. 
Nevertheless, 68 percent of asylum officers thought additional training was moderately or greatly needed at their local offices on interpreting the results of identity and security checks—a view held by 88 percent of supervisors (35 of 40). Sixty-three percent of asylum officer respondents thought more training was moderately or greatly needed locally on conducting these checks—a view held by 85 percent of supervisors (34 of 40). This view was fairly consistent across all eight Asylum Offices—that is, at least half of the supervisors reported this training need for asylum officers in all eight Asylum Offices, while the majority of asylum officers reported this need in seven of the eight offices. In providing survey comments, one asylum officer expressed the opinion that training is needed on how to read the results from all the identity and security checks because, although officers do these checks as required, they do not know what to do if they get a hit that indicates that the applicant may be a national security or public safety threat. In an effort to keep asylum officers informed of policies and procedures for conducting checks, the Asylum Division issued an Identity and Security Checks Procedures Manual in 2005 and updated it three times between March 2007 and May 2008.

Interviewing and assessing credibility. Assessing credibility involves a determination of whether all of the evidence indicates that the applicant’s testimony is credible. An asylum officer may find the applicant to be credible if he or she determines, upon considering the totality of the circumstances and all relevant factors, that an applicant’s testimony is consistent, detailed, and plausible. The ability to elicit information through applicant interviews is a critical component of an asylum officer’s ability to distinguish between genuine and fraudulent claims.
Internal control standards for the federal government state that agencies should, among other things, ensure that management identifies skills personnel need to perform jobs and provide the needed training to staff. Responses from asylum officers and supervisors to several survey questions pointed to the need for additional training or learning opportunities to improve asylum officers’ interviewing skills, including assessing credibility. Sixty-four percent of asylum officers who responded to our survey reported a moderate or great need for more training on assessing credibility. To a much greater extent, supervisors, who are required to observe one interview each month conducted by each asylum officer they supervise, thought that asylum officers needed more interview-related training. Furthermore, 95 percent of supervisor respondents (38 of 40) reported that asylum officers moderately or greatly needed additional training in assessing credibility—more than in any of the 27 training areas about which we inquired—to improve their ability to adjudicate asylum claims. Eighty-eight percent of supervisors (35 of 40) reported that asylum officers needed more training in overall interviewing skills and 83 percent of supervisors (33 of 40) thought asylum officers needed more training on eliciting sufficient information during interviews. According to a supervisory asylum officer with local anti-fraud responsibilities, interviewing is the only realistic basis for fraud deterrence and new officers should be informed from the start that if they cannot develop sophisticated fraud-sensitive interviewing skills, they will not be able to make meaningful adjudications. According to 88 percent of asylum officers who responded to our survey, having the opportunity to observe interviews conducted by skilled interviewers would be a moderate to very useful way to improve officers’ interviewing skills, yet 53 percent reported not having had the opportunity to do so. 
Of those asylum officers who said observing skilled interviewers would be useful, 98 percent reported this would be moderately or very useful during their first year on the job and many thought this would be of value beyond their first year, as shown in table 1. Standards for internal control in the federal government state that federal agencies should ensure that management provides needed training to staff. Providing additional opportunities to observe skilled interviewers would help asylum officers refine their interview techniques to elicit information to use in assessing credibility, determining eligibility, and distinguishing between genuine and fraudulent claims.

Relevant statutes (the REAL ID Act and U.S. asylum law). Given that writing a legal analysis is part of every asylum decision they make, asylum officers must know how to read and interpret precedent decisions and stay current with case law. The REAL ID Act of 2005 made changes that apply to asylum adjudications of applications filed on or after May 11, 2005. In addition to developing AOBTC lesson plans, the Asylum Division developed training materials on the REAL ID Act and required that all Asylum Offices provide local training no later than May 30, 2006. Nevertheless, when we conducted our surveys in March through May 2007, 68 percent of asylum officers and 90 percent of supervisory asylum officers (36 of 40) who responded reported that asylum officers had a moderate or great need for additional training on how to apply the REAL ID Act in adjudicating asylum decisions. Further, 44 percent of asylum officers and 78 percent of supervisory asylum officers (31 of 40) who responded to our survey indicated that asylum officers had a moderate or great need for additional training on U.S. asylum law, in general—for example, on case law and statutory and regulatory changes.

Time management.
Fifty-seven percent of asylum officer respondents and 85 percent of supervisory asylum officer respondents (34 of 40) reported that asylum officers had a moderate or great need for additional training on time management. An Asylum Office Deputy Director explained that without good time management skills, asylum officers can easily fall behind on their workload and it can be impossible to catch up. Later in this report we will discuss, in depth, how time constraints challenge asylum officers and how this affects their adjudications.

Using the Asylum Virtual Library. The Asylum Virtual Library is an online collection of documents produced and collected by the Asylum Division and Asylum Offices. Documents in the online library are organized into folders and include case law, country-conditions information, decision-writing templates, forms, policies and procedures, statistics, and training materials. Asylum Office personnel can find information by browsing through the folders or by conducting searches. Forty-four percent of asylum officers and 75 percent of supervisors (30 of 40) who responded to our survey reported that asylum officers had a moderate or great need for additional training on using the Asylum Virtual Library.

Although our surveys of asylum officers and supervisors revealed some widely held views about asylum officers’ and supervisors’ training needs, the Asylum Division does not have a framework in place for soliciting asylum officers’ and supervisors’ views on their training needs in a structured and consistent manner. Obtaining information in this manner would improve headquarters’ and Asylum Offices’ knowledge of asylum officers’ and supervisory asylum officers’ ongoing training needs and the ability to use training to meet those needs.
We have previously reported that a best human-capital practice among effective organizations is to survey or interview agency employees to obtain their views on training programs that might be needed and systematically consider and act on employees’ suggestions for improving or developing training, when appropriate. The Asylum Division requests general written feedback from new asylum officers on the training they received at AOBTC. In addition, all eight Asylum Office Directors stated that their offices had used ad hoc methods to obtain input from officers on unmet training needs, such as asking for feedback at training sessions or periodically e-mailing officers for training suggestions. However, these methods varied among offices and were not done in a consistent manner using a structured approach to collect the information. Nevertheless, 63 percent of the asylum officers who responded to our survey said their Asylum Office had not solicited their views on what training should be offered locally. Responses varied by Asylum Office. In four of the eight offices, between half and three-fourths of the asylum officers said their views had been solicited; but in the remaining four offices, no more than 15 percent said their views had been solicited. Training, Research, and Quality Branch officials said that, at the national level, they rely on Asylum Office Directors for feedback on whether they are meeting officers’ needs and also request information on asylum officer training needs and activities during monthly conference calls with local QA/Ts.

The Asylum Division has designed a three-tiered framework for conducting quality reviews in Asylum Offices and headquarters to help ensure the quality and consistency of asylum decisions, but local quality assurance reviews—one of the three tiers—do not always occur.
According to the Asylum Division, a “particularly high level” of quality is demanded in asylum decisions to protect the integrity of the legal immigration process and to avoid potentially serious consequences that could result if an applicant is harmed after being wrongfully returned to his or her home country or if an applicant who poses a threat to the United States is wrongfully permitted to stay. As such, it has designed a quality review framework that includes supervisory review, local QA/T review, and headquarters quality review. Together, these reviews are intended, among other things, to help ensure quality and consistency as well as identify deficiencies that might be addressed through training. This framework is in keeping with internal control standards for the federal government, which state that agencies should assure that ongoing monitoring occurs in the course of normal operations.

Supervisory review. The Asylum Division requires a supervisory asylum officer to sign off on every asylum decision to indicate that it is supported by law and that procedures are properly followed. Supervisory review is intended to assure quality and provide consistency in decision making—not to ensure the supervisor agrees with the specific decision the asylum officer reached. Asylum Office Directors generally agreed that the 100 percent supervisory review requirement was important for reasons that included the complexity of asylum adjudications. One Asylum Office Director characterized the requirement as the key to the Asylum Division’s success. The majority of the 171 asylum officers and 40 supervisory asylum officers who responded to our survey considered these reviews as moderately or very effective in ensuring compliance with procedures (72 percent and 90 percent, respectively) and improving the quality of decisions (53 percent and 83 percent, respectively).
Further, 37 percent of asylum officers and 73 percent of supervisors reported that the supervisory review promoted consistency in decision making.

Local quality-assurance review. The Asylum Division created the QA/T position in each Asylum Office to, among other things, review a sample of asylum decisions for quality and consistency and observe asylum officer interviews to assess interview techniques. According to the Chief of the Training, Research, and Quality Branch, QA/T reviews are similar to supervisory reviews in that both assess the legal sufficiency of decisions. In addition, QA/Ts are in a position to take a broader view by looking for consistency across each Asylum Office and identifying office training needs that may surface from their quality reviews. The 1997 QA/T position description outlined quality assurance review responsibilities in addition to training responsibilities as including (1) reviewing a representative sample of written decisions for quality, consistency, and timeliness with an emphasis on identifying patterns of inconsistency, faulty legal analysis, trends, misuse of country-conditions information, or procedural or technical errors; (2) reviewing sensitive cases before they are forwarded to Asylum Division headquarters for review (see the following section on headquarters quality review for more on these sensitive cases); and (3) observing interviews conducted by asylum officers to identify strengths and weaknesses in interviewing techniques. It further stated that QA/T responsibilities included developing weekly training sessions that address, among other things, the interviewing and adjudication deficiencies noted during quality assurance reviews.
Thus, by having responsibility for reviewing asylum officers’ decisions and observing interviews, as well as providing training, it was intended that the QA/T would be in a position to ensure any observed deficiencies could be addressed through training and brought to local management’s attention. In 1998, the Asylum Division also communicated that quality assurance responsibilities should occupy the greatest portion of a QA/T’s time, in contrast to the time they are to spend on their other responsibilities, including training. It further communicated that a proper role for the QA/T is to help the Asylum Office Director monitor the quality assurance of each supervisory asylum officer. According to the Chief of the Training, Research, and Quality Branch, QA/Ts are to look for ways to improve the quality of each office’s adjudications and identify training needs. Furthermore, the Chief stated that each office is to develop its own QA/T performance work plan and decide how many cases a QA/T should review for quality, including how to select cases for review. As a result of the discretion given to Asylum Offices, expectations for QA/T reviews of a sample of decisions varied among Asylum Offices. QA/T performance work plans included the expectation that QA/Ts either evaluate the quality and consistency of written decisions or note procedural or technical errors and deficiencies. QA/T reviews of a sample of decisions were routinely conducted in five of the eight offices, according to Asylum Office Directors and QA/Ts. For example, we were told that in one office, QA/Ts randomly reviewed 2 to 3 decisions each week; in another office, the QA/T randomly reviewed 12 cases each week; and in a third office, the QA/T reviewed all decisions that had been reviewed by a supervisor on 1 day every 2 weeks, focusing on a particular aspect of the decision. According to two QA/Ts, the deficiencies they most frequently identified involved asylum officers’ credibility assessments.
For example, one explained that asylum officers were not being sufficiently rigorous in probing applicants’ credibility. Although both QA/Ts said their reviews usually revealed minor deficiencies, one noted that egregious problems did occasionally surface, such as a decision that was not consistent with the latest case law. In three offices, QA/T reviews were not being conducted routinely. In two of these offices, competing work demands reportedly precluded QA/Ts from performing quality reviews or limited their frequency. For example, in one office, the QA/T was expected to evaluate an average of five or more decisions each week for quality and consistency to receive the highest rating in the related performance element. However, the Director of that office indicated the QA/T had been unable to meet that expectation because of the time demands of other responsibilities involving reviewing sensitive cases and training-related tasks. The Asylum Office Director of the third office told us in August 2007 of plans for that office to establish a process for the QA/T to randomly review two to four decisions each week. Although this Director informed us in August 2008 that the QA/T had begun conducting these reviews, the Director stated that the position had become vacant during the past year and thus local quality assurance reviews were not taking place. In addition, in six of the eight offices, including two that we visited, performance work plans included local expectations that QA/Ts observe and evaluate asylum officer interviews. However, in all three offices we visited, the QA/Ts told us they did not observe or evaluate asylum officer interviews, or did so only on occasion, because of other work demands. Therefore, the Asylum Division’s quality review framework was not being fully implemented in all the offices.
Fully implementing the quality review framework would better position the Asylum Division to examine the root causes of deficiencies and take corrective action, such as addressing deficiencies through training. A recent study of asylum adjudications by Georgetown University researchers found that there was considerable variability among individual adjudicators in Asylum Offices as well as in the immigration courts and the U.S. Courts of Appeals. Asylum Office Directors generally agreed that grant rates can legitimately vary among officers in one office as well as across offices. Nevertheless, four Directors either expressed concern about inconsistencies that result from differences among some asylum officers or supervisory asylum officers in their office or said that headquarters should be doing more to improve the consistency of asylum decisions or review the quality of work. According to the Asylum Division’s Deputy Chief, headquarters does not monitor asylum officers’ grant rates or decision patterns, which would provide some information regarding consistency in asylum decisions, in part because it does not want to suggest that there is a particular grant rate that is correct or desirable. Further, we asked six Asylum Office Directors if they monitored the grant rates of asylum officers in their offices, and five said they did not. The Asylum Division is considering developing a training course for QA/Ts and creating a new senior asylum officer position that, among other things, would have the authority to assess the consistency of supervisory decisions. Headquarters quality review. To help ensure consistency in novel or complex areas of the law, the Asylum Division reviews all asylum cases categorized as “sensitive” before an asylum decision is issued. Local Asylum Offices are required to submit certain headquarters-established categories of sensitive cases to the Asylum Division’s Training, Research, and Quality Branch in headquarters for quality review.
Sensitive cases include, for example, those involving issues related to national security, applicants who may have been involved in persecution or human rights violations, diplomats or other high-level government or military officials or their family members, and principal applicants who are under 18 years of age. In fiscal year 2007, Asylum Offices sent the Asylum Division 384 of these sensitive asylum cases for review, of which 28 percent were designated as national security cases. National security cases include cases in which the applicant may be a persecutor, terrorist, or risk to the security of the United States. In the previous fiscal year, Asylum Offices sent 263 such cases to the Asylum Division for review, of which 41 percent were considered national security cases. Asylum Division data indicate that, in fiscal year 2007, headquarters concurred with 86 percent of the decisions asylum officers made and, in fiscal year 2006, it concurred with 83 percent of asylum officers’ decisions. The data further indicate that when the Asylum Division did not concur, it generally required that the applicant be interviewed again or the written decision be modified, and that the decision then be resubmitted to headquarters for further review. Although the majority of immigration judges reported that EOIR’s training for newly hired immigration judges and annual training enhanced their ability to adjudicate asylum cases, the majority reported needing additional training in several areas, including identifying fraud. EOIR expanded its training program primarily for new immigration judges in 2006 and annually solicits input from immigration judges on their training needs. Since 1997, EOIR has sent newly hired immigration judges to a week-long training at the National Judicial College, which includes courses on immigration court procedures, immigration law, ethics, caseload management, and stress management.
The training is delivered in a workshop format and incorporates lecture instruction, small-group exercises, and court-hearing demonstrations. Of the 67 immigration judges who came on board since 1997 and responded to our question about this training, 66 percent reported that the National Judicial College training moderately or greatly enhanced their ability to adjudicate asylum cases. In addition to the new immigration judge training program, EOIR also holds an annual conference for all immigration judges. This conference is generally a week-long training that includes lectures and presentations. During the 2007 conference, topics covered included immigration law and procedure, ethics, religious freedom, disparities in asylum adjudications, and forensic analysis. Eighty percent of immigration judges who responded to our survey reported that attending the annual conference in person either moderately or greatly enhanced their ability to adjudicate asylum cases. Although immigration judges generally attend this conference in person, the in-person conference was canceled in fiscal years 2003 through 2005 and again in 2008 because of budget constraints. In its place, a virtual conference consisting of recorded presentations was held in fiscal years 2004 and 2005. EOIR officials told us that because of budget constraints, a virtual conference was also offered in August 2008. The virtual conference included interactive computer-based training addressing asylum issues before the courts and a multimedia presentation that emphasized the importance and effect of immigration judge asylum decisions. Fifteen percent of immigration judges reported that a virtual conference moderately or greatly enhanced their ability to adjudicate asylum cases.
Although the majority of immigration judges who responded to our survey reported that EOIR’s new hire training and in-person annual conference enhanced their ability to adjudicate asylum cases, as shown in figure 4, the majority of immigration judges also reported needing additional training in certain areas. Seventy-six percent reported moderately or greatly needing additional continuing education on asylum issues, 74 percent reported that additional training on identifying fraud was moderately or greatly needed, 59 percent reported additional training on assessing credibility was moderately or greatly needed, and 55 percent reported that additional training on U.S. asylum law was moderately or greatly needed to enhance their ability to adjudicate asylum cases. National Association of Immigration Judges (NAIJ) representatives we interviewed, who also serve as immigration judges, stated that immigration judges could benefit if time were allotted each week for self-study and they received more training on assessing credibility. Beyond these specific training topics, immigration judges who responded to our survey also identified the need for other professional development opportunities to enhance their ability to adjudicate asylum cases. For example, 75 percent of immigration judges reported that informal meetings with other immigration judges were moderately or greatly needed. NAIJ representatives stated that EOIR’s training program lacks opportunities for immigration judges to meet and communicate with other immigration judges in their same circuit and that a circuit-specific regional conference, offered quarterly, would address this need. Other forms of training that many immigration judge survey respondents thought were moderately or greatly needed included opportunities to be detailed to the BIA (62 percent) and attending intergovernmental agency conferences (55 percent). 
With respect to opportunities for immigration judges to be detailed to the BIA, according to EOIR, two immigration judges were serving on the BIA as of August 2007. EOIR implemented several changes to its training program in response to reforms directed by the Attorney General in 2006 and in alignment with its 2005–2010 Strategic Plan, issued in 2004, which prioritizes training for EOIR adjudicators. In September 2006, EOIR expanded its training for newly hired immigration judges by requiring an additional week of courses. It also extended the time newly hired immigration judges observed hearings from 1 week to 4 weeks. These changes were implemented after most of the immigration judges in our survey population would have received new hire training. Sixty-three percent of those responding to our survey reported that observing hearings conducted by other immigration judges moderately or greatly enhanced their ability to adjudicate asylum cases. EOIR also recognizes that new developments in immigration law necessitate that immigration judges receive timely, current, circuit-specific legal updates. As such, it distributes case summaries on a weekly basis and, in response to reforms directed by the Attorney General in 2006, launched a monthly publication that provides a more in-depth analysis of legal issues, case law, and statutory and regulatory developments. According to EOIR, although it has neither the time nor the funds to expand immigration judges’ opportunities to interact with each other outside their court locations, it believes such interaction would be of immense value. Lacking the opportunity in fiscal year 2008 for such interactions through an immigration judge conference, and having received feedback that immigration judges would like to observe other immigration judges, EOIR is structuring opportunities for peer court observation to take place within each immigration judge’s court.
EOIR’s 2005–2010 Strategic Plan calls for EOIR to annually identify training needs for all immigration court staff and, as such, EOIR uses a structured questionnaire to solicit immigration judges’ training needs at both the new immigration judge training and the immigration judges’ annual training conference. According to EOIR, it receives continuous training recommendations from immigration judges in the field, NAIJ, the Immigration Judges Advisory Committee, and new immigration judges’ training faculty. Further, Assistant Chief Immigration Judges may observe immigration court proceedings to determine whether immigration judges need additional training. However, according to EOIR, budget cuts resulted in reductions in spending in fiscal year 2008, including canceling planned conferences and curtailing training. According to EOIR, another means of providing quality assurance resides in its formal appeals board, the BIA. The BIA is the highest administrative body for immigration issues and is responsible for applying immigration and nationality laws uniformly throughout the United States. Unlike asylum officers’ decisions, all of which are reviewed by supervisors, immigration judges’ decisions are reviewed by the BIA only when DHS or the asylum applicant appeals a decision. In addition, according to EOIR, when a decision is appealed to the BIA, a transcript of the decision is sent to the immigration judge’s Assistant Chief Immigration Judge, who may review any or all of the transcribed decisions for quality assurance. Overall, 10 percent of all immigration judges’ decisions, which include asylum decisions, were appealed to the BIA in fiscal year 2007. Asylum officers reported difficulties in assessing the authenticity of asylum claims, despite mechanisms USCIS designed to help asylum officers assess claims, and also reported time constraints in adjudicating cases. Mechanisms USCIS designed included, for example, identity and security checks and fraud prevention teams.
Federal entities outside USCIS’s Asylum Division and FDNS also have a role in combating fraud and confirming the validity of claims, but their ability to provide assistance to asylum officers has been hindered by a lack of resources, competing priorities, and, in some cases, confidentiality requirements intended to protect asylum applicants and their families. In addition, asylum officers and supervisors reported that asylum officers do not have sufficient time to adjudicate cases in a manner consistent with their procedures manual and training, although management’s views were mixed. Furthermore, the Asylum Division established the productivity standard for asylum officers without the benefit of empirical data. Asylum officers face challenges in assessing the authenticity of claims—that is, identifying fraud and assessing credibility. The very nature of the asylum system, which does not require applicants to submit documentation to support their claim, presents a challenge. Furthermore, economic incentives and benefits that accompany asylum can make the system a target of fraud. Abuse of the asylum system by particular groups has been reported in the past. For example, in 1999, a large-scale federal investigation began that resulted in the prosecution and conviction in 2005 of 23 individuals, including immigration brokers and consultants who aided thousands of Indonesian immigrants living in the United States in fraudulently applying for government benefits, including asylum. In a 2006 document prepared by FDNS staff at a USCIS service center, a particular nationality was identified as using fraudulent documents for the purpose of applying for asylum and establishing residency in the United States. According to the report, “the issues identified . . . constitute widespread abuse of the asylum system” by applicants from that country.
In responding to our survey, many asylum officers reported difficulties carrying out their fraud-related responsibilities and, as discussed earlier in this report, many reported needing more or better fraud-related training. According to the Asylum Division, asylum officers must consider whether fraud exists in the applicant’s claim, identity documents, or other documents provided to support the claim. Asylum applicants’ use of a fraudulent document does not necessarily constitute fraud in their overall claim. Even after identifying fraud, asylum officers must determine whether the fraud was committed knowingly to obtain an immigration benefit to which the applicant was not entitled. As shown in figure 5, asylum officers who responded to our survey most frequently (73 percent) reported document fraud as moderately or very difficult to identify. In September 2007, we reported that the ease with which individuals can obtain genuine identity documents for an assumed identity creates a vulnerability that terrorists can exploit to enter the United States with legal status. The figure also shows that about half of asylum officer survey respondents reported attorney fraud (53 percent) and identity fraud (49 percent) as moderately or very difficult to identify. According to Asylum Division data for fiscal year 2007, 38 percent of the applicants brought an attorney with them to their asylum interview; in fiscal year 2006, 30 percent did so. An unscrupulous attorney might prepare false asylum applications, including supporting affidavits and documents, which may be difficult to identify. For example, in February 2008, an attorney was indicted for preparing fraudulent asylum applications that included false documents with forged notary stamps and signatures.
With respect to identity fraud, asylum officers we interviewed explained that the identity of an applicant is sometimes hard to determine and that applicants may falsely claim to be from one country where persecution is known to occur, yet really be from another country in that region. In addition to identifying fraud, assessing credibility poses a challenge to asylum officers in making asylum decisions. As previously discussed in this report, asylum officers must make a credibility assessment regarding every applicant they interview. As shown in figure 6, the majority of asylum officers who responded to our survey reported significant challenges to assessing credibility in about half or more of the cases they adjudicated in the past year, including insufficient time to prepare and conduct research prior to the interview (73 percent), insufficient time to conduct the interview (63 percent), the lack of information regarding document validity (61 percent), the lack of overseas information on applicants (59 percent), and the lack of documents provided by applicants (54 percent). According to the Asylum Division, given the preponderance of evidence standard, it is possible to have reasonable doubts about whether applicants meet the definition of a refugee and still correctly find that applicants met their burden of establishing that their claim is true. Supervisory asylum officers also reported difficulties in carrying out their fraud-related responsibilities. Supervisors are responsible for identifying possible fraud trends as they review asylum cases from multiple asylum officers and can identify patterns individual asylum officers may not observe. Nevertheless, in responding to our survey, 54 percent of supervisors (21 of 39) reported that it was moderately or very difficult to identify emerging trends in fraud in the time they have available.
While asylum fraud presents a challenge to officers, its full extent is not known and is being systematically assessed for the first time. In March 2005, FDNS undertook an Asylum Benefit Fraud Assessment because, according to FDNS, asylum was a benefit program historically considered to be one of USCIS’s most fraud-prone or high-risk programs and reliable and comprehensive information about the types and prevalence of fraud in asylum applications was not available. The assessment randomly sampled 239 affirmative asylum applications filed during a 6-month period in 2005 that had either been issued a final decision or placed on hold. FDNS Immigration Officers (FDNS-IOs) were required to conduct a series of identity and security checks, some of which were mandatory at the time of adjudication, as well as additional checks and research. The FDNS-IOs also requested overseas document verification when they believed relevant information that would substantiate a fraud finding could be obtained. However, according to FDNS officials, as of July 2008, FDNS had not received a response on almost half of the 72 documents it sent overseas for verification. Other factors can also make it difficult to assess the full extent of asylum fraud. As FDNS officials explained, an asylee’s petition for overseas relatives to join the asylee in the United States—a process that can take years—can reveal that the stories of persecution an asylee presented when applying for asylum are not always consistent with information later provided by his or her relatives. The Director of Fraud Prevention Programs in State’s Bureau of Consular Affairs confirmed that the petition process has uncovered information that clearly demonstrated that the principal applicant’s asylum claim was fraudulent, including cases where personnel reached this conclusion after interviewing relatives or conducting investigations.
As of July 2008, FDNS had not finalized the Asylum Benefit Fraud Assessment and had not decided whether to do so without information from approximately 10 asylum termination interviews to be completed as a result of information obtained during the assessment. In April 2005, the Asylum Division conducted a review of past cases in which asylum applicants had alleged ties with terrorists or engaged in terrorist activity, although the ties may not have been known at the time of adjudication. The review identified vulnerabilities to terrorism in the U.S. asylum system and found that many of those vulnerabilities had been resolved with the 1995 asylum reforms. However, the report also identified vulnerabilities that remained postreform, including the lack of checks to identify individuals, the possibility of an applicant’s interpreter perpetrating fraud, and vulnerabilities outside of its control. According to information the Asylum Division provided, it has since taken steps to address vulnerabilities within its control. Some of these steps, and other mechanisms it has designed, can help asylum officers identify fraud and assess credibility—specifically, identity and security check requirements, fraud prevention teams with anti-fraud responsibilities, monitoring applicants’ interpreters during asylum interviews, and tracking preparers suspected of fraud. However, because the extent of asylum fraud and how it has changed over time is not known, it is difficult to assess the effect of these measures on the identification of fraud. Furthermore, it is difficult to know the extent to which any of these measures have deterred fraud. Identity and security checks. Security check requirements have increased, particularly since the terrorist attacks of September 11, and asylum staff at all levels viewed them as helping ensure the integrity of asylum decisions.
Asylum officers are required to ensure that multiple identity and security checks are conducted to confirm an applicant’s identity, identify applicants who pose a national security or public safety risk, and resolve certain eligibility issues. Asylum officers must confirm that all checks are initiated prior to issuing a decision and obtain cleared results of checks before issuing a grant of asylum. Some identity and security checks are automatically initiated immediately after an applicant files an asylum application at the USCIS Service Center; however, many are initiated and completed during the adjudication process. According to Asylum Office Directors, in six of the eight Asylum Offices, asylum officers conduct the required identity and security checks for the cases they adjudicate. In the remaining two offices, asylum officers are designated, on a rotational basis, to conduct identity and security checks for all asylum officers in the office; however, each asylum officer must still review the results of these checks for each case he or she is assigned. Each supervisor must confirm that documentation of the security checks is accurately completed in accordance with the decision being issued before signing off on the decision. The Asylum Division has worked with other federal entities to provide asylum officers with access to databases to conduct identity and security checks, and it has implemented new identity and security checks and expanded the requirements for existing ones. Some of these checks are required for all USCIS adjudications; others are required only in an asylum adjudication. The Asylum Division has required asylum officers to check the following databases: Interagency Border Inspection System (IBIS). IBIS is a multi-agency database aimed at improving border enforcement and facilitating inspections of applicants for admission into the United States by identifying threats to national security or public safety.
The database interfaces with another system that includes law enforcement data, including information on immigration law violators, individuals with a criminal history or who are subject to criminal investigations, or suspected terrorists. U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT). In 2004, the Asylum Division and the Department of Homeland Security’s US-VISIT office worked together to develop a mechanism to provide asylum officers with access to the US-VISIT database through a web-based interface tool, US-VISIT SIT. US-VISIT collects, maintains, and shares biometric and other information on certain foreign nationals entering and exiting the United States. This tool allows asylum officers to ensure that the applicant is not identified by US-VISIT as a national security or public safety threat and that the applicant who appears for the interview is the same individual who appeared earlier for fingerprinting. State’s Consular Consolidated Database (CCD). The Asylum Division worked with State to provide asylum officers with access to information in the CCD, which contains records about visa applications. The database may contain biometric data and copies of information an applicant presented to a State Consular Officer when applying for the visa. Such data may be valuable to asylum officers in providing information about the identity, previous travel history, method of entry into the United States, or background of an asylum applicant. The Asylum Division has also made several changes expanding the requirements for several existing identity and security checks, including the following: Since February 2003, asylum officers have been required to check the name of every asylum applicant against the Deportable Alien Control System (DACS) prior to adjudicating an asylum case, in addition to when the asylum application was filed. DACS contains records of individuals who have been detained by ICE or placed in removal proceedings.
Prior to 2003, asylum officers completed this check when the asylum application was filed and repeated the check only for cases resulting in a grant of asylum. Since November 2006, asylum officers have been required to verify that every asylum applicant aged 14 to 75 has been fingerprinted prior to the asylum interview, in addition to obtaining fingerprint results prior to issuing a grant of asylum. This gives the asylum officer the opportunity to review any available information associated with the applicant’s fingerprint records prior to or at the time of the interview. The asylum officer must reschedule the interview if an applicant has not been fingerprinted in advance. Prior to this change, the requirement called only for asylum officers to obtain fingerprint results in cases that resulted in a grant of asylum. When an asylum application is filed, RAPS automatically requests an FBI name check for every applicant aged 14 to 79. Since 2002, asylum officers have been required to obtain the results of the FBI name check before granting asylum. Prior to 2002, this check was initiated at the time an applicant filed an asylum application, but obtaining a result was not necessary before issuing a grant. Since 2005, RAPS has automatically initiated FBI name checks on aliases, maiden names, and alternate dates of birth. Seventy-eight percent of asylum officers who responded to our survey reported that, overall, they found identity and security checks to be moderately or very useful in identifying or providing information on individuals who pose a risk to national security or public safety. Moreover, as figure 7 shows, the majority found each of the required identity and security checks to be moderately or very useful. However, as noted by an asylum officer we interviewed, applicants who enter the United States without inspection or who use a false identity are less likely to be identified by these checks.
According to the Asylum Division, checking applicants in the US-VISIT system had significantly mitigated existing vulnerabilities in the program by locking in an applicant’s identity through fingerprint and photograph at the earliest point possible in the application process and searching against other biometric databases to confirm an applicant’s identity and identify potential derogatory information. The Asylum Division further noted that, while the US-VISIT system is vast, it is not an exhaustive warehouse of biometric prints obtained by all U.S. government agencies or by other governments. According to an Asylum Office Director, between fingerprints, FBI name checks, IBIS, US-VISIT, and CCD, it would be reasonable to expect that a national security risk would be identified if such information were contained in one of the security databases. In addition, 72 percent of asylum officers who responded to our survey reported that, overall, they found identity and security checks to be moderately or very useful in providing information regarding an applicant’s eligibility for asylum. At least half of the respondents found each of the required checks to be useful for this purpose, as shown in figure 8. Fraud-prevention teams. Asylum Offices have fraud-prevention teams composed of at least one FDNS immigration officer (FDNS-IO) and one Fraud Prevention Coordinator (FPC) who are tasked with anti-fraud responsibilities. USCIS assigned responsibilities to FDNS-IOs at Asylum Offices that included tracking fraud patterns for FDNS, apprising Asylum Offices of fraud trends, resolving national security “hits,” addressing fraud-related leads provided by asylum officers, liaising with law enforcement entities, and referring cases of suspected fraud to ICE. FDNS-IOs are precluded from performing routine functions associated with the adjudication process, such as the resolution of non-national-security-related background checks or the review of suspect documents.
The work of FDNS-IOs is directed by local Asylum Office management and by FDNS headquarters. As of July 2008, a total of 14 FDNS-IOs were located in Asylum Offices and, according to Asylum Division staff, additional positions may be authorized in the future. In addition, each Asylum Office has at least one FPC, usually a supervisory asylum officer with additional fraud-related responsibilities as a collateral duty. Duties vary and are directed by local office management. FPCs may work closely with FDNS-IOs but, in contrast to FDNS-IOs, FPCs have a direct role in supporting asylum officers in their adjudication decisions. According to an official in the Asylum Division’s Operations Branch, FPCs may also review fraud referrals that asylum officers make to the FDNS-IOs to ensure quality and determine fraud trends. The specific tasks performed by anti-fraud staff, and who performs them, varied across offices. For example, either an FDNS-IO or an FPC might prescreen applications for fraud indicators, coordinate requests for document verification or overseas information, track interpreters or preparers suspected of fraud, communicate fraud trends to asylum officers, review national security “hits,” participate in or communicate with interagency task forces, or provide fraud-related training to office staff. Interpreter monitors. To address concerns regarding fraud and the quality of interpretation among some of the interpreters that non-English-speaking applicants are required to bring to their interviews, the Asylum Division began phasing in the use of contracted telephonic interpreter monitors in the first half of 2006. According to the Asylum Division’s 2003 report on its interpreter monitoring pilot program, investigations revealed that interpreters were engaging in fraudulent behavior, such as altering asylum applicants’ testimony and coaching applicants during interviews.
In May 2006, the Asylum Division reported that the interpreter monitoring contractor had been unable to accommodate 11 to 13 percent of requests for interpreter monitors in March and April 2006, and thus did not meet its goal to provide monitors 90 percent of the time. The interpreter monitoring program was intended as an interim step in combating interpreter fraud and ensuring accurate interpretation in the interview. USCIS plans to issue a rule that would require the Asylum Division to provide professional interpreters. According to Asylum Division officials, the Asylum Division has prepared a request for a multiple-award contract for interpreter services and expects to have the contract in place by the end of September 2008. At that point, it would curtail its approach of monitoring applicants’ interpreters. Nevertheless, asylum staff indicated that interpreter monitors have improved the interviewing process and helped combat fraud. After an initial assessment of the interim project in May 2006, the Asylum Division concluded that monitors were successful in assisting asylum officers in obtaining information from applicants and deterring fraudulent interpreters, although the deterrent effect could not be quantified. Asylum officers who responded to our survey also viewed interpreter monitoring as successful in combating interpreter fraud and helping genuine refugees communicate their claim. Specifically, 87 percent indicated that interpreter monitors were very or moderately useful in deterring interpreters from intentionally misinterpreting, while 55 percent reported that it is very or moderately easy to identify interpreter fraud when interpreter monitors are used. Eighty-two percent reported that interpreter monitors were very or moderately useful in helping genuine refugees clearly communicate their claim and avoid misunderstandings due to poor interpretation.

Tracking preparers.
To help identify applications completed by suspected fraudulent preparers, in July 2007, the Asylum Division began systematically tracking information on the preparer of each asylum application. The Asylum Division noted the difficulties of addressing fraud perpetrated by preparers—that is, individuals who assist asylum applicants with the completion of their asylum applications. Preparers of fraudulent claims have been known to produce applications containing, for example, false claims of persecution and to coach applicants on how to exploit the sympathies of asylum officers. The Asylum Division issued guidance in July 2007 that introduced new procedures to collect preparer information and instructed asylum officers to verify that USCIS Service Centers entered preparer information into RAPS. The guidance instructs asylum officers to gather information during the interview regarding the circumstances under which the application was prepared, including who prepared the application. According to the Asylum Division’s Deputy Chief at the time of our review, the division anticipates this effort will allow the collection and analysis of data on preparers and could help identify applications prepared by preparers determined to be fraudulent.

Other planned initiatives. The Asylum Division is also considering several other anti-fraud efforts that are in various stages of planning and have been highlighted as key initiatives for fiscal year 2008. These plans include working with the Department of Defense to develop a systematic way of processing fingerprints for asylum seekers through Department of Defense systems, using software to scan applications to identify common text and data, and furthering the exchange of information with Canada and other countries. With respect to Canada, for example, USCIS is exploring the feasibility of systematically exchanging information submitted by asylum seekers.
Federal entities outside the Asylum Division and FDNS also have a role in combating fraud and confirming the validity of claims, but their assistance to asylum officers has been hindered by a lack of resources, competing priorities, and—in some cases—confidentiality requirements intended to protect asylum applicants and their families.

ICE’s Forensic Document Laboratory (FDL). Although the majority of asylum officers who responded to our survey reported it was difficult for them to identify fraudulent documents, FDL—the federal government’s forensic crime laboratory dedicated to detecting fraudulent documents—has been hindered in assisting asylum officers by competing priorities and a lack of exemplar documents, the authentic travel or identity documents FDL uses to make comparisons in forensic examinations. Due to resource limitations, FDL prioritizes cases so that those in which the individual is detained receive highest priority and requests from asylum officers—those involving individuals with no set court date—are among those receiving the lowest priority. According to FDL data for fiscal year 2007, when FDL responded to requests from Asylum Offices and other entities with the same relative priority, it took an average of 122 days, with response times ranging from 1 to 487 days. Because the Asylum Division recognizes it is unlikely that FDL will respond to a document examination request before an adjudication decision must be made given asylum processing time requirements, asylum officers are instructed to submit documents to FDL only if they or their supervisor believe that the analysis may change the outcome of the decision. In addition, according to FDL’s Unit Chief, FDL often does not have the kinds of documents asylum seekers submit to asylum officers to support their claims, such as overseas birth certificates, marriage certificates, or medical records.
Recognizing that FDL’s ability to verify certain types of documents is hindered by its lack of relevant exemplars, the Asylum Division requires each Asylum Office to have at least two FDL-trained staff to provide training to other officers, as discussed earlier in this report. In addition to assisting officers in identifying fraudulent documents, these staff are to advise officers on whether FDL is likely to have an exemplar for the document they want verified.

State and USCIS overseas offices. State and USCIS overseas offices can play an important role in helping asylum officers distinguish between genuine and fraudulent claims by providing overseas information on asylum applicants and their claims. According to USCIS officials we interviewed, overseas investigations may be one of the best methods of verifying the facts alleged by applicants. An FDNS official further explained that far more fraud could be uncovered if more work were conducted overseas, and this is especially true for asylum cases. State and USCIS may be able to provide information on a particular asylum case by verifying or obtaining overseas documents, such as medical or employment records; by providing an assessment of the accuracy of an applicant’s assertion about country conditions or the applicant’s situation; or by engaging in investigations. In our surveys we asked asylum officers and supervisory asylum officers about the usefulness of obtaining overseas medical and employment records, or verification of such records. Eighty-two percent of asylum officers and 90 percent of supervisory asylum officers (36 of 40) who responded thought that these records or verification of these records would be moderately or very useful in adjudicating cases. Several respondents explained that overseas information can help them verify accounts of medical treatment, encounters with police or military forces, or political affiliation that relate to an applicant’s claim.
Furthermore, about half the asylum officers indicated they needed, but did not have, such information (53 percent for medical records; 49 percent for employment records) in about half or more of the cases they adjudicated in the past year. Preliminary results of the Asylum Benefit Fraud Assessment further suggest the value of obtaining overseas information on asylum applications. USCIS reported preliminary data in January 2007 indicating that it had received an overseas response for 13 requests for document verification, of which 9 were found to support a finding that the asylum claim was fraudulent. USCIS’s Refugee, Asylum, and International Operations Directorate, of which the Asylum Division is a component, proposed creating 54 new positions in fiscal year 2008, including positions in several overseas posts to be focused on fraud detection and national security. The fiscal year 2008 plan requires approval from State for the overseas positions. As of July 2008, State had approved most of them. According to the Asylum Division’s Acting Deputy Chief, the added capacity may enable increased assistance to asylum officers, although it is too early to determine to what degree. Despite adjudicators’ views that overseas information can be useful, overseas offices’ ability to respond to asylum officers’ requests for assistance has been hindered by competing priorities, resource constraints, and challenges associated with respecting the confidentiality of asylum applicants to avoid placing them or their relatives at greater risk of harm. Recognizing demands on overseas resources, the Asylum Division instructs asylum officers to limit overseas requests to those cases where such information is essential to making a final asylum determination.
According to State procedures, in answering requests for specific information on asylum cases, State generally gives priority to requests from immigration courts over requests from asylum officers, although it also gives priority to requests from Asylum Division headquarters regarding sensitive cases. A Deputy Director within State explained that requests from immigration courts are given higher priority because individuals appearing before an immigration judge face the possibility of deportation. Confidentiality requirements designed to protect applicants’ safety can further constrain obtaining overseas information because making such inquiries of foreign government agencies can put asylum applicants or their families at risk by releasing information to the public or to alleged persecutors. These limitations notwithstanding, USCIS and State have worked together to improve asylum officers’ access to information regarding asylum applicants’ visa applications. Since May 2006, asylum officers have had access to State’s CCD. Because not all information included on the visa application is captured in CCD, the Asylum Division issued procedures in March 2007 on how asylum officers can request full visa application information from State if such additional information is material to the applicant’s claim and can influence the adjudication decision. These procedures were disseminated close to the time we conducted our surveys. Ninety-one percent of asylum officers and 93 percent of supervisory asylum officers (37 of 40) who responded thought that having the entire visa application in addition to what is available in CCD would be moderately or very useful in adjudicating cases. Fifty-five percent of asylum officers said they needed, but did not have, the entire visa application in about half or more of the cases they adjudicated in the past year.
The process for requesting overseas information on asylum applicants, other than visa applications, varies among Asylum Offices, and USCIS and State have been working on improving procedures for making these requests. According to Asylum Division officials, asylum officers are to consult their supervisors when they desire overseas assistance. However, the process for initiating requests for overseas information—that is, whether requests are made through FDNS or Asylum personnel—varies among Asylum Offices. Seventy-four percent of asylum officers and 43 percent of supervisors (16 of 37) said they did not understand or had no more than a slight understanding of the process their office used for requesting overseas verification services. In June 2008, the Asylum Division’s Acting Deputy Chief told us that USCIS and State were in the process of developing procedures that streamline the current process for requesting overseas assistance. According to an Operations Branch official, once the procedures are updated, the Asylum Division plans to provide training on the new procedures.

ICE’s Office of Investigations. USCIS and ICE have an agreement that USCIS will refer articulated suspicions of fraud to ICE, which is to decide within 60 days whether to accept or decline a case for investigation. If ICE declines a case or does not respond within that time, the FDNS-IO is responsible for taking further action. According to the Chief of FDNS, ICE declines about two-thirds of FDNS’s requests for investigation. FDNS data showed that of the 58 requests that FDNS-IOs at Asylum Offices sent to ICE for investigation during fiscal year 2007, ICE had declined 33, accepted 12, and, as of July 2008, had not yet decided on the remaining 13 requests. According to the Acting Chief of ICE’s Identity and Benefit Fraud Unit, ICE data showed that in fiscal year 2007, ICE opened 128 asylum fraud investigations, 70 of which were based on USCIS referrals.
ICE investigations of asylum fraud can result from fraud referrals made by confidential informants, federal and local law enforcement personnel, as well as USCIS personnel. USCIS referrals can include those from asylum officers, FDNS-IOs, or district examiners. However, officials from the Identity and Benefit Fraud Unit explained it is difficult for ICE to identify the exact number of asylum fraud investigations because of the way a case may be recorded. Cases involving asylum fraud often involve other types of fraud, such as identity or marriage fraud, and may be recorded under a category relating to other fraud found in the investigation. Furthermore, a single conviction may involve an individual who was associated with numerous sham asylum applications. According to ICE officials, resource constraints preclude ICE from investigating all fraud referrals. Asylum fraud is difficult to investigate and resource-intensive because asylum claims often lack supporting evidence to facilitate investigations, according to the Acting Chief of ICE’s Identity and Benefit Fraud Unit. These investigations can take several years to complete. According to ICE, its investigations of asylum fraud most often target larger-scale conspiracies, and individual applicants are given a lower priority. ICE also gives asylum cases special attention when asylum applicants from certain countries might pose a threat to national security. Asylum Division personnel and Asylum Office fraud staff view ICE investigations of asylum fraud as a critical component in combating asylum fraud and, along with prosecutions, the best way to deter it. At one Asylum Office we visited, the Director stated that she believed prior local ICE activity had deterred fraudulent asylum applications. 
Furthermore, 42 percent of 107 asylum officers and 45 percent of 33 supervisory asylum officers who provided survey comments regarding what can be done to deter or cut down on asylum fraud shared views that actions such as investigating, prosecuting, or imposing penalties are needed to help deter fraud. In March 2006, we reported that taking appropriate and consistent actions against immigration benefit violators is an important element of fraud control and deterrence. Representatives from these and other federal entities outside USCIS participated in the Asylum Division’s Fraud Prevention Conference in December 2007, where conference leaders acknowledged that combating fraud requires both intra- and intergovernmental efforts. They further stressed the importance of finding administrative solutions to fraud, as prosecutions of asylum fraud are infrequent. The conference provided a forum for fraud detection and prevention personnel, investigators, attorneys, and personnel from USCIS, ICE, DOJ, and State to share information on current fraud trends, including specifics on suspected preparers who assisted, recruited, and sometimes duped clients to make fraudulent claims; methods used to make fraudulent claims; and indicators used to detect fraud.

The majority of asylum officers and supervisory asylum officers who responded to our survey reported that the 4 hours, on average, they have to complete an asylum case is insufficient to be thorough—that is, to complete the case in a manner consistent with their procedures manual and training—although views among managers varied. The 4-hour average is based on the productivity standard, which requires management to assign asylum officers work equivalent to 18 asylum cases in a 2-week period, allowing for 4 hours of training each week. The productivity standard is one of the elements in the asylum officers’ performance work plan that is used to rate an asylum officer’s performance.
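The 4-hour average follows arithmetically from the productivity standard described above. The sketch below is purely illustrative; it assumes a standard 40-hour work week, which the report does not state explicitly:

```python
# Illustrative derivation of the ~4-hour-per-case average implied by the
# productivity standard: 18 cases of work per 2-week period, allowing for
# 4 hours of training each week. The 40-hour work week is an assumption,
# not a figure stated in the report.
WEEKS_PER_PERIOD = 2
HOURS_PER_WEEK = 40          # assumed standard work week
TRAINING_HOURS_PER_WEEK = 4  # training time allowed under the standard
CASES_PER_PERIOD = 18        # case-equivalent workload assigned per period

available_hours = WEEKS_PER_PERIOD * (HOURS_PER_WEEK - TRAINING_HOURS_PER_WEEK)
hours_per_case = available_hours / CASES_PER_PERIOD  # 72 / 18 = 4.0
```

Under these assumptions, 72 adjudication hours spread across 18 cases yields the 4-hour average that survey respondents were asked about.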
As table 2 shows, 28 percent of asylum officers and 28 percent of supervisory asylum officers (11 of 40) reported that asylum officers need about 4 hours or less to complete an asylum case. However, 65 percent of asylum officers and 73 percent of supervisors (29 of 40) reported that asylum officers needed more than the standard 4 hours to complete a case. Many asylum officer survey respondents indicated time constraints hindered their ability to thoroughly adjudicate cases. For example, of 138 respondents who provided narrative comments explaining how they manage their caseload when they have insufficient time, 39 percent wrote that they rush through their work or cut back on doing country-condition research, interviewing, completing identity and security checks, or writing the assessment. Moreover, 43 percent of asylum officers reported that, during the past year, productivity standards hindered their ability to properly adjudicate in about half or more of their cases. Asylum officers are taught at AOBTC that they must work under time constraints and develop interviewing skills that will enable them to gather all the information they need. However, they are also informed of the danger of rushing through an interview, which could lead the asylum officer to incorrectly assess credibility. In conducting interviews with asylum officers, we asked 12 officers how much time they spend conducting an asylum interview. Eleven of the 12 asylum officers said that, of the 4 hours they generally have available for a case, they typically spend between 1 and 2 hours conducting the applicant interview. Nearly 30 percent of asylum officers reported that they were able to elicit sufficient information in asylum interviews to properly evaluate the claim no more than about half of the time.
Further, 92 percent reported that having more time to probe in an interview would moderately or greatly help them elicit better information during the asylum applicant interview to properly evaluate the claim, including assessing the applicant’s eligibility and credibility. The same percentage reported that having more time to prepare for and conduct research prior to an interview would provide moderate or great help as well. In addition, when asked what would help them better identify fraud, asylum officer survey respondents who provided comments most frequently cited having more time, with about 40 percent of the 98 respondents making such comments. For example, an asylum officer explained that although more tools to detect fraud are always useful, they are of little or no use if asylum officers are not given either time or correct training to use such tools. Another asylum officer stated that attorneys and preparers know that asylum officers do not have time to check into the claim and, thus, the “system is perpetuating fraud by not giving time to concentrate on the adjudication.” Sixty-four percent of supervisors (25 of 39) who responded to our survey indicated that asylum officers were not always completing a fraud referral sheet when they should in at least some of the cases they reviewed. Of the 111 asylum officer survey respondents who explained what prevented them from referring suspected fraud cases to their FDNS-IO, about half attributed this to time limitations. One of USCIS’s strategic goals is to combat fraud and, in 2007, the Asylum Division included measures for combating fraud in its performance work plan for supervisory asylum officers; however, it has not explicitly included such measures in its performance work plan for asylum officers.
Several asylum officers who provided comments to our survey explained that a disincentive exists to take the time to make a fraud referral when they suspect fraud. According to one asylum officer, there is no reason to make a fraud referral because the performance work plan does not reward it and it takes a lot of extra time. In October 2007, USCIS concurred with a DHS Office of Inspector General recommendation that adjudicators’ performance work plans include a measure for fraud detection and stated that it plans to address the issue. We reported in March 2003 that organizations should align individual performance expectations with organizational goals to improve performance by helping personnel connect their daily activities to organizational goals and encouraging them to achieve those goals. Accordingly, we agree with the Office of the Inspector General’s recommendation. Asylum Division officials told us in July 2008 that they had begun discussing this recommendation with USCIS and were looking to USCIS to take the lead on addressing the issue.

Although most asylum officers and supervisors reported that asylum officers needed more time to thoroughly adjudicate asylum cases, management views were mixed. Five of the eight Asylum Office Directors said that they believed that the 4 hours asylum officers are given to complete a case was not reasonable, while three considered it to be reasonable. One Asylum Office Director elaborated that asylum officers do not struggle to complete cases within 4 hours, in part because the majority of the asylum cases adjudicated in that office are older cases that are usually easier to adjudicate. This Director explained that older cases may be easier if country conditions have changed so dramatically over time that the asylum claim can no longer be sustained, or if they were filed before asylum reform solely for the purpose of obtaining employment authorization.
In the latter case, often the asylum claim is easily determined to be unsupported. The Asylum Division recognizes that asylum officers must work under time constraints and that the tasks and time involved in completing a particular case may increase due to factors such as a complicated story that takes additional time to fully elicit or several dependents being listed on an application. Nevertheless, the Chief of the Asylum Division said that the 4-hour average was sufficient and was not convinced that more time would lead to increased adjudication quality. Also, asylum officer performance work plan guidance states that under extenuating circumstances, supervisors can allow asylum officers to take more than the 4 days they typically have to provide their written decision to their supervisor.

Since 1999, asylum adjudication requirements have increased while the productivity standard, which was established without empirical data, has remained unchanged. As previously discussed in this report, since September 11, 2001, the Asylum Division added requirements for asylum officers to check additional identity and security databases and increased the procedural requirements regarding when to conduct certain checks. The Asylum Division estimates that 10 percent of asylum officers’ time is needed to conduct security checks and, in 2004, began including this estimate in its staffing projections. We discussed this projection with five of the eight Asylum Office Directors we interviewed. Three of the five estimated that asylum officers spent 10 percent of their time conducting security checks, and two stated that asylum officers spent more than 10 percent of their time conducting these checks.
According to one of these Directors, 30 minutes of the 4 hours asylum officers generally have to complete a case is needed to conduct identity and security checks, and a QA/T explained that if the results of an identity and security check identify a potential concern, resolving that concern can add an hour to the adjudication process. In addition, beginning in 2007, asylum officers have been required to confirm that Service Center staff entered preparer information in RAPS and question the applicant if no information was entered, as well as contact, swear in, and document contact with interpreter monitors. Furthermore, all of the eight Asylum Office Directors we interviewed commented that, over time, asylum cases have become more complex or that requirements for completing cases have increased. According to one Asylum Office Director, while tools have been provided to deal with the increased complexity of fraud in asylum applications, the asylum officers have not been given more time to use these tools. Although the overall caseload of the Asylum Division has steadily declined from about 450,000 cases in 2002 to 83,000 cases in 2007, this has not translated into asylum officers having more time to adjudicate asylum cases. If local management is not able to assign asylum officers 18 asylum cases per 2-week period, it assigns asylum officers other Asylum Division work that is equivalent to 18 asylum cases to compensate for fewer asylum cases. Given the other work assigned, asylum officers continue to have an average of 4 hours under the productivity standard to complete each asylum case assigned. According to six of the eight Asylum Office Directors, asylum officers in their offices are generally being assigned work that equates to 18 asylum cases—that is, they are assigned asylum interviews in combination with other work. 
To compensate for being assigned fewer than the 18 asylum cases, asylum officers adjudicated nonasylum cases (i.e., credible fear, reasonable fear, and NACARA cases) and performed additional work such as administrative closures and research projects. Because of the decline in overall caseload, the Asylum Division plans to take on new responsibilities for asylum officers that are similar to their current work, such as adjudicating Refugee/Asylum Relative Petitions. The adjudication of these petitions will be modeled on the current asylum adjudication process, requiring interviews and checks against US-VISIT, in addition to the other mandatory identity and security checks. The division also increased the number of asylum officers assigned to overseas details from about 12 officers in 2007 to 40 officers in 2008. According to the Asylum Division Chief, the productivity standard was established in 1999 as the result of discussions with the asylum officers’ union and management’s judgment. The Asylum Division had not conducted a time study or gathered empirical data. At that time, the productivity standard was reduced from 24 cases to 18 cases in a 2-week period. Asylum Division training materials explain that the productivity standard helps the Asylum Division achieve its mission of protecting refugees, while meeting quality and timeliness goals. Further, if the productivity standard is set too low, the Asylum Division would not have a reasonable ability to keep pace with new receipts given the staff available, whereas if it is set too high, the quality of adjudications would likely suffer. Asylum Division officials further explained that setting the productivity standard too low could result in adjudication delays that might encourage spurious filings of asylum applications for the purpose of obtaining employment authorizations. 
In May 2003, we reported that time studies, in general, have the substantial benefit of providing quantitative information that can be used to create objective and defensible measures of workload and can account for time differences in completing work that can vary in complexity. However, such studies do place some burden on personnel during data collection and involve other costs as well. Without empirical data on which to base the asylum officer’s productivity standard, the Asylum Division is not in the position to know whether asylum officers have sufficient time to conduct thorough asylum adjudications. Immigration judges’ responses to our survey indicated that key factors affecting their adjudications were similar to those that asylum officers identified. As shown in figure 9, of the 11 aspects of adjudicating asylum cases we inquired about in our survey, immigration judge survey respondents most frequently cited verifying fraud (88 percent) as a moderately or very challenging aspect of adjudicating asylum cases. The vast majority also reported time limitations (82 percent) and assessing credibility (81 percent) as moderately or very challenging. Most immigration judges who responded to our survey identified fraud and assessing credibility as significant challenges in adjudicating asylum cases. In assessing an applicant’s eligibility for asylum, immigration judges consider adverse factors, including the use of fraud to gain admittance to the United States and inconsistent statements made by the asylum applicant. As an immigration judge respondent explained, “it is very easy to suspect fraud, but as in all civil cases, fraud is one of the most difficult things to actually prove. Unless the DHS . . . 
can prove fraud by a preponderance of the evidence, or a respondent admits facts constituting fraud, the suspicion of fraud will remain just that.” Of the various types of fraud that we inquired about in our survey, the majority of immigration judges who responded reported that all the types of fraud were moderately or very difficult to identify, with attorney fraud (66 percent) and identity fraud (66 percent) most frequently reported as difficult to identify, as shown in figure 10. The majority of immigration judges also reported that each of these types of fraud presented a challenge in at least some of the cases they adjudicated over the last year. For example, as shown in figure 11, 94 percent of immigration judges reported that suspected fraud in the claim presented a challenge in at least some of the cases they adjudicated over the past year, and 93 percent reported the same for suspected document fraud. Most immigration judges who responded to our survey also reported assessing credibility as a challenging aspect of adjudicating asylum cases. In each decision, an immigration judge must include a detailed credibility finding. Eighty-one percent of immigration judges reported that assessing credibility was a moderately or very challenging aspect of adjudicating asylum cases and an area in which they needed additional training. Further, 48 percent of immigration judges cited assessing credibility as one of their top three greatest challenges in adjudicating asylum cases. In addition, a NAIJ representative stated that assessing credibility is very difficult and that immigration judges would be better able to explore issues relevant to credibility if they had more time in court to review testimony and evidence.
The majority of immigration judges who responded to our survey reported impediments to assessing credibility in about half or more of the cases they adjudicated over the past year, including a lack of documentary evidence (70 percent), lack of other overseas information on applicants (61 percent), and lack of document verification from overseas (56 percent), as shown in figure 12. An immigration judge survey respondent shared the view that because of a high level of fraud and abuse in asylum cases and in the process, any case-specific evidence the ICE trial attorneys could present to prove or disprove an asylum applicant’s case would be extremely useful in trying to reach a fair and just result for the parties. Although lack of overseas information was reportedly an impediment to immigration judges’ ability to assess credibility, according to EOIR, it is the role of the ICE trial attorney or the asylum applicant to gather information from overseas agencies and verify the authenticity of documents. In response to reforms directed by the Attorney General in 2006, EOIR designed the Fraud and Abuse Program that established a formal procedure for immigration judges, BIA members, and other EOIR staff to report suspected instances of immigration fraud or abuse. Prior to the implementation of this new program, immigration judges reported suspected fraud on an ad hoc basis primarily through management channels or EOIR’s Attorney Discipline Program. The goals of the Fraud and Abuse Program include protecting the integrity of proceedings before EOIR; referring, where appropriate, information to either law enforcement or a disciplinary authority; encouraging and supporting investigations and prosecutions; and providing immigration judges, BIA members, and other EOIR staff with source materials to aid in screening for fraudulent activity. 
According to EOIR, the program improves immigration judges’ ability to identify fraud by providing examples of prevalent forms of fraud and abuse and suggestions for the screening of boilerplate claims and common addresses. The program issues a monthly newsletter conveying such information and has also established a Web site. EOIR provided some training on the Fraud and Abuse Program. Although the majority of immigration judges who responded to our survey reported being somewhat or not at all familiar with EOIR’s new Fraud and Abuse Program, EOIR was in the process of informing immigration judges of this program when we conducted our survey during May through July 2007. As of July 2008, the manager of the program had conducted presentations at 26 immigration courts and at the annual immigration judge conference. A NAIJ representative stated that the Fraud and Abuse Program presentation that immigration judges received during the annual immigration judge conference was useful, but because the program was new, NAIJ had not received feedback from immigration judges indicating their use of the program. According to EOIR, the Fraud and Abuse Program is tracking all incoming referrals. As of July 2008, the program had received 132 referrals, including referrals of suspected asylum fraud and document fraud. Twenty-six of the 132 referrals were made by immigration judges. Patterns in referrals will be used to alert EOIR staff and other entities to fraud schemes. According to EOIR, as the Fraud and Abuse Program is relatively new, it remains flexible to respond to agency needs. The program has surveyed staff who attended the presentations at immigration courts about additional services they would like from the program and, according to EOIR officials, solicited immigration judges’ input in July 2008 to develop ideas for additional training, among other things. 
EOIR officials stated that the Fraud and Abuse Program Manager has developed internal benchmarks of performance to assess the program, such as responding to all referrals within 5 days and reviewing inactive cases every 60 days. Most immigration judges who responded to our survey reported time constraints as a challenge in adjudicating asylum cases, and EOIR has taken some steps to mitigate these challenges. Specifically, 82 percent of immigration judges who responded to our survey reported that time limitations were moderately or very challenging in adjudicating asylum cases and 77 percent reported that managing caseload was moderately or very challenging. The fact that the growth in the number of onboard immigration judges has not kept pace with overall growth in caseload and case completions, which include asylum cases, may contribute to this challenge. While, from fiscal years 2002 through 2007, the number of onboard immigration judges increased by 2 (from 214 to 216 immigration judges), caseload, which includes newly filed and reopened cases and cases pending from prior years, rose 14 percent from about 442,000 to about 506,000 and completions rose 20 percent from about 274,000 to about 328,000. The average caseload per onboard immigration judge rose 13 percent from 2,067 cases in fiscal year 2002 to 2,343 cases in fiscal year 2007. According to a NAIJ representative, time constraints can have an effect on the quality of decisions. The representative further explained that if a case needs to be delayed or rescheduled, it may be rescheduled as much as a full year later because of the volume of cases on an immigration judge’s schedule. According to EOIR, both an immigration judge’s overall caseload and the way the immigration judge manages that caseload affect the pressures the immigration judge experiences on the bench. A heavy caseload may limit an immigration judge’s ability to manage comfortably. 
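The percent changes cited above follow from the rounded figures reported in the text. As an illustrative check only (this is not GAO's methodology, just arithmetic on the reported numbers):

```python
# Sketch: reproducing the percent changes cited in the text from the
# rounded fiscal year 2002 and 2007 figures (illustrative only).

def pct_change(start, end):
    """Percent change from start to end, rounded to the nearest whole percent."""
    return round((end - start) / start * 100)

caseload_growth = pct_change(442_000, 506_000)      # caseload: ~442,000 -> ~506,000
completions_growth = pct_change(274_000, 328_000)   # completions: ~274,000 -> ~328,000
per_judge_growth = pct_change(2_067, 2_343)         # average caseload per onboard judge

print(caseload_growth, completions_growth, per_judge_growth)  # 14 20 13
```

The 14, 20, and 13 percent figures in the text are consistent with this arithmetic.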
Nearly all immigration judge survey respondents also reported needing more than the 4 hours off the bench that is provided for them to handle administrative matters. Fifty-two percent reported that they need more than 8 hours per week for administrative tasks and 45 percent said they need about 5 to 8 hours. Sixty-nine percent reported that they did not use their administrative time as intended about half the time; instead, they used that time to hear cases. EOIR stated that it monitors the caseload of each immigration court to identify courts that have been unable to meet their established goals for timely case adjudication and provides assistance to help those courts meet these goals. In 2006, we reported that EOIR helped courts address growing caseloads by detailing immigration judges, using technology, transferring responsibility for hearing locations, and establishing new courts. EOIR informed us in 2007 that it continues to employ these mechanisms. In 2007, when we surveyed immigration judges, 57 percent reported that having a visiting immigration judge detailed to their court somewhat or greatly helped their ability to manage their caseload, and 40 percent reported that having immigration judges from other courts hear cases via videoconference somewhat or greatly helped their ability to manage their caseload. In addition, because of the volume of cases that immigration judges handle, EOIR advises immigration judges to issue oral decisions immediately after hearing a case, deeming it generally to be the most efficient way to complete cases. However, 71 percent of immigration judges who responded to our survey reported that rendering an oral decision immediately after a hearing was moderately or very challenging. According to a NAIJ representative, oral decisions are difficult because immigration judges are trying to balance multiple tasks at once during a hearing—listening carefully to the testimony, asking follow-up questions, and applying case law correctly. 
In the representative’s view, having time to reflect after listening to testimony and having time to prepare a written decision would result in better decisions. However, according to EOIR, rendering oral decisions following a hearing allows both sides to hear the decision while the evidence is fresh in their minds and then make an informed choice whether to appeal the decision. Furthermore, immigration judges who reserve decisions for later quickly develop a backlog. According to EOIR, issues resulting from heavy caseloads are best addressed by increasing the number of immigration judges and staff available. To help immigration judges better manage their caseload, EOIR requested funding to hire 240 additional staff for immigration courts. According to EOIR, in developing its request, it considered budget guidance and the number of positions it could reasonably expect to fill in a given year. One hundred and twenty new positions were requested and funded in fiscal year 2007, 20 of which were immigration judge positions. Although EOIR requested funding for the remaining 120 positions in fiscal year 2008, it did not receive its full budget request. As a result, EOIR canceled its plans to fill an additional 120 positions in fiscal year 2008, including 20 immigration judge positions. Prior to the hiring of additional staff, most immigration judges who responded to our survey reported that having additional law clerks (98 percent), additional immigration judges (84 percent), and additional administrative court staff (77 percent) would moderately or greatly improve their ability to carry out their responsibilities. According to EOIR, as of May 2008, it was in the process of hiring approximately 38 immigration judges, composed of newly authorized positions and replacements for attrition. Until all authorized immigration court staff are on board, it is too soon to determine the extent to which increased staffing will affect immigration judges’ ability to manage their caseload. 
Adjudicating asylum cases is a challenging undertaking because asylum officers do not always have the means to determine which claims are authentic and which are fraudulent. USCIS has taken steps to instill quality and strengthen the integrity of the asylum decision-making process. However, asylum officers still face adjudication challenges, including asylum fraud, lack of information from entities outside USCIS that could help assess the authenticity of claims, and increased responsibilities without additional time to carry them out. With potentially serious consequences for asylum applicants if they are incorrectly denied asylum and for the United States if criminals or terrorists are granted asylum, asylum officers must make the best decision they can within the constraints that are placed on them. The mechanisms USCIS designed to promote quality and integrity in decision making can be better utilized to decrease the risk that incorrect asylum decisions are made. Eliciting information through applicant interviews is a challenging and critical component of an asylum officer’s ability to distinguish between genuine and fraudulent claims. By supplementing existing training with additional opportunities for asylum officers to observe skilled interviewers, the Asylum Division could improve asylum officers’ ability to elicit needed information during an applicant interview to help distinguish between a genuine and fraudulent claim. In addition, by developing a framework to solicit asylum officers’ and supervisors’ views of their training needs in a structured and consistent manner, the Asylum Division would help ensure that headquarters and Asylum Offices have more complete information from which to make training decisions. 
Furthermore, by more fully implementing its quality review framework, the Asylum Division would be in a better position to identify deficiencies in the quality of asylum decisions asylum officers make, identify the root causes of such deficiencies, and take appropriate action, such as focusing training opportunities. Insufficient time for asylum officers to adjudicate cases can undermine the efficacy of the tools that asylum officers do have, as well as USCIS’s goals to ensure quality and combat fraud. We recognize that conducting an empirical study of the time asylum officers need to complete a thorough adjudication, including conducting increased security checks and referring instances of suspected fraud when appropriate, would involve some expenditure of resources. However, doing so would better position the Asylum Division to know whether it is providing asylum officers with the time needed to do their job in accordance with their procedures manual and training. More recent tools, such as additional identity and security check information and the placement of FDNS immigration officers in Asylum Offices can be valuable, but only if asylum officers have the time to fully utilize them. 
To improve the integrity of the asylum adjudication process, we recommend that the Chief of the Asylum Division take the following five actions: explore ways to provide additional opportunities for asylum officers to observe skilled interviewers; develop a framework for soliciting information in a structured and consistent manner on asylum officers’ and supervisors’ respective training needs, including, at a minimum, the training needs discussed in this report; ensure that the information collected on training needs is used to provide training to asylum officers and supervisory asylum officers at the offices where the information shows it is needed or nationally, when training needs are common; develop a plan to more fully implement the quality review framework—and complement existing supervisory and headquarters reviews—to include, among other things, how to ensure that in each Asylum Office a sample of the decisions asylum officers make is reviewed for quality and consistency and the interviews asylum officers conduct are observed; and develop a cost-effective way to collect empirical data on the time it takes asylum officers to thoroughly complete the steps in the adjudication process and revise productivity standards, if warranted. We requested comments on a draft of this report from DHS, DOJ, and State. The departments did not provide official written comments to include in our report. However, in e-mails received September 12, 2008, the DHS and USCIS liaisons stated that DHS concurred with our recommendations. DHS and EOIR provided written technical comments, which we incorporated into the report, as appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Homeland Security, the Attorney General, and the Secretary of State. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. For further information about this report, please contact Richard M. Stana, Director, Homeland Security and Justice Issues, at (202) 512-8777 or at [email protected]. GAO staff members who were major contributors to this report are listed in appendix VII. We sent our Web-based survey to all asylum officers who were in their position at the end of fiscal year 2006. We received 189 responses from asylum officers, resulting in a 74 percent response rate. To ensure survey respondents had recent knowledge about the issues our survey explored, 18 of the 189 respondents were directed to not complete the rest of the survey because their responses to our initial questions indicated their primary responsibilities did not include adjudicating asylum cases or they had adjudicated no, or almost no, asylum cases over the past year. Although 171 asylum officers completed the survey, the number answering any particular question may be lower, depending on how many chose to answer any given question. In addition, for certain questions, respondents were instructed to skip particular questions based on their responses to previous questions. Each question includes the number of asylum officers responding to it. Our survey consisted of closed- and open-ended questions. In this appendix, we include all the survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. For a more detailed discussion of our survey methodology, see appendix IV. To corroborate information and pursue issues we identified during our research on the U.S. Asylum System and our interviews with agency officials, we developed and deployed three different Web-based surveys to (1) asylum officers, (2) supervisory asylum officers, and (3) immigration judges. 
We asked asylum officers about their views on areas including training, conducting identity and security checks, interviewing and assessing credibility, assessing fraud, country-condition information, workload, and decision making. We asked supervisory asylum officers about their views on areas including their own responsibilities and training as well as asylum officers’ responsibilities and training. We asked immigration judges about their views on areas including professional development, credibility and fraud in asylum cases, rendering an asylum decision, and caseload. GAO social science survey specialists along with GAO staff knowledgeable about asylum adjudications developed the three survey instruments. We sent drafts of the asylum officer survey and supervisory asylum officer survey to Asylum Division officials for preliminary reviews to ensure that our questions were clear and unambiguous, used clear terminology and appropriate response options, and that the survey was comprehensive and unbiased. We sent a draft of the survey of immigration judges to Executive Office for Immigration Review (EOIR) officials and Assistant Chief Immigration Judges for the same purpose. We also asked for and received comments from the asylum officer representative to the American Federation of Government Employees on the draft asylum officer survey and representatives from the National Association of Immigration Judges (NAIJ) on the draft immigration judge survey. We considered comments and suggestions from all parties and made revisions where we thought they were warranted. We conducted pretests of the three surveys to ensure that the questions were clear and concise, and refined the instruments based on feedback we received. We pretested the asylum officer survey with eight asylum officers with a range of experience levels in five different Asylum Offices. We conducted these pretests using a combination of in-person, telephone, and Web-based approaches. 
We conducted pretests of the supervisory asylum officer survey with three supervisors with varying levels of experience in three different Asylum Offices; all were conducted by telephone and one used a Web-based approach. We conducted pretests of the immigration judge survey by telephone with three immigration judges in three different immigration courts. To develop mailing lists for the asylum officer and supervisory asylum officer surveys, we obtained from the Asylum Division the name, Asylum Office, and e-mail address of every onboard asylum officer and supervisor along with the date each officer began his or her position. Similarly, to develop a mailing list for the immigration judge survey, we obtained from EOIR the name, immigration court, and e-mail address of every onboard immigration judge along with the date each immigration judge began his or her position. We excluded from this list Assistant Chief Immigration Judges because we provided them with the opportunity to review a survey draft and provide comments. To ensure that we solicited information only from those who had some basic level of experience on which to draw, we sent surveys only to individuals who had been in their position for at least approximately 6 months—that is, they were on board as of September 30, 2006, the end of fiscal year 2006. Two hundred fifty-six asylum officers, 56 supervisory asylum officers, and 207 immigration judges met the criterion. Asylum officer survey. We announced our upcoming Web-based survey of asylum officers on March 1, 2007, and e-mailed asylum officers a cover letter and link to the survey on March 5. The Chief of the Asylum Division also informed Asylum Office staff of our survey efforts and encouraged asylum officers to participate, noting that participation was voluntary, and directed Asylum Office management to provide all asylum officers with 2 hours of administrative time during which they could elect to complete the survey. 
During the period from March 5 through April 30 (the final deadline for completing the survey), we e-mailed reminder notices five times to asylum officers who had not responded, encouraging them to participate. On April 24, we followed up by telephone with 17 asylum officers—all those who had begun, but had not finished, the survey. We received 189 responses from asylum officers, resulting in a 74 percent response rate. Of the 189 respondents, 171 said that, over the past year, their primary responsibilities included adjudicating asylum cases and that they had adjudicated at least some affirmative asylum cases, which was the focus of our review. The remaining 18 asylum officers who said their primary responsibilities did not include adjudicating asylum cases or they had not adjudicated at least some asylum cases over the past year were directed to not complete the rest of the survey. Supervisory asylum officer survey. With respect to the Web-based survey of supervisory asylum officers, we announced the survey on March 12, 2007, and e-mailed supervisory asylum officers a cover letter and link to the survey on March 14. As with the asylum officer survey, the Chief of the Asylum Division also informed Asylum Office staff of our survey efforts and encouraged supervisors to participate, noting that participation was voluntary, and directed Asylum Office management to provide all supervisors with 2 hours of administrative time during which they could elect to complete the survey. During the period from March 12 through May 6 (the final deadline for completing the survey), we e-mailed reminder notices four times to supervisory asylum officers who had not responded, encouraging them to participate. On April 24, we followed up by telephone with 3 supervisory asylum officers—all those who had begun, but had not finished, the survey. We received 43 responses from supervisory asylum officers, resulting in a 77 percent response rate. 
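The response rates reported above follow directly from the survey populations given earlier (256 asylum officers and 56 supervisory asylum officers met the tenure criterion). A minimal, illustrative check of the arithmetic:

```python
# Sketch: verifying the reported response rates from the population counts
# given in the text (256 asylum officers, 56 supervisory asylum officers).

def response_rate(responses, population):
    """Response rate as a whole percentage."""
    return round(responses / population * 100)

print(response_rate(189, 256))  # asylum officers -> 74
print(response_rate(43, 56))    # supervisory asylum officers -> 77
```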
Of the 43 respondents, 40 said that they had reviewed at least some affirmative asylum decisions over the past year. The remaining 3 supervisory asylum officers who said they reviewed no or almost no asylum cases over the past year were directed to not complete the rest of the survey. Percentages for data from relatively small populations, such as supervisory asylum officers, may convey a level of precision that can be misleading because they can change greatly with minor changes in the data. Thus, in reporting supervisory asylum officers’ survey responses throughout this report, we generally identified the number in addition to the percentage of supervisory asylum officers who responded to a question in a particular way. Immigration judge survey. We announced our upcoming Web-based survey of immigration judges on May 25, 2007, and e-mailed a cover letter and link to the survey on May 30. The President of NAIJ encouraged immigration judges to participate. During the period from May 30 through July 29 (the final deadline for completing the survey), we e-mailed reminder notices five times to immigration judges who had not responded, encouraging them to participate. From July 12 through July 16, 2007, we followed up by telephone with the 65 immigration judges who had not completed the survey. We surveyed all 207 immigration judges who were on board as of September 30, 2006, and received 160 responses for a 77 percent response rate. Of the 160 respondents, 159 said that they had heard at least some asylum cases over the past year. The one immigration judge who had not adjudicated any asylum cases over the past year was directed to not complete the rest of the survey. In analyzing the three surveys, we computed descriptive statistics on the closed-ended survey responses and conducted a systematic content analysis on selected open-ended survey responses. (See app. I, app. II, and app. 
III, respectively, for aggregate responses to the asylum officer, supervisory asylum officer, and immigration judge surveys.) To analyze the content of responses of asylum officers and supervisors to particular open-ended questions, two staff members independently reviewed all the responses and identified preliminary response categories, and then mutually agreed on response categories. They subsequently reviewed the responses again and independently placed them into appropriate categories. Any discrepancies were discussed and resolved, with a third team member being consulted when needed. Because these were not sample surveys, but rather a census of all the relevant groups, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question was interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. We included steps in both the data-collection and data-analysis stages to minimize such nonsampling errors. As indicated above, social science survey specialists designed the draft questionnaires in close collaboration with GAO subject matter experts, and drafts were reviewed for accuracy by agency officials. Versions of the questionnaires were pretested with several members of each of the populations. Since these were Web-based surveys, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and thereby minimizing error. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. A second, independent analyst checked the accuracy of all computer analyses. To gain a better understanding of asylum adjudications, we visited three of the Asylum Division’s eight Asylum Offices—Los Angeles, San Francisco, and New York. 
The views we obtained at these three offices cannot be generalized to all eight Asylum Offices. However, since we selected these offices based on their diversity in size (based on the number of asylum officers and cases adjudicated), percentage of cases granted asylum, and geographic location, they provided us with an overview and perspective of the asylum process as well as potential challenges facing asylum officers. At each of the three offices, we conducted semistructured interviews with the Asylum Office Director and Deputy Director, a Quality Assurance and Training Coordinator, and the Fraud Detection and National Security Immigration Officer; and at two of the three offices, we interviewed the Fraud Prevention Coordinator. In addition, we interviewed a total of 14 asylum officers and 5 supervisory asylum officers among the three offices. At one office, we asked the Director to identify 2 asylum officers and 1 supervisory asylum officer for us to interview. At the other two offices, we selected asylum officers and supervisors using a random sampling approach that we stratified based on experience levels. The composition of each office in terms of number and experience levels of officers and staff availability affected whom we were able to interview. Between these two offices, we interviewed 12 asylum officers (5 officers with more than 8 years of experience whom we categorized as “very experienced,” 4 officers with between 2 and 8 years of experience whom we categorized as “experienced,” and 3 officers with less than 2 years of experience whom we categorized as “less experienced”). At each of these two offices, we interviewed 1 “very experienced” and 1 “experienced” supervisory asylum officer. Between the two offices, we also observed a total of 14 interviews that asylum officers conducted with asylum applicants. In addition, we observed a local training session at each of these two offices. 
To obtain an additional perspective on factors that may affect asylum officers’ and immigration judges’ adjudication of asylum cases, we interviewed U.S. Immigration and Customs Enforcement (ICE) Assistant Chief Counsels (also known as ICE trial attorneys) associated with immigration courts in Los Angeles and San Francisco, California, and New York City, New York, who were identified by an ICE Deputy Chief Counsel as having experience with asylum cases. As the government’s representative in removal proceedings, ICE trial attorneys see asylum cases that come before the immigration court. In Los Angeles and San Francisco, we met with a Deputy Chief Counsel and a total of three ICE trial attorneys; in New York City, we met with four ICE trial attorneys. To further our understanding of factors affecting immigration judges’ asylum adjudications, we visited the Los Angeles immigration court, where we interviewed four immigration judges and observed court proceedings, which included initial and merit hearings on asylum cases. We conducted this performance audit from December 2005 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Homeland Security’s (DHS) affirmative asylum process generally consists of several steps. The alien initiates the process by filing an asylum application at a U.S. Citizenship and Immigration Services (USCIS) Service Center, where the case is entered into the Refugees, Asylum, and Parole System and some background, identity, and security checks are automatically initiated. 
Service Center personnel send automatically generated notices to the applicants, requiring them to appear for biometrics collection, including fingerprints, at a USCIS Application Support Center prior to the asylum interview. Service Center personnel then send the applicant’s file to the Asylum Office that has jurisdiction over the applicant’s place of residence. Within 21 days of filing, the Asylum Office sends an automatically generated interview notice to the applicant. The Asylum Office is allocated at least 15 days to complete the adjudication after conducting the interview. Generally, cases are randomly assigned to asylum officers, who are required by law to conduct interviews with applicants within 45 days of the application filing date in the absence of exceptional circumstances. The purpose of the interview is to verify the applicant’s identity, establish the applicant’s alienage, evaluate the applicant’s credibility, and determine whether the applicant is eligible for asylum. Applicants are permitted, but not required, to bring their own attorney or accredited representative to the interview. If the applicant is not fluent in English, the applicant is required to bring an interpreter to the asylum interview. Within 4 days of the interview, the asylum officer is to prepare the written decision and submit it to a supervisory asylum officer. The supervisor is to review the decision to ensure that the asylum officer’s decision is supported by law and that procedures were properly followed. The supervisor either signs the decision or returns it to the asylum officer for correction. Applicants interviewed at Asylum Offices are required to return to the Asylum Office 2 weeks after the interview and within 60 days of filing the asylum application to receive the asylum decision. Within the Department of Justice (DOJ), the Executive Office for Immigration Review’s (EOIR) asylum process generally consists of the following steps. 
The applicant is to appear before an immigration judge for an initial hearing, during which the immigration judge is to, among other things, (a) ensure that the applicant understands the contents of the Notice to Appear, (b) provide the applicant information on free or low-cost legal representation available in the area, and (c) schedule a subsequent date to hear the merits of the asylum claim. At that time, the immigration judge also hears the pleadings of the U.S. Immigration and Customs Enforcement (ICE) trial attorney and the applicant. Prior to the merits hearing, the ICE trial attorney and the applicant or the applicant’s representative must submit applications, exhibits, motions, a witness list, and criminal history to the immigration court, if applicable. The ICE trial attorney and the applicant, or the applicant’s representative, may also submit prehearing briefs or statements for the immigration judge to review in advance of the hearing to narrow the legal issues. In some cases, the attorneys or the immigration judge may request a prehearing conference for reasons including narrowing the issues or exchanging information. A merits hearing is then held, during which the applicant (or the applicant’s representative) and an ICE trial attorney present the case before the immigration judge by generally making opening statements, presenting witnesses and evidence to the immigration judge, cross-examining, and making closing statements. The immigration judge may participate in the questioning of the applicant or other witnesses. At the end of the hearing, the immigration judge is to issue a decision that includes the facts that were found to be true, an accurate statement of the law, factors that were considered, and the weight that was given to the evidence presented (including the credibility of witnesses). 
If the applicant or ICE disagrees with the immigration judge’s decision, either party may appeal it to EOIR’s Board of Immigration Appeals (BIA) within 30 days. If the BIA ruling is adverse to the applicant, the applicant generally may file a petition for review in the U.S. Court of Appeals. During fiscal years 2002 through 2007, the Asylum Division received 391,763 new cases, of which 63 percent consisted of asylum receipts—that is, new or reopened asylum applications. As shown in figure 13, asylum officers’ annual caseload declined during this period, and completions declined from fiscal year 2005 through 2007. During this same period, authorized staffing for asylum officers ranged from a high of 332 officers in 2004 to a low of 291 officers in 2007, as shown in table 3, although not all asylum officers were considered available to conduct adjudications. The Asylum Division projects the number of “available asylum officers” by subtracting from authorized levels the number of staff designated to conduct overseas refugee adjudications and security screening and the number it projects will be unavailable due to activities such as training and leave. During fiscal years 2002 through 2007, the Asylum Division reported available asylum officers ranged from a high of 232 in 2003 to a low of 199 in 2005. It reported that 163 officers would be available in 2008. The Asylum Division may detail asylum officers from one office to another to assist offices with high caseloads. Immigration court receipts—that is, newly filed and reopened cases—totaled about 1.9 million cases during fiscal years 2002 through 2007, of which 22 percent were asylum, and the rest were other types of immigration cases. From fiscal year 2002 through fiscal year 2005, immigration judges’ caseload, which includes receipts and all cases still pending from the prior years, increased annually, but began to decline in fiscal year 2006 (see fig. 14). 
Completions increased annually from fiscal years 2002 through 2006, and declined in 2007. According to the Executive Office for Immigration Review (EOIR), immigration caseload is expected to increase by a minimum of 25,000 additional cases by 2008 as a result of current and planned DHS initiatives, such as the addition of detention facilities and beds and enhanced anti-smuggling programs. As shown in table 4, the number of authorized immigration judges increased from 216 in fiscal year 2002 to 251 in fiscal year 2007, with the most significant increase occurring in fiscal year 2007. At the same time, the number of immigration judges who were on board remained fairly constant, except for an increase in fiscal year 2006. In addition to the contact named above, Evi Rezmovic, Assistant Director, and Lori Weiss, Analyst-in-Charge, managed this assignment. Tracey Cross, Foster Kerrison, Maria Mercado, Catherine Kim, Yvette Gutierrez-Thomas, Jason Campbell, and Alana Miller made significant contributions to the work. Michele Fejfar, Elizabeth Wood, and Carolyn Boyce assisted with design, methodology, and data analysis. Lara Kaskie and Richard Ascarate provided assistance in report preparation, Frances Cook provided legal support, Tom Jessor provided expertise on immigration court issues, and Karen Burke and Etana Finkler developed the report’s graphics.
Each year, tens of thousands of noncitizens apply in the United States for asylum, which provides refuge to those who have been persecuted or fear persecution. Asylum officers (AO) in the Department of Homeland Security's (DHS) U.S. Citizenship and Immigration Services (USCIS), and immigration judges (IJ) in the Department of Justice's (DOJ) Executive Office for Immigration Review (EOIR) assess applicants' credibility and eligibility. GAO was asked to evaluate aspects of the asylum system. This report addresses the extent to which quality assurance mechanisms have been designed to ensure adjudications' integrity, how key factors affect AOs' adjudications, and what key factors affect IJs' adjudications. To conduct this work, GAO reviewed agency documents, policies, and procedures; surveyed all AOs, supervisory AOs, and IJs; and visited three of the eight Asylum Offices. These offices varied in size and percentage of cases granted asylum. Results of these visits provided additional information but were not projectable. USCIS and EOIR have designed quality assurance mechanisms to help ensure the integrity of asylum adjudications, but some can be improved. While 75 percent of AO survey respondents reported that basic training prepared them at least moderately well to adjudicate cases, they also reported that despite weekly training, they needed additional training to help them detect fraud, conduct security checks, and assess the credibility of asylum seekers. The Asylum Division does not consistently solicit AOs' and supervisory AOs' input on a range of their training needs. Without this, the Asylum Division lacks key information for making training decisions. The Asylum Division has designed a quality review framework to ensure the quality and consistency of asylum decisions. 
Although supervisors review all cases and headquarters reviews certain cases, other local quality assurance reviews rarely took place in three of the eight Asylum Offices, primarily due to competing priorities. By fully implementing its quality review framework, the Asylum Division would better identify deficiencies, examine their root causes, and take action. The majority of IJ survey respondents reported that training enhanced their ability to adjudicate asylum cases, although the majority also reported having additional training needs. EOIR expanded its training program in 2006, particularly for newly hired IJs, and annually solicits IJs' views on their training needs. Asylum officers reported challenges in identifying fraud and assessing applicants' credibility, as well as time constraints, as key factors affecting their adjudications. The majority of AO survey respondents reported that it was moderately or very difficult to identify various types of fraud, despite mechanisms designed to help identify fraud and assess credibility. Further, assistance from other federal entities to AOs in assessing the authenticity of asylum claims has been hindered in part by resource limitations and competing priorities. With respect to time constraints, 65 percent of AOs and 73 percent of supervisory AOs reported that AOs have insufficient time to thoroughly adjudicate cases--that is, in a manner consistent with procedures and training--while management's views were mixed. The Asylum Division set a productivity standard equating to 4 hours per case in 1999 without empirical data. Without empirical data on the time it takes to thoroughly adjudicate a case, the Asylum Division is not best positioned to know if its productivity standard reflects the time AOs need for thorough adjudications. Verifying fraud, assessing credibility, and time constraints are also key factors affecting IJs' adjudications. 
IJ survey respondents cited verifying fraud (88 percent) and assessing credibility (81 percent) as moderately or very challenging aspects of asylum adjudications. Responding to 2006 Attorney General reforms, EOIR implemented a program to which IJs can refer instances of suspected fraud and receive information to aid in fraud detection. Eighty-two percent of IJs reported time limitations as a moderately or very challenging aspect of their adjudications. EOIR has detailed IJs to courts with high caseloads and plans to hire additional staff, but it is too soon to know the extent to which additional staff will alleviate IJs' time challenges.
USPS, an independent establishment of the executive branch, is intended to be a financially self-sufficient entity that covers its expenses almost entirely through postal revenues. In April 2001, we placed USPS on our high-risk list for two reasons. First, in the short term, USPS’s ability to continue to fulfill its mission on a self-supporting basis was threatened because of projected annual losses of $2 billion to $3 billion, severe cash flow pressures, and debt approaching its statutory borrowing limit without any debt reduction plan. Second, in the long term, increasing retirement-related expenses threatened to reduce USPS’s future cash flows and place upward pressures on postal rates. We have been reporting on USPS’s financial challenges, including those related to funding its retiree health benefit liability, over the past decade. In May 2002, the Comptroller General testified that USPS had about $100 billion in liabilities, including an estimated $49 billion in unfunded retiree health benefit liability. Unlike pension liabilities, USPS had been funding its retiree health benefit liability on a pay-as-you-go basis—an approach in which USPS paid its share of premiums for existing retirees, with no prefunding for any future premiums expected to be paid on behalf of current retirees and workers. In May 2003, the Comptroller General testified that USPS’s accounting treatment—which reflected the pay-as-you-go nature of its funding—did not reflect the economic reality of its legal liability to pay for its retiree health benefits, and that current ratepayers were not paying for the full costs of the services they were receiving. Consequently, the pension benefits being earned by USPS employees—which were being prefunded—were recovered through current postal rates, but the retiree health benefits of those same employees were not being recognized in rates until after they retired. 
The Comptroller General testified that without a change, a sharp escalation in postal rates in future years would be necessary to fund the cost of retiree health benefits on a pay-as-you-go basis. Two laws, enacted in 2003 and 2006, reformed USPS’s pension liabilities and required it to prefund retiree health benefits: The Postal Civil Service Retirement System Funding Reform Act of 2003 changed USPS funding of its Civil Service Retirement System (CSRS) pension liabilities (based on “dynamic assumptions”) while retroactively transferring responsibility for funding the cost of CSRS benefits attributable to the military service of postal employees from the U.S. Treasury to USPS; required USPS to escrow the reduction in annual CSRS payments resulting from the funding changes in the act (about $3 billion); and required USPS to report to Congress on how it could use the CSRS savings realized after fiscal year 2005. USPS proposed to Congress in 2003 that the responsibility for funding the cost of CSRS benefits attributable to the military service of postal employees be transferred back to the U.S. Treasury and that it use the resulting savings to prefund its retiree health benefit liability. PAEA, enacted in 2006, transferred all responsibility for costs related to CSRS military service credit from USPS back to the U.S. 
Treasury, both retroactively and prospectively; this included all CSRS military service costs for postal employees since the inception of the Postal Service in 1971; established the PSRHBF to begin prefunding the health benefits of current and future postal retirees and transferred about $20 billion of “start-up” funds into the PSRHBF ($3 billion from the discontinued CSRS escrow—as USPS’s annual CSRS payment was suspended—and $17 billion from the surplus in the CSRS fund); required USPS to make annual payments ranging from $5.4 billion to $5.8 billion per year into the PSRHBF from fiscal years 2007 through 2016 to begin prefunding its retiree health benefit liability; and required OPM to calculate the remaining unfunded liability in 2017 and each subsequent year, and to calculate an amortization payment based on an amortization period that extends to 2056 or, if later, 15 years from the then-current fiscal year. As a result, in 2007 USPS began prefunding its retiree health benefits as its CSRS pension liability was significantly reduced and its annual CSRS payment was suspended. USPS stated in its 2007 Annual Report that such prefunding was a farsighted and responsible action that placed USPS in the vanguard of both the public and private sectors in providing future security for its employees, and augured well for its long-term financial stability, but also acknowledged that the required payments would be a considerable financial challenge in the near term. Contrary to statements made by some employee groups and other stakeholders, PAEA did not require USPS to prefund 75 years of retiree health benefits over a 10-year period. Rather, pursuant to OPM’s methodology, such payments would be projected to fund the liability over a period in excess of 50 years, from 2007 through 2056 and beyond (with rolling 15-year amortization periods after 2041). 
However, the payments required by PAEA were significantly “frontloaded,” with the fixed payment amounts in the first 10 years exceeding what actuarially determined amounts would have been using a 50-year amortization schedule. We testified in April 2007 that we had removed USPS from our high-risk list due in part to USPS’s financial improvements resulting from these congressional actions. From fiscal years 2003 to 2005, USPS’s annual pension expense declined by $9 billion. USPS had repaid over $11 billion of outstanding debt, reported $5.4 billion in cost savings and record high net incomes, and delayed rate increases from fiscal year 2003 until January 2006. Since fiscal year 2007, however, USPS has experienced significant financial challenges. USPS’s gap between expenses and revenues has grown significantly, as shown in figure 1. In addition, USPS’s outstanding debt to the U.S. Treasury increased from $2.1 billion at fiscal year-end 2006 to its current statutory borrowing limit of $15 billion. In fiscal year 2009, we returned USPS to our high-risk list due, in part, to a projected loss of $7 billion—and an actual loss of over $8.5 billion—in fiscal year 2010. For fiscal year 2012, USPS had a net loss of almost $16 billion, which included $11.1 billion for required PSRHBF prefunding payments that USPS did not make. Furthermore, USPS’s future financial outlook is bleak as it projects further declines in mail volume and revenue by fiscal year 2020. USPS projects that First-Class Mail—which is highly profitable and generated about 44 percent of USPS’s revenue in fiscal year 2012—will decline in volume by about 42 percent by fiscal year 2020, as shown in figure 2. During the economic downturn, there has been an accelerated diversion of business and individual mail to electronic alternatives, and some businesses have left the mail entirely. 
USPS further projects that an economic recovery will not bring a corresponding recovery in mail volume because of continuing social and technological trends that have changed the way that people communicate and use the mail. USPS has several initiatives to generate new revenue; however, such efforts are unlikely to generate enough revenue in time to offset the projected decline in mail volume. Limited increases in revenue require USPS to seek aggressive cost-saving initiatives to achieve financial stability. USPS’s plan includes savings from consolidating its mail processing and transportation networks, $5 billion in compensation and benefits, and $8.5 billion through legislative changes, such as moving to a 5-day delivery schedule. At the same time, USPS’s plan would also reduce the overall size of the postal workforce by roughly 155,000 career employees, with many of those reductions expected to result from attrition. USPS reports in the plan that half of its current career employees—283,000 employees—will be retirement eligible by 2016. In March 2010, USPS presented a detailed proposal to the Postal Regulatory Commission (PRC) to move from a 6-day to a 5-day delivery schedule to achieve its workforce-reduction and cost-savings goals. USPS projected that its proposal to move to 5-day delivery by ending Saturday delivery would save about $3 billion annually and would reduce mail volume by less than 1 percent. However, on the basis of its review, PRC estimated a lower annual net savings—about $1.7 billion after a 3-year phase-in period—as it noted that higher revenue losses were possible. In February 2012, USPS updated its projected net savings from 5-day delivery to $2.7 billion after a 3-year implementation period. As noted earlier, USPS has also proposed withdrawing from the Federal Employees Health Benefits Program (FEHBP) and administering its own health care plan for its employees and retirees. 
This report looks at retiree health benefit funding options assuming that USPS continues to participate in FEHBP under current provisions. Adoption of any of the funding approaches analyzed in this report would not by itself preclude USPS from continuing to pursue its proposal to administer its own plan. If USPS’s proposal was adopted and if it was expected to result in cost savings, these projected savings would be reflected in a lower liability, a lower unfunded liability, and lower prefunding contributions than otherwise. We will be issuing a separate report evaluating USPS’s proposal to administer its own health care plan. Related to whether USPS should prefund retiree health benefits, some stakeholders have argued that such prefunding is primarily responsible for USPS’s dismal financial condition and is unfair, arguing that no other entity is required to conduct such prefunding. According to a 2011 OPM Inspector General (OIG) report, however, postponing prefunding (deferring payments until later) is financially risky. The OPM Inspector General reported that future USPS customers (ratepayers) will have to pay for expenses that the USPS is incurring today and added that deferring payments will likely hurt the USPS’s ability to compete in the future and affect its ability to improve its financial situation. The report added that USPS would lose the benefit of the interest that its deposits into the funds would have otherwise earned. This interest would have reduced USPS’s future unfunded liabilities for these benefits. Consequently, postponing prefunding would require the USPS to make larger contributions in the future. At the end of fiscal year 2012, OPM estimated that USPS’s total retiree health benefit liability was almost $94 billion, whereas the PSRHBF balance was about $46 billion (49 percent), leaving USPS with an unfunded liability of about $48 billion. 
Approximately half of the $94 billion liability is for retired annuitants and their survivors while the other half is for current career employees. At fiscal year-end 2012, USPS had about 471,000 annuitants and survivors who were receiving retiree health benefit coverage and about 528,000 career employees who could become eligible for such coverage when they retire. The liability for current employees is a portion of the ultimate liability for their future retiree benefits; the liability accrues steadily over their working years, from zero at date of entry into FEHBP to the full liability at retirement. Contrary to some claims, there is no liability held, nor contributions made, for any future employees who have yet to be hired or yet to be born. The PSRHBF’s balance comes from three sources. USPS’s annual prefunding payments have accounted for $17.9 billion, or 39 percent, of the PSRHBF balance as of September 30, 2012. The remaining balance consists of about $20 billion transferred from USPS’s excess CSRS funds (referred to as “start-up funds” in figure 3 below) when the PSRHBF was created in 2007 and approximately $7.8 billion in earned interest (see figure 3). Because of USPS’s financial difficulties, however, USPS has not made all of its required prefunding payments. Under PAEA, USPS is still responsible for contributing an additional $33.9 billion to the PSRHBF by fiscal year 2017 as shown in table 1, including $11.1 billion that USPS has defaulted on over the past 2 years. Originally due at the end of fiscal year 2011, USPS’s $5.5 billion required retiree health prefunding payment was delayed until August 1, 2012. USPS missed that payment as well as the $5.6 billion that was due by September 30, 2012. 
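The relationships among the rounded figures cited above can be confirmed with simple arithmetic. The following sketch (in Python, used here only as a calculator; small differences from the reported totals are due to rounding) reproduces the fund balance, the unfunded liability, and the funded percentage:

```python
# Cross-check of the PSRHBF figures cited above (fiscal year-end 2012).
# All values in billions of dollars, rounded as reported.
prefunding_payments = 17.9   # USPS annual prefunding payments through FY2012
start_up_funds = 20.0        # transferred from excess CSRS funds in 2007
earned_interest = 7.8        # interest earned by the fund

fund_balance = prefunding_payments + start_up_funds + earned_interest
total_liability = 94.0       # OPM's estimate of the total liability

unfunded_liability = total_liability - fund_balance
funded_percentage = 100 * fund_balance / total_liability

print(f"PSRHBF balance:     ~${fund_balance:.0f} billion")
print(f"Unfunded liability: ~${unfunded_liability:.0f} billion")
print(f"Funded percentage:  ~{funded_percentage:.0f} percent")
```

The three sources sum to about $46 billion, leaving an unfunded liability of about $48 billion and a funded percentage of about 49 percent, consistent with the amounts reported above.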
While the PSRHBF balance covered about 49 percent of USPS’s retiree health benefit liability at fiscal year-end 2012, USPS’s deteriorating financial outlook will make it difficult under current requirements for USPS to continue prefunding the remaining unfunded liability in the short term, and possibly to continue funding the remaining unfunded liability over the next several decades, as required under PAEA. We considered current law (PAEA) requirements against five alternative approaches for funding the costs of retiree health benefits, each of which involves tradeoffs that could impact USPS’s short-term cash flow, its future financial condition, different generations of postal ratepayers, and over a million postal employees and retirees. We compared the current law prefunding requirements with approaches that have been proposed in (1) a bill passed by the House of Representatives’ Committee on Oversight and Government Reform (“House Bill”), (2) the President’s fiscal year 2012 budget request (“Administration Approach”), and (3) a bill passed by the Senate (“Senate Bill”). In addition, some postal stakeholders have argued that prefunding is unnecessary or inadvisable altogether, so we also examined the effects of implementing two variations on a “Pay-as-You-Go Approach.” We obtained data and projections from USPS and OPM, and built on this information by performing additional calculations and projections. Methodology and assumptions are presented in more detail in appendix I. Under current law, USPS is required to make fixed prefunding payments through fiscal year 2016, in addition to making its share of premium payments for existing retirees and beneficiaries, which OPM has estimated will rise from about $2.5 billion to about $3.8 billion per year between fiscal year 2011 and fiscal year 2016. 
Beginning in fiscal year 2017, the current law switches to an “actuarial approach” for the remaining funding, under which USPS’s share of premium payments for existing retirees and beneficiaries is paid from the PSRHBF rather than by USPS, and USPS makes annual payments to the PSRHBF consisting of two components: 1. the actuarially determined cost of future benefits attributable to employee service during the fiscal year (known as the annual “normal cost”), and 2. the actuarially determined amount that, as calculated by OPM, would be projected to fully fund the remaining unfunded liability over an amortization period ending in the later of fiscal year 2056 or 15 years subsequent to the then-current fiscal year. Current law requires OPM to base its actuarial calculations of prefunding requirements on the actuarial assumptions used by OPM for its financial reporting. We will discuss the relevance of this current law assumption basis later in this report. As discussed earlier, USPS did not make the required 2011 and 2012 payments to the PSRHBF, totaling $11.1 billion. We modeled a modified version of current law assuming that these missed payments are eliminated by legislation and that the current law payment schedule resumes with the payment of $5.6 billion due at the end of fiscal year 2013. We refer to this schedule as “Modified Current Law Approach” in our presentation of results. The three alternative prefunding approaches we examined differ from current law in the following respects. The House Bill (H.R. 2309) reduces the fixed payment due at the end of fiscal year 2011 from $5.5 billion to $1.0 billion, making up the difference in higher fixed payments in fiscal year 2015 and fiscal year 2016. Starting in fiscal year 2017, the House Bill’s actuarial approach for determining prefunding is the same as current law. 
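The amortization component of the actuarial approach described above is a standard level-payment calculation. The sketch below illustrates only the mechanics; the 5 percent interest rate and the dollar amounts are illustrative assumptions for the example, not figures used by OPM or reported here:

```python
def level_amortization_payment(unfunded_liability, annual_rate, years):
    """Level annual (end-of-year) payment that pays off an unfunded
    liability over a fixed period at a constant assumed interest rate
    (the standard annuity amortization formula)."""
    if annual_rate == 0:
        return unfunded_liability / years
    return unfunded_liability * annual_rate / (1 - (1 + annual_rate) ** -years)

# Illustrative only: a $48 billion unfunded liability amortized over the
# 40 years from 2017 through 2056 at an assumed 5 percent interest rate.
payment = level_amortization_payment(48.0, 0.05, 40)
print(f"Illustrative amortization payment: ~${payment:.1f} billion per year")
```

A longer amortization period or a higher assumed interest rate lowers the annual payment, which is why the choice of amortization schedule and assumption basis matters so much to the comparisons that follow.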
As with our modeling of current law, because the 2011 and 2012 payments have already been missed, we modeled a modified version of the House Bill in which the House Bill’s 2011 and 2012 payments are eliminated by legislation and the House Bill’s payment schedule commences in fiscal year 2013. We refer to this schedule as “Modified House Approach” in our presentation of results. The Administration Approach restructures and generally reduces required fixed prefunding payments in each fiscal year from 2011 through 2016. It also calls for USPS’s share of premium payments for existing retirees and beneficiaries to begin to be paid from the PSRHBF right away, rather than beginning in fiscal year 2017. As a result, total USPS payments prior to fiscal year 2017 (prefunding plus any required payment of premiums) are significantly lower under the Administration Approach than under current law or the House Bill (and, consequently, would be somewhat greater after fiscal year 2017 to make up for this). Starting in fiscal year 2017, the Administration’s actuarial approach for determining prefunding is the same as current law and the House Bill. As with current law and the House Bill, we modeled a “Modified Administration Approach” that eliminates its 2011 and 2012 payments by legislation and commences payment in 2013. The Senate Bill (S. 1789) differs from current law, the House Bill, and the Administration Approach in three key aspects. First, the Senate Bill eliminates the fixed prefunding payments and begins an actuarial approach to prefunding right away (which we modeled to begin at the start of fiscal year 2013). Second, the Senate Bill uses a target of funding 80 percent of the liability, instead of the 100 percent funding targeted by current law and the other approaches. Third, the Senate Bill directs OPM to use actuarial assumptions consistent with those used by OPM to determine funding for USPS’s share of liabilities in the federal civilian pension programs. 
These pension-funding assumptions are selected by OPM, with advice from an independent Board of Actuaries. As discussed further in the next section and later in this report, this assumption basis specified in the Senate Bill differs from the assumption basis specified in current law and retained in the House Bill and Administration proposal. We refer to the Senate Bill provisions as “Modified Senate Approach” in our presentation of results. Key features of current law and these alternative approaches are summarized in table 2. In addition, we also modeled two variations on a Pay-as-You-Go Approach, discussed in a subsequent section of this report. Our analysis shows that, over the short-term period ending in fiscal year 2020, the Modified Current Law and House Approaches would decrease USPS’s unfunded liability for retiree health benefits, while the Modified Administration and Senate Approaches would increase the unfunded liability. This is mainly the result of significantly higher contributions under the Modified Current Law and House Approaches in fiscal years 2013 through 2016. Over the longer term through 2040, there are significant differences in the projected unfunded liability among the various approaches. The Modified Current Law, House, and Administration Approaches are projected to eliminate most of the unfunded liability over that period; the Modified Senate Approach is projected to leave a larger portion of the liability still unfunded because of its lower funding target, while the two Pay-as-You-Go Approaches we examined would lead to very large unfunded liabilities. It should be understood that projections of this type, especially longer term projections, contain a significant degree of uncertainty. 
Nonetheless, given the magnitude of the retiree health benefit liabilities and the importance of being able to pay for these benefits, reasonable projections of the associated costs and liabilities provide essential information for enabling responsible stewardship of USPS resources. Our comparison of the four prefunding approaches was complicated by the fact that the Senate Bill calls for selecting assumptions based on different criteria than current law, the House Bill, and the Administration Approach. Assumptions represent estimates of future economic and demographic trends, and while initial assumptions may differ, only one scenario can actually occur, and assumptions generally change over time to reflect emerging experience. Accordingly, to compare the four prefunding approaches, we modeled them under uniform assumptions—first using the current law assumption basis, presented in this section, and then using the Senate Bill assumption basis, presented in appendix II. We also discuss the underlying differences between these two assumption bases, and present some comparative results, in the section below on “Sensitivity to Assumptions.” Our overall findings were not materially affected by the choice between these two assumption bases. For a short-term outlook, we projected USPS’s required payments (prefunding contributions as well as premium payments for current retirees, when applicable) and the amount of unfunded liability in fiscal years 2013 through 2020. Total payments under the Modified Current Law and House Approaches would be significantly greater than under the Modified Administration and Senate Approaches. For example, estimated total payments over this period under the Modified Current Law Approach would be 48 percent greater than under the Modified Senate Approach. 
In particular, payments over the 8 years would total about $58 billion under Modified Current Law and $61 billion under the Modified House Approach, versus $44 billion under the Modified Administration Approach and $39 billion under the Modified Senate Approach. Higher payments mean a lower unfunded liability at the end of the period, and vice versa. Thus, at the end of fiscal year 2020, the Modified Current Law and House Approaches are projected to result in unfunded liabilities of $39 billion and $35 billion, respectively, whereas the Modified Administration and Senate Approaches are projected to result in unfunded liabilities of $59 billion and $64 billion respectively. Thus, in the short term through fiscal year 2020, the unfunded liability is projected to decrease under the Modified Current Law and House Approaches and increase under the Modified Administration and Senate Approaches (see table 3). We extended our projection of USPS’s required payments and the amount of unfunded liability to fiscal year 2040. While the uncertainty of a projection increases with the length of the projection period, a longer projection period allows potential longer-term implications of different approaches to emerge—effects that might not be observable under a short-term projection. Since dollar amounts in fiscal year 2040 are not fully comparable to dollar amounts today, it is helpful to “normalize” such long-term projections to make the results more comparable across time periods. In table 4 we show projected payments and unfunded liability in fiscal year 2040 in three different ways: (1) as nominal (unadjusted) dollar amounts; (2) in constant (inflation-adjusted) 2012 dollars; and (3) as a percentage of USPS’s projected 2040 modified employee compensation costs (which for convenience we refer to as “compensation”). We also show the projected funded percentage—or the ratio of PSRHBF assets to USPS’s liability for retiree health benefits. 
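The normalizations used in table 4 can be illustrated with a short sketch. The inflation rate and the compensation figure below are illustrative assumptions chosen only to show the arithmetic; they are not values from this report:

```python
# Three ways of presenting a projected FY2040 payment, as in table 4:
# nominal dollars, constant 2012 dollars, and a share of compensation.
# The inflation rate and compensation figure are illustrative assumptions.
def to_constant_dollars(nominal, inflation_rate, years):
    """Deflate a nominal future amount back to base-year dollars."""
    return nominal / (1 + inflation_rate) ** years

nominal_2040 = 12.0        # illustrative nominal payment, $ billions
inflation = 0.025          # assumed annual inflation rate
compensation_2040 = 80.0   # illustrative projected compensation, $ billions

constant_2012 = to_constant_dollars(nominal_2040, inflation, 2040 - 2012)
pct_of_compensation = 100 * nominal_2040 / compensation_2040

print(f"Nominal:               ${nominal_2040:.1f} billion")
print(f"Constant 2012 dollars: ${constant_2012:.1f} billion")
print(f"Share of compensation: {pct_of_compensation:.0f} percent")
```

Under these illustrative assumptions, a $12 billion nominal payment in fiscal year 2040 corresponds to roughly $6 billion in constant 2012 dollars, which is broadly consistent with the $5.9 billion to $6.7 billion constant-dollar range reported in table 4 for the $11.5 billion to $12.9 billion nominal payments.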
Showing payments and unfunded liability amounts as a percentage of compensation provides a sense of the size of USPS’s retiree health care costs relative to the size of USPS’s operations. Projecting compensation does require an additional assumption regarding compensation growth and therefore introduces additional uncertainty into the projection. Nonetheless, increases over time in projected payments and unfunded liabilities as a percentage of compensation can be indicative of a likely greater strain on USPS’s resources. For example, unfunded liability as a percentage of compensation will rise to the extent that USPS is operating with a reduced workforce. As seen in table 4, in comparing the projections under Modified Current Law and the three alternative modified approaches, the differences in the projected payment required in fiscal year 2040 are not large, with the dollar amount of the projected payment ranging from $11.5 billion to $12.9 billion across the four approaches ($5.9 billion to $6.7 billion in constant dollars). The more significant differences in the annual payments, across the four approaches, occur in the short-term period covering fiscal years 2013 through 2016. There are, however, significant differences in the projected unfunded liability in fiscal year 2040. The Modified Senate Approach results in a projected unfunded liability of about $67 billion; this compares to unfunded liabilities of $22 billion under the Modified Administration Approach, $9 billion under Modified Current Law, and $7 billion under the Modified House Approach. This projected unfunded liability under the Modified Senate Approach amounts to $34 billion in constant dollars and 83 percent of projected annual compensation. For the Modified Administration Approach, the projected unfunded liability is $11 billion in constant dollars and 28 percent of projected compensation. The corresponding results under the Modified Current Law and House Approaches are significantly smaller. 
For example, as a percentage of projected compensation, the projected unfunded liability is 12 percent and 8 percent under the Modified Current Law and House Approaches, respectively. A primary reason for these differences is that the Senate Approach uses a target funded percentage of 80 percent, whereas the other three approaches use a target of 100 percent. By fiscal year 2040, the funded percentage is projected to have reached 73 percent under the Modified Senate Approach, versus 91 percent under the Modified Administration Approach, 96 percent under Modified Current Law, and 97 percent under the Modified House Approach. It is important to note that reaching a 100 percent funded percentage—that is, the unfunded liability is fully paid off and PSRHBF assets equal the liability—would not mean that USPS would have no further prefunding payments to make. USPS would continue to have to pay the “normal cost” each year into the fund, reduced by amortization of any surplus that might develop if experience is more favorable than assumed, or increased if less favorable such that the funded percentage falls back under 100 percent. As mentioned earlier, the normal cost is the actuarially determined cost of future benefits attributable to employee service during the fiscal year, a cost that increases the liability each year. Failure to continue to make such contributions into the fund each year would mean a failure to pay for the cost of then-current employee service; the likely result would be PSRHBF assets again falling short of the liability, thereby creating a new unfunded liability. Under any of the approaches modeled, by fiscal year 2040, roughly 80 to 90 percent of USPS’s required payment would consist of this normal cost. Combining the short-term and long-term projection results, figure 5 illustrates projected annual payments, as a percentage of projected compensation, for each fiscal year from 2013 through 2040.
The largest differences among the four approaches occur from fiscal year 2013 through fiscal year 2016. Under the Modified House Approach, estimated required payments are in excess of 20 percent of compensation in all 4 of these years, climbing to 28 and 29 percent of compensation in fiscal year 2015 and fiscal year 2016. Under Modified Current Law, estimated required payments are also in excess of 20 percent of compensation in each of these years, peaking at 23 percent of compensation in fiscal year 2016. In contrast, under the Modified Administration Approach, estimated required payments start at just 3 percent of compensation in fiscal year 2013 before climbing to 13 to 14 percent of compensation in the ensuing three years. Under the Modified Senate Approach, estimated required payments round to a steady 11 percent of compensation in each of these first 4 years. From fiscal year 2017 through fiscal year 2040, the estimated required contributions are closer together across the four approaches, ranging from 11 to 17 percent of compensation. The projected payments are somewhat higher under the Modified Administration Approach than under the Modified Current Law or House Approaches, in order to make up for the differences in payments from fiscal year 2013 through fiscal year 2016. The projected payments under the Modified Senate Approach follow a slightly different trajectory because of the approach’s 80 percent funding target, but fall within the same range. Figure 6 illustrates the projected unfunded liability, as a percentage of projected USPS annual compensation costs, as of the end of each fiscal year from 2012 through 2040. For each approach modeled, the projection starts at the estimated unfunded liability of 108 percent of compensation as of the end of fiscal year 2012. 
In the short term, the unfunded liability as a percentage of compensation trends down under the Modified House Approach and Modified Current Law Approach because of their relatively high required early payments, while the unfunded liability as a percentage of compensation trends upward under the Modified Administration Approach and, for a somewhat longer period, under the Modified Senate Approach. In the longer term, the unfunded liability as a percentage of compensation trends down under all four approaches. By the end of the projection period in fiscal year 2040, the vast majority of the unfunded liability, measured as a percentage of compensation, is projected to be eliminated under the Modified House and Modified Current Law Approaches, while a smaller majority of it is projected to be eliminated under the Modified Administration Approach. A larger unfunded liability as a percentage of compensation is projected to be retained under the Modified Senate Approach because of its 80 percent funding target. Figure 7 illustrates the funding gap by another measure, the funded ratio—that is, the percentage of the liability that is covered by PSRHBF assets. This figure illustrates how a divergence emerges from fiscal year 2013 to fiscal year 2016 between the Modified House and Current Law Approaches on the one hand, and the Modified Administration and Senate Approaches on the other hand. By the end of the projection period, the funded ratio is projected to be just short of the 100 percent target under the Modified House and Modified Current Law Approaches, slightly further away from 100 percent under the Modified Administration Approach, and, under the Modified Senate Approach, approaching its lower 80 percent funded ratio target.
The annual prefunding payments that have been made since prefunding commenced in 2007—and that would continue to be made under any of the four prefunding approaches examined here—can be broken down into two components: a portion to pay for the cost of future benefits attributable to the current year of employees’ service (the “normal cost”), and the remainder, which pays down part of the unfunded liability. One of the rationales for prefunding is to pay for benefits as they are earned—during the working years—rather than later after the workers have retired and are no longer generating revenue for the enterprise. Further, this serves the purpose of assigning full costs of current employee compensation to current ratepayers, rather than to future ratepayers. A complicating factor is what might be called the “legacy” unfunded liability, i.e., the existing unfunded liability that conceptually should have been paid by ratepayers in prior years but was not. There is no obvious answer as to who should be responsible for the legacy unfunded liability, which ultimately comes down to a policy decision. The approach in PAEA spreads the cost of USPS’s legacy unfunded liability over 50-plus years of then-future postal ratepayers. To illustrate the portion of prefunding requirements that are attributable to legacy costs, we found that across the four different prefunding approaches that we examined, legacy costs would account for anywhere from 39 percent to 53 percent of the prefunding requirement in fiscal year 2017, tapering down to anywhere from 8 percent to 18 percent by fiscal year 2040. Measurements of actuarial costs and liabilities, as well as projections of such measures into the future, are subject to inherent uncertainty, and depend on a combination of economic and demographic assumptions as to future experience.
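The two-component payment structure described above can be sketched with the standard level-payment amortization formula. The dollar figures, the 4.9 percent rate, and the 40-year period in this example are illustrative assumptions, not the schedules in PAEA or the pending bills.

```python
def amortization_payment(unfunded_billions, rate, years):
    """Level annual payment that retires an unfunded liability over `years`
    at discount rate `rate` (standard annuity-certain formula)."""
    return unfunded_billions * rate / (1 - (1 + rate) ** -years)

def prefunding_payment(normal_cost_billions, unfunded_billions, rate, years):
    """Annual prefunding payment = normal cost for the year's service
    plus an amortization installment on the unfunded liability."""
    return normal_cost_billions + amortization_payment(unfunded_billions, rate, years)

# Illustrative only: amortizing a $46B unfunded liability over 40 years at 4.9%
# takes roughly $2.6B per year, paid on top of the annual normal cost.
installment = amortization_payment(46.0, 0.049, 40)
```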
Current law requires OPM to determine the value of USPS’s retiree health benefit liability based on actuarial assumptions that are consistent with those used by OPM for its financial reporting of liabilities for federal employee benefits. These assumptions are to be used to determine USPS’s funding requirements beginning in fiscal year 2017, when current law switches from a fixed-payment prefunding requirement to actuarially determined prefunding requirements. When the current law was enacted, this approach to selecting actuarial assumptions was consistent with the approach used by OPM for determining funding requirements for USPS’s participation in the CSRS and FERS pension programs. In 2008, the Federal Accounting Standards Advisory Board (FASAB), which promulgates financial reporting standards for the federal government, issued Statement of Federal Financial Accounting Standards No. 33 (SFFAS 33), which, beginning in 2010, specified particular, new methodologies for the selection of economic assumptions for valuing various post-employment benefits for financial reporting purposes. As a result, the assumptions used by OPM for financial reporting for federal employee benefits—and by extension under the current law, for determining USPS’s future prefunding requirements for retiree health care benefits—became different from the assumptions used by OPM to determine USPS’s funding requirements for CSRS and FERS. The House and Administration Approaches retain the current law assumption basis for determining USPS’s prefunding requirements; the Senate Approach would switch the determination of USPS’s prefunding requirements to assumptions consistent with those now used for USPS’s funding requirements for CSRS and FERS. The particular assumptions that differ are with respect to the interest rate (also known as the discount rate), the general inflation assumption, and the medical inflation (also known as the “trend”) assumption. 
Table 5 shows the differences in these assumptions for the September 30, 2011, actuarial valuations performed by OPM, which served as the basis for our projections. Under SFFAS 33, the discount rate assumption should reflect average historical interest rates, over the prior 5 years or longer, on marketable Treasury securities with maturities consistent with the cash flows being discounted. The number of historical rates used in the calculation of this historical average should be consistent from year to year. OPM uses a 10-year historical averaging period. Further, the discount rate, the inflation assumption, and other economic assumptions should be consistent with one another. OPM determines the general inflation assumption under SFFAS 33 using the same 10-year historical averaging period that it uses in determining the discount rate. The selection of assumptions is also guided by relevant Actuarial Standards of Practice, which are promulgated by the Actuarial Standards Board. In contrast to the current law assumption basis that is now tied to a historical averaging period, the assumptions for determining USPS’s funding requirements for CSRS and FERS represent OPM’s estimate of future, long-term experience, informed by advice from an independent Board of Actuaries, and similarly guided by relevant Actuarial Standards of Practice. These standards too require that economic assumptions be consistent with one another. The relationship between the two assumption bases illustrated in table 5 is not static, so that the gap between the two assumption bases, and even which assumption base has higher rates, could change over time. It is important to note that assumptions that are tied to historical averages—as is the case under the current law assumption basis since the promulgation of SFFAS 33—can potentially diverge significantly from either current economic circumstances or from the current long-term economic outlook.
The assumption criteria in SFFAS 33 were designed to accomplish financial reporting objectives rather than funding objectives. In selecting the medical inflation assumption, OPM relies on a model developed by the Society of Actuaries. This model ties medical inflation to the general inflation assumption (among other factors), so that a higher expected general inflation rate implies higher expected medical inflation. As mentioned earlier, we modeled all four modified prefunding approaches in two ways: first, as if they all used the current law assumption basis, and second, as if they all used the Senate bill assumption basis. Our findings and conclusions are not materially different under the two different assumption bases. Figure 8 compares the dollar amount of estimated USPS payments, in each fiscal year from 2013 through 2020, under the Modified Senate Approach to prefunding, using both the current law assumption basis and the Senate bill assumption basis. These dollar payment amounts differ by just 2 percent in aggregate over the period, and by not more than 4 percent in any particular year. Figure 9 compares estimated USPS payments under the Modified Senate Prefunding Approach, over the entire projection period from fiscal years 2013 through 2040, as a percentage of projected USPS annual compensation costs, again under both the current law assumption basis and the Senate bill assumption basis. The difference in the average payment percentage is just 0.7 percentage point, with the difference never exceeding 1 percentage point in any year. Because of the closeness of the results using the two assumption bases, we have chosen to present numerical results across all four prefunding approaches using the current law assumption basis in the main body of this report. Appendix II contains comparable numerical results using the Senate bill assumption basis. 
The primary reason for similar results under the two assumption bases is that the effects of differences in particular assumptions are offsetting to a certain extent. For example, the discount rate of 5.75 percent under the Senate bill assumption basis is more optimistic than the discount rate of 4.90 percent under the current law assumption basis. However, a higher discount rate suggests higher inflation and medical inflation; the higher medical inflation offsets much of the benefit of the higher discount rate. The two different inflation assumptions were also incorporated into our projection of USPS’s annual compensation costs, which we extended based on a 10-year forecast of workforce and compensation provided to us by USPS. More information on these data and projections is provided in appendix I. We did not otherwise analyze variations in the workforce and compensation assumptions, as a more extensive analysis of assumption variations was beyond the scope of our study. As USPS notes correctly in its fiscal year 2011 Form 10-K report, “Because calculation of this liability involves several areas of judgment, estimates of the liability could vary significantly depending on the assumptions used.” In comparing the effects of the current law assumption basis versus the Senate bill assumption basis, we noted that differences in the discount rate and medical inflation assumptions have offsetting effects, so that the aggregate difference between the two assumption bases is not large. If, however, one of the assumptions were to change without an offsetting change in another assumption, the impact would be larger. OPM provided information on the sensitivity of the liability to variation in the medical trend alone, holding other assumptions constant. 
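Because the unfunded liability is the difference between the liability and PSRHBF assets, a given percentage change in the liability is leveraged into a larger percentage change in the unfunded liability. A minimal arithmetic check, using the approximate September 30, 2011, figures (a $90 billion liability against $44 billion of assets):

```python
liability, assets = 90.0, 44.0          # $ billions, fiscal year-end 2011
base = liability - assets               # unfunded liability: $46B

higher = liability * 1.16 - assets      # liability 16% higher -> unfunded ~$60B
lower = liability * 0.87 - assets       # liability 13% lower  -> unfunded ~$34B

pct_up = (higher - base) / base         # ~31% increase in the unfunded liability
pct_down = (base - lower) / base        # ~25-26% decrease
```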
OPM’s most recent measure of USPS’s liability for retiree health benefits would have been 16 percent higher if the medical trend assumption had been one percentage point higher in all years (i.e., in table 5 above, 6.5 percent instead of 5.5 percent in the first year, etc.), and would have been 13 percent lower if the medical trend assumption had been one percentage point lower. Moreover, because the unfunded liability is equal to the difference between the liability itself and the amount of assets, a given percentage change in the liability can produce a larger percentage change in the unfunded liability. If the 2011 liability had been 16 percent higher, the unfunded liability would have been 31 percent higher; if the liability had been 13 percent lower, the unfunded liability would have been 26 percent lower. Thus, the $46 billion unfunded liability as of September 30, 2011, varies from $34 billion to $60 billion over this range of alternative assumptions. See table 6 below. Arguments have been made that requiring USPS to prefund its retiree health care benefits is unnecessary, unfair, or inadvisable, so we also examined the effects of a Pay-as-You-Go Approach. Under pay-as-you-go funding, each year USPS would only pay its share of premium payments for then-existing retirees and beneficiaries—there would be no prefunding. Given that money has already been prefunded in the PSRHBF, we first modeled a pay-as-you-go funding approach in which the fund would be drawn upon to pay USPS’s share of premium payments for as long as possible. Under this approach, no additional contributions would be made to the fund, the fund would grow with interest, and USPS’s share of premium payments for retirees and beneficiaries would be paid out of the fund until the fund was exhausted. Once the fund was exhausted, USPS would pay these premiums directly as they became due.
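The draw-down mechanics just described can be expressed as a simple year-by-year simulation. The premium stream below is hypothetical (a $3 billion first-year premium growing 7 percent annually, with a 4.9 percent interest assumption) and is chosen only to illustrate the mechanics, not to reproduce OPM's projections; under these illustrative inputs the fund happens to last 14 years.

```python
def years_until_exhausted(fund_billions, interest, premiums):
    """Credit a year's interest to the fund, then pay that year's premiums
    from it; return the year in which the fund is exhausted (None if never)."""
    for year, premium in enumerate(premiums, start=1):
        fund_billions = fund_billions * (1 + interest) - premium
        if fund_billions <= 0:
            return year
    return None

# Hypothetical inputs: $44B starting fund, 4.9% interest, premiums starting
# at $3B and growing 7% per year.
premiums = [3.0 * 1.07 ** t for t in range(30)]
exhaustion_year = years_until_exhausted(44.0, 0.049, premiums)
```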
Our projections show that, under either of the two sets of assumptions (the current law and Senate bill assumption bases), the PSRHBF would become exhausted in 14 years, in 2026. USPS would have zero reported costs for retiree health benefits until then. Beginning in 2026, USPS would begin paying its share of premium payments. By 2040, under the current law assumption basis, this annual cost is projected to be about $13 billion, not much different than the annual prefunding cost in fiscal year 2040 under the four different prefunding approaches. The big difference would be in the unfunded liability. Under this Pay-as-You-Go Approach, the unfunded liability in fiscal year 2040 would be about $250 billion, which would be about $130 billion in 2012 dollars, and about 310 percent of USPS’s projected annual compensation cost. By comparison, under the Modified Senate prefunding approach, which produces the largest unfunded liability of the four prefunding approaches, the unfunded liability in fiscal year 2040 would be about 85 percent of projected annual compensation cost. In summary, once the trust fund became exhausted, annual pay-as-you-go payments would not become significantly more onerous than annual prefunding payments, at least through the end of our projection period in fiscal year 2040. However, the Pay-as-You-Go Approach would produce a vastly bigger unfunded liability—which could eventually require an escalation of postal rates or reduction in costs. We examined a second variation of pay-as-you-go funding, an approach that the USPS OIG analyzed and reported on in February 2012. Under this approach, USPS would stop making prefunding payments and would pay its share of premium payments for retirees and beneficiaries as they become due. The existing fund would be left to grow with interest, with no other cash inflow or outflow. The intention would be for this to continue only until USPS’s liability was fully funded.
The USPS OIG has informally referred to this approach as the “Seal and Grow” Approach. The USPS OIG estimated that the fund would grow from $44 billion (its September 30, 2011, level) to $90 billion in 21 years. The USPS OIG did not estimate the liability or unfunded liability in 21 years, but noted that while the liability is not a static amount, and has risen over time historically, it had not changed significantly over the prior 3 years, going from $87 billion at fiscal year-end 2009 to $91 billion at fiscal year-end 2010 to $90 billion at fiscal year-end 2011. Some have concluded from this analysis that USPS’s unfunded liability of $46 billion would be eliminated in 21 years by adopting this approach. However, our projections of the unfunded liability, which incorporate OPM’s projections of the liability itself, show that the liability, in fact, would increase, resulting in a significant increase in the unfunded liability rather than its elimination. Specifically, we project that the unfunded liability would grow from $46 billion at fiscal year-end 2011 to $86 billion at fiscal year-end 2032 under this approach. The $86 billion estimate is equal to $53 billion in 2012 dollars and 139 percent of fiscal year 2032 compensation (up from 96 percent for fiscal year 2011). The USPS OIG’s projection of assets (from $44 billion to $90 billion over 21 years) represents a 3.5 percent annual return over this period. Under our projection, using the current law assumption basis, assets grow from $44 billion to $120 billion over this period, at the assumed return of 4.9 percent, but the liability grows from $90 billion to $206 billion, or at an average rate of 4.0 percent per year. This projected liability growth reflects the net effect of accretions for interest, accretions for normal cost (with a reduced workforce), and reductions as premium payments are made, thereby discharging a portion of the liability.
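The growth rates cited in this comparison can be checked with basic compound-interest arithmetic, treating each projection as a single flat annual rate (a simplification of the underlying year-by-year projections):

```python
def implied_annual_rate(start, end, years):
    """Flat annual growth rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1.0 / years) - 1.0

def project(start, rate, years):
    """Compound `start` forward at a flat annual `rate` for `years`."""
    return start * (1.0 + rate) ** years

# USPS OIG asset projection: $44B to $90B over 21 years implies ~3.5% per year.
oig_rate = implied_annual_rate(44.0, 90.0, 21)
# GAO projection: assets compound at the assumed 4.9% return to ~$120B, while
# the liability grows at ~4.0% per year from $90B to roughly $205B (the report's
# $206B reflects a year-by-year projection rather than a flat rate).
assets_2032 = project(44.0, 0.049, 21)
liability_2032 = project(90.0, 0.040, 21)
```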
The projected liability would have to be 42 percent lower than projected for the unfunded liability to disappear by 2032. For this to occur (in the absence of cuts to benefits), future experience would have to be much more favorable than predicted by the assumptions. Nonetheless, under this Seal and Grow Approach the funded percentage is projected to improve over time. Because premium payments are projected to exceed normal cost for most of the projection period, the liability is projected to grow at a slower rate than assets (as noted in the preceding paragraph). As a result, the liability is projected to be 70 percent funded by 2040, close to the 73 percent projected funded percentage under the Modified Senate Approach. Like the Modified Senate Approach, the Seal and Grow Approach is projected to result in a significant improvement in the funded percentage over the projection period, while still leaving a substantially larger unfunded liability relative to the Modified Current Law, House, and Administration Approaches. Moreover, USPS’s payments under the Seal and Grow Approach would be more backloaded than under the Modified Senate Approach—with lower payments in the short term and higher payments later—making it more affordable in the short term but resulting in higher estimated unfunded liabilities in the short term as well. To assist Congress in considering the various funding approaches, we identified some factors to consider in assessing what would constitute reasonable short-term and long-term funding requirements. We also examined the prefunding requirements of other organizations that offer retiree health benefits to their employees. Given that USPS is intended to be a self-sustaining entity funded almost entirely by postal revenue, we have previously stated that USPS should prefund its retiree health benefit liability to the maximum extent that its finances permit.
The following considerations should be taken into account when assessing the various funding approaches for USPS. Consideration of whether to prefund retiree health benefits includes the associated consequences of the potential inability to fund the remaining unfunded liability or keep up with annual premium payments. In general, rationales for prefunding post-retirement benefits for any enterprise, whether for pension benefits or retiree medical benefits, can include the following:

- Achieving an equitable allocation of cost over time by paying for retirement benefits during the employees’ working years, when such benefits are earned. For USPS, the relevant cost allocation is between current and future postal ratepayers. The rationale is to have current ratepayers pay for the full cost of compensation for current employees, including the portion of such current compensation that is not paid until these current employees are retired. However, as noted earlier, an additional consideration is the “legacy” unfunded liability that was not paid by ratepayers in prior years. The conceptual rationale for prefunding does not answer the question of who should be responsible for a legacy unfunded liability.

- Protecting the future viability of an enterprise by not saddling it with bills later after employees have already retired. In the case of USPS, this consideration is complicated by the organization’s financial condition.

- Providing greater benefit security to employees, retired employees, and their beneficiaries. Funded benefits protect against an inability to make payments later on, and can also make the promised benefits less vulnerable to cuts. In the private sector, failure to prefund retiree health benefits may have contributed to private employers terminating or reducing such benefits. In the state and local government sector, large unfunded liabilities for both retiree health and pension benefits have led to pressure and actions to trim the levels of these benefits.
Others have contended that the mere requirement to account for the cost of these benefits in employers’ financial reporting has led to benefits being cut. While an analysis of the cause of retiree health benefit cuts in other sectors is beyond the scope of our research, failure to prefund these benefits is a potential benefit security concern.

- Providing security to any other party that might become responsible for part of the liability in the event of an enterprise’s inability to pay for the remainder of the unfunded liability. For example, the Pension Benefit Guaranty Corporation is responsible for backing up private sector pension benefits when companies are unable to do so. According to the OPM OIG, the consequences if USPS could not pay for its retiree health benefits are unclear.

The effect of trade-offs among the different approaches on a number of issues would need to be considered, including trade-offs affecting:

- USPS’s financial condition. Protecting the future viability of USPS by not overwhelming it with bills and unfunded liabilities for the cost of employee benefits after these employees have already retired is complicated by the organization’s immediate cash flow challenges, including having reached the maximum of its borrowing authority. Prefunding payments under current law have contributed about $21 billion toward USPS’s $25 billion of net losses over the past 5 years. If USPS continues to experience operational losses even before factoring in prefunding requirements, prefunding would add to such losses. As such, USPS would need to find larger cuts in operational costs now in order to have the cash to make its short-term prefunding payments. On the other hand, to the extent short-term prefunding payments are postponed, greater payments would be required later, supported by a smaller base of mail volume, with price caps further limiting revenue.
Such a scenario would produce even greater pressure for cuts in operational costs later as well as raise concerns about USPS’s ability to make prefunding payments, when unfunded liabilities would be greater because of the deferral of prefunding payments. USPS’s OIG has stated that as an alternative to additional prefunding, USPS’s extensive real estate holdings could provide collateral for the remaining unfunded liability. However, USPS has stated that it does not believe that USPS-occupied real estate would be a suitable asset within the PSRHBF because employer-occupied real estate cannot be readily sold to provide cash when needed to pay benefits. In addition, we would note that in the event of USPS’s being unable to fund its liabilities, USPS might have other debts and obligations in addition to unfunded retiree health care liabilities for which any available real estate would be needed. Some comprehensive proposals to address USPS’s financial condition have included provisions to transfer USPS’s FERS pension surplus from the Civil Service Retirement and Disability Fund (CSRDF) to USPS; such a transfer could be viewed as a short-term source for some of the required PSRHBF prefunding payments. However, the most recent estimate of this surplus is significantly lower than the two prior estimates. We have previously reported on options and considerations with regard to this surplus. Use of any FERS surplus would not be a long-term solution to address USPS’s financial outlook and operational imbalances.

- Size of the annual payment and the unfunded liability. More near-term funding reduces payments and the amount of the unfunded liability later, while less near-term funding produces larger unfunded liabilities and requires higher funding payments later. The unfunded liability can also be viewed in a larger context. From fiscal years 2007 through 2010, USPS contributed a total of $17.9 billion to the PSRHBF. Over this same period, USPS increased its debt to the U.S.
Treasury from $2.1 billion at fiscal year-end 2006 to $12.0 billion at fiscal year-end 2010, an increase of $9.9 billion. Thus, from fiscal year-end 2006 to 2010, USPS made payments to the PSRHBF of $17.9 billion while borrowing an additional $9.9 billion from the U.S. Treasury.

- Allocation of costs between current and future postal ratepayers. More near-term funding assigns more cost to current postal ratepayers that is reflected in rates, while less near-term funding assigns more cost to future ratepayers. As noted above, a complicating factor is the existing unfunded liability, which conceptually should have been paid by prior ratepayers but was not. Instead, this legacy cost has been spread among current and future ratepayers since fiscal year 2007.

- Allocation of risks. Less prefunding now increases the risk that later some party(ies) could be called upon to pick up a greater share of the costs if USPS could not make its payments or pay off its unfunded liability. Another risk is that the level of employee pay and benefits may not be sustainable and could be reduced. As stated earlier, OPM’s OIG reported that the exact consequences of these risks are unclear. A consistent allocation of costs for pay and benefits earned during employees’ work years could provide greater benefit security to employees, retirees, and beneficiaries.

Another consideration with regard to the timing of prefunding payments is whether Congress wishes to continue requiring fixed prefunding contributions that are significantly in excess, through 2016, of actuarially determined amounts. The House Bill largely retains this Current Law Approach, while the Senate Bill and the Administration Approach would produce a more consistent funding pattern. The Senate Bill targets an 80 percent funding level while the other approaches target a 100 percent funding level.
The Senate committee report accompanying the Senate Bill stated that the committee set an 80 percent target funding level on the presumption that USPS, if necessary, had additional assets it could draw upon to meet its liabilities. As previously noted, USPS’s OIG reported that USPS’s extensive real estate holdings could provide collateral for the remaining unfunded liability, but we would note that in the event of USPS’s being unable to fund its liabilities, USPS might have other debts and obligations in addition to an unfunded retiree health benefit liability for which any available real estate would be needed. If an 80 percent funding target level were selected because of concerns about USPS’s ability to achieve a 100 percent target level within a particular time frame, an additional option could be to build in a schedule to achieve 100 percent funding in a subsequent time period after the 80 percent level is achieved. As discussed earlier, the issuance of SFFAS 33 had the effect of creating a divergence between the actuarial assumptions used in determining USPS’s funding requirements for PSRHBF and those used in determining its funding requirements for CSRS and FERS. Another consideration is whether Congress desires more uniform funding assumptions across these programs. As noted, the funding assumptions for PSRHBF under current law, which are retained in the House Bill and Administration Approach, are now, post-SFFAS 33, based on 10-year historical averages. Assumptions that are based on historical averages can potentially diverge significantly from either current economic circumstances or from the current long-term economic outlook. The assumption criteria in SFFAS 33 were designed to accomplish financial reporting objectives rather than funding objectives.
We also reviewed the prefunding requirements for other organizations that offer retiree health benefits to their employees: private sector entities, state and local governments, and other federal entities. Although other federal, state and local, and private sector entities generally are not required to prefund retiree health care benefits, a few do prefund at limited percentages of their total liability. However, most are required to recognize the future costs of these benefits in their financial reporting if they follow generally accepted accounting principles. Although recognizing the cost of retiree health benefits for financial reporting purposes is a separate issue from the question of whether to prefund these benefits, such reporting does enhance the transparency of the cost of these benefits. USPS accounts for these benefits using private-sector multiemployer accounting rules, under which USPS does not recognize the unfunded liability for these benefits on its balance sheet. In 2002, GAO suggested that USPS reconsider its method of accounting for these benefits. In addition, although prefunding is not required, a number of private, state, local, and federal entities have elected to prefund some percentage of their retiree health benefits. For example, Standard & Poor’s (S&P) reported that 126 of the 296 companies in the S&P 500 that offered “other post-employment benefits” (OPEB) prefunded some percentage of the associated liabilities, while the USPS OIG has reported that 38 percent of Fortune 1000 companies that offer retiree health care benefits prefund them, at a median funding level of 37 percent. Further, in November 2009, we found that 18 states and 13 of the 39 largest local governments had set aside at least a combined $25 billion in assets to cover their OPEB liabilities. Although the majority of federal civilian agencies do not prefund these benefits, a few small, civilian, federal agencies do so. 
In addition, the Department of Defense (DOD) prefunds its retiree health benefits for Medicare-eligible retirees and beneficiaries, with a 100 percent target funded percentage. This fund was started in 2002 in reaction to rapidly rising health care costs. The fund had assets of $166 billion as of September 30, 2010, which represented a funding level of 38 percent. DOD does not prefund its pre-Medicare-eligible retiree health benefits, although its independent Board of Actuaries has recommended that it consider prefunding these costs as well, in order to reflect the full costs of these future benefits and promote a better understanding of the program’s value. While private sector, state and local government, and other federal entities generally are not required to prefund these benefits, most are required to recognize the future costs of these benefits on an accrual basis as they are earned, rather than when they are paid, in their financial reporting. Standards governing financial reporting (i.e., accounting) are separate and apart, and under different jurisdiction, from any laws, regulations, or rules governing prefunding. In contrast to most other federal entities, USPS reports under private sector (FASB) accounting standards, and follows FASB’s multiemployer accounting rules, rather than FASB’s single-employer accounting rules, in reporting its participation in FEHBP. These multi-employer standards exempt employers from reporting the cost of these retirement benefits on an accrual basis. Instead, expense for a year is set equal to required cash payments—which currently for USPS means the sum of its required prefunding payment and its share of premium payments that it pays directly—while no liability is shown on the USPS balance sheet except for any required payments that have been missed (such as the missed fiscal year 2011 and 2012 prefunding payments). 
In contrast, if USPS were following FASB’s single-employer accounting standards, USPS would show a liability on its balance sheet for the entire unfunded liability, and expense for a year would be an actuarially determined accrual cost independent of whether USPS had to make a small or large prefunding payment for that year. In 2002, the Comptroller General wrote to the Postmaster General and, based on a reassessment of the applicability of multiemployer versus single-employer accounting standards to USPS, suggested that USPS reassess its accounting treatment of retiree health benefits, and consider accounting for its retiree health benefits on an accrual basis, meaning, to consider adopting the single-employer accounting procedures. A basic premise behind the exemption from accrual accounting for multiemployer plans was that the liability for an individual employer would be difficult to determine and would be of limited value, a premise that is not the case for USPS. Recognizing the cost of retiree health benefits on an accrual basis for financial reporting enhances the transparency of the cost of these benefits even in the absence of prefunding. As a result, in situations where prefunding requirements do not exist or are significantly relaxed or eliminated, accrual accounting provides an important function by recognizing the costs of these future benefits even in the absence of prefunding. It can also be the case that a year’s accrual cost can be lower than the amount funded. For example, the fixed payments required under current law may well be higher than the annual accrual cost that USPS would recognize under single employer accounting, although, again, the full unfunded liability would be recognized on the balance sheet. 
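The difference between the two reporting treatments can be illustrated with simple arithmetic. All figures below are hypothetical, chosen only to show the mechanics; none are USPS's actual reported amounts:

```python
# Hypothetical figures, in $ billions, contrasting the two reporting
# treatments described above; these are invented for illustration and are
# not USPS's actual amounts.
prefunding_payment = 5.5   # fixed prefunding payment required for the year
premium_share = 2.5        # employer share of retiree premiums paid directly
accrual_cost = 4.0         # actuarially determined cost of benefits earned
unfunded_liability = 48.0  # liability in excess of fund assets

# Multiemployer rules (USPS's current treatment): expense equals the required
# cash payments; no liability appears on the balance sheet unless a required
# payment is missed.
multiemployer_expense = prefunding_payment + premium_share
multiemployer_liability_reported = 0.0

# Single-employer rules: expense is the accrual cost, independent of the size
# of the cash payment, and the full unfunded liability is reported.
single_employer_expense = accrual_cost
single_employer_liability_reported = unfunded_liability
```

With these illustrative numbers, the year's expense under the multiemployer treatment ($8.0 billion) exceeds the accrual cost ($4.0 billion), while only the single-employer treatment puts the $48 billion unfunded liability on the balance sheet.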
Note that there is one other significant program, federal workers’ compensation under the Federal Employees’ Compensation Act, for which USPS’s financial reporting is based on actuarial projections of future benefits rather than on its annual required cash payments. USPS pays the Department of Labor each year for the cash benefits to current beneficiaries, but USPS records a liability on its balance sheet for the entire actuarial present value of future benefits for those who have already been injured, and recognizes the growth in this liability as an expense each year. This unfunded FECA liability on USPS’s balance sheet was $17.6 billion as of September 30, 2012. Timely action is essential in addressing the funding of USPS’s retiree health benefits. We have suggested that Congress must take action to address the uncertainty related to: 1) USPS’s inability to meet the current retiree health prefunding requirements, 2) reducing the unfunded retiree health benefit liability over time, 3) determining the proper allocation of costs between current and future ratepayers, and 4) enacting comprehensive postal reform legislation that would improve prospects for USPS’s long-term financial viability. USPS’s recent defaults on its retiree health prefunding payments and its inability to borrow now that it has reached its $15 billion borrowing limit create an even more urgent need for congressional action. The continued uncertainty surrounding the resolution of USPS’s financial problems and the funding of these payments creates uncertainty for mailers in developing their business plans, an uncertainty that could negatively affect mailers’ willingness to use USPS’s services. As noted earlier, USPS has also proposed withdrawing from FEHBP and administering its own health care plan for both workers and retirees, a proposal that is the subject of our ongoing work in another study. Congress should also consider how quickly and to what level prefunding of retiree health benefits should occur.
As previously noted, deferrals and lower payments will reduce USPS’s reported financial losses in the short term, but would increase its unfunded retiree health benefit liability and require larger annual payments in the future; yet at the same time, currently required short-term payments are higher than what would be required under the actuarial approach that begins in 2017. Both of these points raise issues regarding fairness to future and current ratepayers. Furthermore, postal ratepayers provide USPS with funding, but as mail volumes decline, there may be fewer ratepayers in the future to pay for deferred costs. In addition, the less USPS reduces its retiree health unfunded liability, the greater the potential consequences, with unclear impact, if USPS is ultimately unable to pay this unfunded liability. In considering the options for USPS to address its retiree health benefit liability, Congress should keep in mind that stopping or deferring prefunding of these benefits would serve as short-term relief, but would also increase the risk that USPS may not be able to make future payments if its core business continues to decline. Therefore, we continue to believe it is important that USPS prefund its retiree health benefit liability to the maximum extent that its finances permit. None of the funding approaches will be viable unless USPS has the ability to make the required payments. Without congressional or further USPS actions to cut postal costs, USPS will not have the finances needed to make annual payments in the short term and reduce its retiree health unfunded liability over the long term. USPS has stated that it will be unable to make any prefunding payment toward reducing its retiree health unfunded liability if it continues to experience cash flow difficulties.
While USPS may have limited control of its revenue stream because of advances in technological communication, it is important that USPS reduce its expenses to avoid even greater financial losses, repay its outstanding debt, and increase capital for investment. Consequently, as we have repeatedly stated, Congress and USPS need to reach agreement on a comprehensive package of actions to improve USPS’s financial viability. In previous reports, we have provided strategies and options, to both reduce costs and enhance revenues, that Congress could consider to better align USPS costs with revenues and address constraints and legal restrictions that limit USPS’s ability to reduce costs and improve efficiency. Implementing strategies and options to better align costs with revenues may better enable USPS to be in a financial position to prefund its retiree health benefit liability for its over one million active and retired postal employees and their beneficiaries. We provided a draft of this report to USPS, the USPS OIG, and OPM for review and comment. USPS and the USPS OIG provided comments, which are reprinted in appendixes III and IV, respectively. USPS and the USPS OIG did not disagree with the report’s conclusions and analysis about the trade-offs involved with the alternative funding approaches, but both commented that USPS cannot afford to make prefunding payments and provided additional context. OPM had no comments but provided technical clarifications, which we incorporated into the report as appropriate. USPS agreed that comprehensive reform is necessary to achieve financial sustainability. It also recognized its obligation to provide effective, affordable health benefits to its employees and retirees, but said that it does not have the financial resources to make prefunding payments required by current law. 
Further, USPS said that releasing this report is inappropriate because, in its view, the solution to managing its health care costs is to reduce the cost of future health care coverage by allowing USPS to sponsor its own medical plan. In response to USPS’s comment, we noted in the report that adopting any of the prefunding approaches analyzed in this report would not preclude USPS from continuing to pursue its proposal to administer its own plan, and that any resulting expected cost savings would be reflected in a lower unfunded liability and lower actuarially determined prefunding payments than otherwise. As USPS noted, we are currently reviewing USPS’s proposal to administer its own plan. The USPS OIG concurred with our analysis of the trade-offs among the alternative funding approaches that would result in paying more now or in the future, but stated its concern that the report needed additional context in four areas: 1) historical, 2) financial, 3) use of other assets to satisfy the retiree health benefit obligation, and 4) the problems with prefunding. First, the USPS OIG stated that USPS started prefunding its retiree health benefits as a result of the discovery that, because of external fund management misjudgments, it was on track to seriously overfund its pension obligations by $78 billion. The USPS OIG also said that a decision to turn a mistake into a second prefunding obligation created its own problems, including a 10-year schedule of prefunding payments that was structured toward a 100 percent funding goal, and that the aggressive payment schedule appears to have been set based on byzantine “budget scoring” considerations rather than actuarial assumptions or an evaluation of USPS’s ability to make the payments.
In our report, we noted that USPS’s reduction in pension contributions to the Civil Service Retirement System occurred as a result of the Postal Civil Service Retirement System Funding Reform Act of 2003, which switched the actuarial basis for future contributions to “dynamic” assumptions from the “static” assumptions that OPM projected would result in overfunding. Further, we pointed out that the 10-year schedule of prefunding payments for fiscal years 2007 through 2016 was not based on an actuarial assessment, and that the remaining required payments through fiscal year 2016 are significantly in excess of what would be calculated under the actuarial approach that begins in fiscal year 2017. We also noted that USPS proposed prefunding to Congress in 2003. Second, the USPS OIG discussed several points in a financial context. It said that USPS has never been able to afford a single payment—that it has either borrowed from the U.S. Treasury to make prefunding payments to date or that it has defaulted on them. However, we noted in our report that from fiscal year-end 2006 to 2010, USPS made total prefunding payments of $17.9 billion while borrowing an additional $9.9 billion from the U.S. Treasury. The USPS OIG also stated that now that the USPS has reached the limit of the amount it can borrow, it can no longer make the payments. We noted in our report that none of the funding approaches will be viable unless USPS has the ability to make the required payments, and that a comprehensive package of actions is needed to improve USPS’s financial viability. The USPS OIG also said that its “seal and grow” proposal was made in the context of USPS’s urgent financial situation and was meant as a temporary—not permanent—measure, and that we mistakenly represented it as a permanent payment plan.
Our report actually noted that the Seal and Grow Approach was intended to continue until USPS’s liability was fully funded—meaning, not thereafter; we added additional wording to clarify this point. The USPS OIG also pointed out that USPS has substantially funded its retiree benefit programs, with its pensions fully funded and its retiree health benefits half funded, with enough to cover current retirees. We did note in our report that the retiree health benefit liability is 49 percent funded and that approximately half of the liability is for current retirees. As for pensions, USPS reported in its most recent annual financial report (10-K) for fiscal year 2012 that it had an unfunded pension liability of almost $16 billion, which represented a 95 percent funded percentage (i.e., close to fully funded), based on a projected year-end fund balance of $285 billion and a liability of $300 billion; the prior year’s estimate had indeed been a pension surplus. Third, the USPS OIG stated that our report did not adequately explore the use of other assets USPS holds as a means of satisfying its retiree health benefit obligation. The USPS OIG noted that it has reported on two sources of assets worth billions that could be used to cover any unfunded obligation, including 1) an estimated $85 billion in real estate holdings and 2) surpluses in USPS’s pension funds. As we noted in our report, USPS has stated that it does not believe that USPS-occupied real estate would be a suitable asset within the PSRHBF because employer-occupied real estate cannot be readily sold to provide cash when needed to pay benefits. We noted that in the event of USPS’s being unable to fund its liabilities, USPS might have other debts and obligations in addition to unfunded retiree health benefit liabilities for which any available real estate proceeds would be needed. 
We noted that we reported on options and considerations with regard to any USPS pension surplus (in particular regarding FERS) in a prior report. Finally, the USPS OIG commented that our report should examine the problems of prefunding and examine why no business or government entity has taken advantage of prefunding, and that making prefunding payments at the current levels will bankrupt USPS. Our report did discuss these issues, beginning with the section entitled “Comparison with Other Entities.” While our report did not examine comprehensively the reasons for other entities’ prefunding decisions, we noted that although prefunding is not required, a number of private, state, local, and federal entities have elected to prefund some percentage of their retiree health benefits, as follows: Standard & Poor’s (S&P) reported that 126 of the 296 companies in the S&P 500 that offered “other post-employment benefits” (OPEB) prefunded some percentage of the associated liabilities; the USPS OIG reported that 38 percent of Fortune 1000 companies that offer retiree health benefits prefund them, at a median funding level of 37 percent; 18 states and 13 of the 39 largest local governments had set aside at least a combined $25 billion in assets to cover their OPEB liabilities; and the Department of Defense prefunds its retiree health benefits for Medicare-eligible retirees and beneficiaries, with a 100 percent target funding percentage, and this fund, which was started in 2002 in reaction to rapidly rising health care costs, had assets of $166 billion as of fiscal year-end 2010. We also recognized USPS’s inability to meet the current retiree health prefunding requirements along with the need for comprehensive legislative action. Specifically, we said, “None of the funding approaches will be viable unless USPS has the ability to make the required payments.
Without congressional or further USPS actions to cut postal costs, USPS will not have the finances needed to make annual payments in the short term and reduce its retiree health benefit liabilities over the long term.” As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the appropriate congressional committees, the Postmaster General, OPM, the USPS Inspector General, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions on this report, please contact Frank Todisco at [email protected] or Lorelei St. James at [email protected], or call (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Contact information and key contributors to the report are listed in appendix V.

Frank Todisco
Chief Actuary
Applied Research and Methods

Lorelei St. James
Director
Physical Infrastructure Issues

The undersigned meets the qualification standards of the American Academy of Actuaries to render the actuarial findings contained in this report.

This report (1) describes the status and financial outlook of the Postal Service Retiree Health Benefits Fund (PSRHBF), (2) analyzes how alternative proposals for funding retiree health benefits could affect future USPS payments and unfunded liabilities, and (3) determines key considerations for policymakers assessing the alternative proposals or other approaches. To describe the status and financial outlook of the PSRHBF, we reviewed and summarized USPS financial data regarding payments made to the fund, interest earned from such contributions, overall fund balance, and retiree health benefit liability.
We also reviewed our prior work and reviewed and summarized reports and data from USPS and others on how USPS’s financial condition has changed since 2006. We reviewed relevant statutes, proposed legislation, and sections of the President’s budget request for fiscal year 2012 pertaining to USPS’s health and pension benefit programs. We also interviewed USPS and OPM officials on the status and financial outlook of the PSRHBF. To determine the impact on USPS payments and unfunded liabilities under alternative approaches to fund retiree health benefits, we analyzed and compared current funding requirements and five alternatives. We interviewed USPS officials on the USPS’s ability to meet future mandated payments and to obtain information on current and projected employee (FTE) levels, compensation, and revenue. In addition, we met with OPM officials to discuss projection methodology, and assumption selection, for using the data provided by USPS to project future premium payments, normal costs, and liabilities. OPM provided us projections of these amounts, which we further analyzed to project future prefunding contributions and unfunded liabilities under the different approaches to prefunding that we analyzed. Additional information on data, assumptions, and methods is provided below. To determine key factors for policymakers to consider when assessing alternative approaches, we used our own actuarial judgment and expertise. We also examined prefunding requirements for retiree health benefits, and prefunding behavior, of other entities (federal, state, and local governments and private sector). 
In addition, we examined financial reporting requirements applicable to other entities for these benefits, reviewing relevant accounting standards promulgated by the Financial Accounting Standards Board (FASB), Governmental Accounting Standards Board (GASB), and Federal Accounting Standards Advisory Board (FASAB); we compared these standards to USPS’s financial reporting for these benefits. We relied on OPM’s actuarial projections of normal cost, accrued liability (referred to in the report, and in the remainder of this appendix, simply as “liability”) and premium payments. We obtained data on workforce projections from USPS, as described further below, which we projected further and supplied to OPM for use in the projections. OPM’s valuation of the cost of USPS’s retiree health benefit obligations entails the collection and analysis of participant data and claims cost data, the setting of demographic and economic assumptions, and the application of these data and assumptions to the provisions of the benefit program. We had extensive discussions with OPM regarding its valuation methodology and were satisfied with the reasonableness of the approach with regard to the issues discussed. However, we did not otherwise audit or evaluate OPM’s actuarial assumptions, methodology, calculations, or underlying data. Such an evaluation would have required a substantial amount of additional work beyond the scope of our assignment, and would also have required engaging additional actuarial resources with particular expertise in the valuation of health care benefits. For projecting the most recent valuation results into the future, we selected the methodology and projection assumptions in consultation with OPM. Additional detail on OPM’s methods and assumptions is available from OPM. It should be understood that projections of this type contain a significant degree of uncertainty, as discussed further in the section of the report on Sensitivity to Assumptions. 
Nonetheless, given the magnitude of the liabilities and the importance of being able to pay for these benefits, reasonable projections of these costs and liabilities provide essential information for enabling responsible stewardship of resources. OPM provided us with projected normal cost and premium payments for each year through 2040. OPM calculated and provided us with projected liability as of three points: the end of 2010 (the measurement date of the most recent data collection at the time of our request), the end of 2021, and the end of 2040. We used a linear interpolation to estimate the liability for each of the intervening years. For each future year, we calculated the prefunding contribution, based on the normal cost and unfunded liability, when an actuarial approach applied; rolled the assets forward by adding the prefunding contribution and investment income and subtracting premium payments, as applicable; calculated the next year’s unfunded liability based on these projected assets and the projected liability for that year; calculated the next year’s prefunding contribution based on this new unfunded liability; and so on to the end of the projection period. The calculation of the prefunding contribution—as well as the applicability of fixed versus actuarially determined contributions and whether premium payments came out of the fund—was based on the provisions of the prefunding approaches we modeled, as described in the main body of this report (table 2 and preceding text). Where an actuarially determined prefunding contribution was used, it was the sum of the normal cost and an amortization payment (mortgage-style amortization calculation) calculated to pay off the unfunded liability in equal annual installments. 
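The year-by-year mechanics described above can be sketched in code. The sketch below is purely illustrative: the dollar amounts (in billions), the discount rate, and the fixed amortization period are invented assumptions, and the function names are ours, not OPM's actual model or data:

```python
# Illustrative sketch of the projection mechanics described in the text.
# All amounts, rates, and the fixed amortization period are hypothetical.

def interpolate_liability(anchors, year):
    """Linearly interpolate the liability between valuation anchor years,
    as done for the years between the 2010, 2021, and 2040 valuations."""
    years = sorted(anchors)
    for y0, y1 in zip(years, years[1:]):
        if y0 <= year <= y1:
            frac = (year - y0) / (y1 - y0)
            return anchors[y0] + frac * (anchors[y1] - anchors[y0])
    raise ValueError("year outside anchor range")

def amortization_payment(unfunded, rate, years):
    """Mortgage-style payment retiring `unfunded` in equal annual installments."""
    return unfunded * rate / (1 - (1 + rate) ** -years)

def project(assets, liab_anchors, normal_cost, premiums, rate, amort_years,
            start, end):
    """Roll the fund forward one year at a time, as described in the text."""
    results = {}
    for year in range(start, end + 1):
        liability = interpolate_liability(liab_anchors, year)
        unfunded = liability - assets
        # actuarially determined contribution: normal cost plus amortization
        contribution = normal_cost + amortization_payment(unfunded, rate,
                                                          amort_years)
        # add the contribution and investment income, subtract premium payments
        assets = assets * (1 + rate) + contribution - premiums
        results[year] = (unfunded, contribution)
    return results
```

A real projection would also vary the normal cost, premiums, and amortization period by year and apply the fixed statutory payments where those govern instead of the actuarial formula; the loop structure, however, follows the sequence described above.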
Note that under the terms of the Senate Bill, which uses an 80 percent funded percentage target instead of 100 percent, the amortization is based on the difference between 80 percent of the liability and the fund’s assets, rather than the full unfunded liability, and 100 percent of the normal cost is added to the amortization payment, rather than 80 percent of the normal cost. OPM’s projections of liabilities are based on the current level of plan health benefits and do not reflect any proposals to reduce the actuarial value of benefits. USPS has proposed withdrawing from the Federal Employees Health Benefits Program (FEHBP) and administering its own health care plan for its employees and retirees. This report looks at retiree health benefit funding options assuming that USPS continues to participate in FEHBP under current provisions. We will be issuing a separate report on USPS’s proposal to administer its own health care plan. OPM’s projections also reflect the projected changes over time in the U.S. Treasury’s share of USPS’s retiree health benefit costs. The U.S. Treasury is responsible for the portion of USPS’s share of retiree health benefit premiums attributable to service prior to 1971, when the Post Office Department was transformed into the USPS. The U.S. Treasury’s share of costs is diminishing over time as the proportion of retirees who had pre-1971 service decreases. One of the factors affecting future changes in USPS’s liability for retiree health benefits is the size of its future workforce. The liability grows with future accruals of employee service and is also affected by when employees retire. USPS provided us with projected counts of career employees from 2011 through 2020. USPS noted that its intermediate-term planning horizon was through 2016 and that, because of the rapidly changing nature of the mailing environment and the overall economy, projections beyond that point are likely to have a higher margin of error.
USPS’s projection had its career-employee complement dropping from 561,000 in 2011 and 534,000 in 2012 (representing approximate averages over the fiscal year) to approximately 416,000 by 2016 and to 392,000 by 2020. USPS told us that it would be reasonable to assume that the complement would stabilize at that level thereafter. We assumed a constant career workforce of 392,000 for the remainder of the projection from 2020 through 2040. USPS viewed this workforce projection as its optimal target workforce path, assuming USPS would be able to achieve certain objectives regarding its network and other operational issues. It noted that its ability to achieve these reductions remains to be determined, and would be affected by negotiations with unions and any congressional actions. USPS also noted that its workforce projections were based on long-term projections of mail volume. There is, of course, uncertainty regarding future levels of mail volume. OPM found that using its standard valuation assumptions for such factors as employee retention and retirement, and adding in an amount of new hires necessary to stay on target, its projection model reasonably approximated USPS’s projected workforce path. Based on this workforce path and the number of projected retirements and other workforce reductions, OPM projected some new hiring to begin in 2014, and to continue as necessary to keep the workforce constant after 2020. OPM based new hire demographic profiles on the government-wide distribution of recent hires, since USPS has not been hiring enough recently to have adequate data for that purpose. So that we could also calculate USPS payments and unfunded liabilities as a percentage of employee compensation, USPS provided us with projections of compensation (salary and wages and benefits) to accompany the workforce projections, through 2020.
The data provided by USPS encompassed salary plus a portion of employee benefits; it did not include retiree health benefits, worker’s compensation, or any forecasted contract negotiations savings. For simplicity, we refer to these amounts as “compensation.” We projected these compensation amounts beyond 2020 to 2040. Since we assumed the USPS workforce to be constant over that period, we projected total compensation to increase by inflation plus one percent. USPS had provided us with two sets of compensation projections through 2020: one based on USPS’s own internal inflation assumption ranging from 1.7 to 2.2 percent annually over that period; and a second, at our request, assuming 3.0 percent inflation. We estimated an additional compensation projection based on 2.4 percent inflation from these data. We used the two sets of compensation projections—one based on 2.4 percent inflation and one based on 3.0 percent inflation—for our projections under the current law assumption basis and the Senate bill assumption basis, respectively. Liabilities and normal costs are based on the “Aggregate Entry Age Normal” actuarial cost method. A per-participant normal cost rate is determined based on an aggregate ratio of present value of future benefits at entry age to present value of future service at entry age, with service weighted to increase with medical inflation and with the accrual period from entry age to assumed retirement. The normal cost rate is computed based on the demographics and claims costs of the entire FEHBP population, not just the USPS population, to reflect how the plan actually works. OPM would need additional USPS-specific data to determine a USPS-specific normal cost. The accrued liability is equal to the present value of future benefits (PVB) minus the present value of future normal costs.
The PVB is just for the USPS population, but based on demographic assumptions for the entire FEHBP population, and without USPS-specific utilization, as this is how FEHBP premiums are determined. The actuarial cost method is the same one used by OPM in its financial reporting of the cost of these benefits (as required under FASAB accounting standards) and the same one used by OPM for determining funding requirements for the CSRS and FERS federal employee pension programs. Other actuarial cost methods could reasonably be adopted for determining USPS prefunding requirements, such as the projected unit credit method (which is also the method used for single-employer accounting under FASB). The actuarial cost method determines the portion of future retiree costs that are attributable to each year of employee service, and different methods build up the accrued liability more or less quickly over the working years. As discussed in the body of the report, OPM provided current and projected liabilities, normal costs, and premium payments on two different assumption bases: (1) the current law basis, which ties funding assumptions to those used by OPM for its financial reporting, which in turn is guided by the FASAB accounting standards, and (2) the Senate bill basis, which ties funding assumptions to those assumptions used by OPM to determine USPS’s funding requirements for CSRS and FERS. The assumptions differ with respect to discount rate, general inflation, and medical inflation (trend). These assumptions are disclosed in table 5 in the report. Demographic assumptions that are common to both the current law and Senate bill assumption bases can be found in OPM’s most recent funding valuation report for CSRS and FERS, though these are applied on a per-participant basis in the retiree health valuation and on a dollar-amount basis in the pension valuations.
OPM also assumes that present retiree participation rates in FEHBP, calculated by age and gender, continue into the future. The discount rate of 4.90 percent used for the current law assumption basis, which is the discount rate used by OPM in its reporting at September 30, 2011, represents the single rate equivalent to a 10-year average of Treasury yield curves, with yield curve maturities matched to the timing of projected payments, a methodology that satisfies SFFAS 33. We assumed that the discount rate would remain at 4.90 percent in future years. In fact, in each future year, a new 10-year average discount rate will be developed, and if interest rates were to remain unchanged from present levels, this would result in a lower future discount rate, as higher interest rates at the beginning of the 10-year averaging period are replaced by lower interest rates at the end of the averaging period. OPM indicated that modeling such changes would present significant computational difficulties, so we retained the steady 4.90 percent discount rate, which implies that interest rates would rise from current low levels. In making this assumption, we noted that a steady 4.90 percent discount rate is still significantly lower than the 5.75 percent discount rate assumed for the Senate bill assumption basis, and so still provides useful information regarding potential effects of variations in assumptions. We also note that the medical inflation assumption used in the projection was developed to be consistent with the discount rate and general inflation assumption (the latter is also based on a 10-year average), and that OPM's model would produce a lower medical trend assumption if discount rates and inflation assumptions were to decrease, offsetting much of the effect of the lower discount rate. Also, in projecting future premium payments, which went into projecting future liabilities, OPM did not "restart" the trend assumption vector each year. OPM does not normally project future liabilities.
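The mechanics of that averaging effect can be illustrated with hypothetical yields: if rates stay flat at a low current level, each year an older, higher yield drops out of the 10-year window, so the averaged discount rate drifts down toward the current rate.

```python
# Sketch: why holding the 10-year-average discount rate constant implies
# rising interest rates. The yield history below is hypothetical, chosen
# only to show the rolling-average mechanics described in the text.

historical = [6.0, 5.8, 5.6, 5.4, 5.2, 5.0, 4.5, 4.0, 3.5, 3.0]  # hypothetical annual yields, %
current_yield = 3.0  # assume rates stay flat at today's low level

rates = []
window = list(historical)
for year in range(10):
    rates.append(sum(window) / len(window))      # this year's 10-year average
    window = window[1:] + [current_yield]        # roll the window forward one year

print([round(r, 2) for r in rates])
```

The averaged rate declines every year until the window contains only the current yield, which is why assuming a constant 4.90 percent average embeds an implicit assumption that actual rates rise.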
It needs to calculate current liabilities for future payments each year, but fulfilling its mission does not require any calculation of future liabilities. As such, OPM did not have previously developed software to do such projections, and had to do special programming specifically for this request. A final methodological decision that had to be made was whether the projection assumptions should differ from the valuation assumptions. In actuarial projections, there is a distinction between "valuation assumptions" and "projection (or experience) assumptions." Valuation assumptions are those used to compute the liability and normal cost at any point in time. Projection assumptions model what actually happens as the projection moves forward, which might differ from the expectations embedded in the valuation assumptions. The Senate bill specifies a different assumption basis than current law, the House bill, or the Administration's Approach, but these specifications refer to valuation assumptions. While different valuation assumptions might be used, only one scenario can actually unfold in the real world. One way to reflect this situation in a projection would be to retain the different valuation assumptions for the different prefunding approaches, but then to project all the approaches under a uniform set of projection assumptions. However, this approach would create false precision, because at some point the valuation assumptions would change to reflect emerging experience, and the projection would then need to incorporate additional assumptions as to when that would happen. Accordingly, as a reasonable approach to compare the four prefunding approaches on an apples-to-apples basis, we modeled them under uniform assumptions—first using the current law assumption basis, with results presented in the main body of the report, and then using the Senate bill assumption basis, with results presented in appendix II of this report.
As discussed further in the section of the report on “Sensitivity to Assumptions,” it turns out that these two assumption bases do not produce significant differences in basic findings because of the offsetting effects of different discount rates and medical inflation assumptions. We conducted this performance audit from May 2012 through December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As discussed in the report, we projected USPS’s annual payments and unfunded liability under four prefunding approaches (Modified Current Law, Modified House, Modified Administration, and Modified Senate) and under two different assumption bases: the assumption basis specified in current law, and the assumption basis specified in the Senate bill. The differences between these two assumption bases are described in the report. The report presents projection results based on the current law assumption basis. This appendix presents the corresponding results based on the Senate bill assumption basis. As discussed in the report, our findings and conclusions are not materially different under the two different assumption bases. In addition to the individuals named above, Samer Abbas, Teresa Anderson, Beryl Davis, John Dicken, Kim Granger, Jacquelyn Hamilton, Hannah Laufe, Jennifer Leone, W. Stephen Lowrey, Kim McGatlin, Jonathan McMurray, Kristi Peterson, Steve Robblee, Amy Rosewarne, Aron Szapiro, and Crystal Wesco made key contributions to this report.
PAEA required USPS to prefund its future retiree health benefits as part of comprehensive postal reform by establishing the PSRHBF, along with an initial target period of 50 years for funding the unfunded liability. This requirement included annual payments to this fund from 2007 to 2016 of between $5.4 billion and $5.8 billion. USPS, its employee groups, and others have argued that this prefunding requirement is a major source of USPS's financial woes--reported by USPS as contributing $32 billion toward its $41 billion of net losses over the past 6 years. USPS defaulted on the last 2 years of PSRHBF payments, totaling $11.1 billion. As requested, this report addresses the (1) status and financial outlook of the PSRHBF, (2) impact on future annual USPS payments and unfunded liabilities of alternative approaches, and (3) key considerations for policymakers. GAO reviewed and summarized PSRHBF financial data and analyzed and compared current law requirements with five alternative approaches by developing projections based on OPM and USPS data. The Postal Service Retiree Health Benefits Fund (PSRHBF) covered about 49 percent of the U.S. Postal Service's (USPS) $94 billion retiree health benefit liability at fiscal year-end 2012. USPS's deteriorating financial outlook, however, will make it difficult to continue the current prefunding schedule in the short term, and possibly to fully fund the remaining $48 billion unfunded liability over the remaining 44 years of the schedule on which the 2006 Postal Accountability and Enhancement Act (PAEA) was based. The liability covers the projected benefits for about 471,000 current postal retirees and a portion of the projected benefits for about 528,000 current employees; it does not cover employees not yet hired. Under PAEA, USPS is responsible for contributing an additional $33.9 billion to the PSRHBF by fiscal year 2017, including the $11.1 billion USPS has defaulted on over the past 2 years.
PAEA also requires the Office of Personnel Management (OPM) to calculate the remaining unfunded liability in 2017 and develop an initial 40-year amortization payment schedule. USPS, however, projects further declines in mail volume and revenues that may continue to limit its ability to prefund the remaining retiree health benefit liability. GAO's analysis of maintaining current law requirements compared to five alternative approaches showed differing impacts on USPS's future annual payments and unfunded liabilities. For example, three of the approaches--1) the Administration's Approach, 2) Senate Bill (S. 1789) and 3) "Pay-as-You-Go" (no prefunding)--would reduce USPS's annual payments in the short term, thereby easing its immediate cash flow problems and financial losses. However, these approaches would increase USPS's unfunded liability, sometimes substantially, and require larger payments later. Deferring funding could increase costs for future ratepayers and increase the possibility that USPS may not be able to pay for some or all of its liability. Conversely, a fourth approach--the House Bill (H.R. 2309)--and the current law requirement would reduce USPS's unfunded liabilities more aggressively but may result in significantly higher USPS financial losses in the near future. If USPS stopped prefunding and let the existing fund grow with interest, the unfunded liability is projected to significantly increase. Under a fifth approach, if USPS stopped prefunding and used the existing fund to pay current and future premiums, the fund is projected to be exhausted by 2026. Private sector, state, local, and other federal entities are not required to prefund these benefits, though some do so to a limited extent, and most are required to recognize the future costs in their financial reporting. 
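The 40-year amortization schedule that PAEA directs OPM to develop is, at its core, a level-payment annuity on the remaining unfunded liability. A minimal sketch, using the report's $48 billion unfunded liability and borrowing the 4.9 percent current-law discount rate as an illustrative assumption (OPM's actual schedule and assumptions may differ):

```python
# Sketch: level annual payment that amortizes an unfunded liability over n years.
# The $48 billion liability comes from the report; the 4.9 percent rate is an
# assumption borrowed from the current-law basis, not OPM's actual schedule.

def level_payment(liability, rate, years):
    """Standard annuity formula: payment = L * r / (1 - (1 + r)**-n)."""
    return liability * rate / (1 - (1 + rate) ** -years)

pmt = level_payment(48e9, 0.049, 40)
print(round(pmt / 1e9, 2))  # annual payment in $ billions (about 2.76 here)
```

Note how sensitive the payment is to the schedule's length and rate: stretching the period or raising the assumed return lowers the required annual payment, which is one reason the report examines the trade-off between near-term payment relief and the size of the unfunded liability left for later.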
GAO identified several key considerations including: (1) the rationale and consequences of prefunding such benefits; (2) trade-offs affecting USPS's financial condition, such as sizes of the annual payments and unfunded liability; (3) fixed versus actuarially determined payments; (4) targeted funding levels; and (5) assumption criteria. USPS is intended to be a self-sustaining entity funded almost entirely by postal ratepayers, but its financial losses are challenging its sustainability. GAO has testified that USPS should prefund its retiree health benefit liabilities to the maximum extent that its finances permit, but none of the funding approaches may be viable unless USPS has the ability to make the payments. USPS's default on its last two required PSRHBF payments and its inability to borrow further make the need for a comprehensive package of actions to achieve sustainable financial viability even more urgent. GAO is not making new recommendations in this report, as it has already reported on strategies and options for USPS to achieve sustainable financial viability.
Since it was launched in 1990, the Hubble Space Telescope has sent back images of space that have made a significant contribution to our understanding of the universe. The telescope uses pointing precision, powerful optics, and state-of-the-art instruments to explore the visible, ultraviolet, and near-infrared regions of the electromagnetic spectrum. To keep it at the forefront of astronomical research and extend its operational life, Hubble’s instruments have been upgraded through a series of shuttle servicing missions. The fifth and final planned servicing mission was intended to install new science instruments, replace the telescope’s insulation, and replace the batteries and gyroscopes. According to NASA, the lifetime of the observatory on orbit is ultimately limited by battery life, which may extend into the 2007-2008 time frame, but scientific operations are limited by the gyroscopes that stabilize the telescope—whose lifetimes are more difficult to predict. NASA forecasts that the Hubble will likely have fewer than three operating gyroscopes by mid-2006, and fewer than two by mid-2007. In response to congressional concerns about NASA’s decision to cancel the servicing mission, NASA requested that the National Research Council conduct an independent assessment of options for extending the life of the Hubble Space Telescope. In May 2004, the Council established a committee to assess the viability of a shuttle servicing mission, evaluate robotic and ground operations to extend the life of the telescope as a valuable scientific tool, assess telescope component failures and their impact, and provide an overall risk-benefit assessment of servicing options. In an interim report issued in July 2004, the committee urged NASA to commit to a Hubble servicing mission that accomplishes the objectives of the canceled servicing mission and to take no actions that would preclude using a space shuttle to carry out this mission. 
According to a NASA official, the agency is not actively pursuing the shuttle servicing option but is not precluding it. NASA is currently evaluating the feasibility of performing robotic servicing of the Hubble Telescope. To facilitate the evaluation, the agency has formulated a robotic mission concept, which includes a vehicle composed of a robotic servicing module and another module that can be used to eventually de-orbit the telescope. The potential task list of activities for robotic servicing includes replacing the gyroscopes and batteries, installing new science instruments, and de-orbiting the observatory at the end of its life. According to a NASA official, contracts to facilitate the robotic mission were recently awarded for work to begin on October 1, 2004. The CAIB concluded that the Columbia accident was caused by both physical and organizational failures. The Board's 15 recommendations that must be implemented before the shuttle fleet can return to flight primarily address the physical causes of the accident and include eliminating external tank debris shedding and developing a capability to inspect and make emergency repairs to the orbiter's thermal protection system. NASA publishes periodic updates to its plan for returning the shuttle to flight to demonstrate the agency's progress in implementing the CAIB recommendations. The most recent update is dated August 27, 2004. This update identifies the first shuttle flight as occurring in spring 2005. NASA does not currently have a definitive cost estimate for servicing the Hubble Telescope using the shuttle. The agency focused on safety concerns related to a servicing mission by the space shuttle in deciding not to proceed, and did not develop a cost estimate. At our request, NASA prepared an estimate of the funding needed for a Hubble servicing mission by the space shuttle. NASA could not provide documented support for its estimate.
The agency recognizes that there are many uncertainties that could change the estimate. NASA has now begun to explore the costs and benefits of various servicing alternatives, including robotic servicing, which should enable NASA to make a more informed decision regarding Hubble’s future. At our request NASA began development of an estimate of the funding needed for a shuttle servicing mission to the Hubble. The estimate provided captures additional funds over and above NASA’s fiscal year 2005 budget request that would be required to reinsert the mission in the shuttle flight manifest for launch in March 2007. The estimate does not include funding already expended to support the canceled servicing mission and develop the science instruments. NASA has determined that the additional funds needed to perform a shuttle servicing mission for Hubble would be in the range of $1.7 billion to $2.4 billion. According to NASA, this estimate is based on what it might cost, but it does not take into account the technical, safety, and schedule risks that could increase the cost and/or undermine the viability of the mission. For example, NASA cites uncertainties related to two safety-related requirements: inspection and repair and crew rescue mission capabilities that would be autonomous of the International Space Station and for which NASA currently has not formulated a design solution. In addition, NASA cautions that it did not examine whether design solutions could be accomplished in time to service Hubble before it ceases operations. Table 1 shows NASA’s budget estimate phased by fiscal year (FY) for shuttle servicing of the Hubble Space Telescope, including ranges for some of the estimates. While we did not independently verify each component of NASA’s estimate, we requested that NASA provide the analytical basis and documentary support for selected portions of the estimate, primarily those with large dollar values. NASA could not provide the requested information. 
For example, NASA officials told us that the Hubble project's sustaining engineering costs run $9 million to $10 million per month, but they were unable to produce a calculation or documents to support the estimate because they do not track these costs by servicing mission. We also requested the basis of estimate for the costs to delay shuttle phase-out and for tools development for vehicle inspection and repair without the International Space Station (a component of extravehicular activity above). In response, NASA provided the assumptions upon which the estimates were based and stated that the estimates were based on information provided by Johnson Space Center and Kennedy Space Center subject matter experts. NASA also added that rigorous cost estimating techniques could not be applied to the tools development estimate because a rescue mission currently is only a concept. No analytical or documentary support was provided. In estimating the cost for the autonomous inspection and repair and rescue mission capabilities, NASA used a 30 to 50 percent uncertainty factor because of the very high uncertainty in the cost of developing and conducting a mission that is not adequately defined—i.e., NASA's estimate of $425 million plus 50 percent equals the $638 million upper range shown in the table above for these two items added together. As with the other estimates for which we requested analytical and documentary support, NASA was not able to provide it because the agency could not do a risk analysis without a design solution, according to a NASA official. The lack of documented support for portions of NASA's estimate increases the risk of variation to the estimate. Further, NASA recognizes that there are many uncertainties that could change the current estimate.
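The uncertainty treatment NASA described is a simple multiplicative range; applying the stated 30 and 50 percent factors to the $425 million base reproduces the $638 million upper bound cited above (a sketch of the arithmetic only, not NASA's estimating model):

```python
# Sketch: applying NASA's 30-50 percent uncertainty factor to a base estimate.
base = 425.0  # $ millions: estimate for autonomous inspection/repair plus rescue
low_bound = base * 1.30   # +30 percent
high_bound = base * 1.50  # +50 percent
print(round(low_bound, 1), round(high_bound, 1))  # 552.5 and 637.5; reported as ~$638 million
```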
The 2004 NASA Cost Estimating Handbook states that cost analysts should document the results of cost estimates during the entire cost estimating process and that the documentation should provide sufficient information on how the estimate was developed so that independent cost analysts could reproduce the estimate. According to the handbook, the value of the documentation and analysis is in providing an understanding of the cost elements so that decision-makers can make informed decisions. Recently, we also reported that dependable cost estimates are essential for establishing priorities and making informed investment decisions in the face of limited budgets. Without this knowledge, a program’s estimated cost could be understated and thereby subject to underfunding and cost overruns, putting programs at risk of being reduced in scope or requiring additional funding to meet their objectives. Since we began our review, attention has focused on alternatives to a shuttle mission, such as robotic servicing of Hubble. NASA has formed a team to evaluate Hubble servicing alternatives, including cost information. This analysis should enable NASA to make a more informed decision about Hubble’s future and facilitate NASA’s evaluation of the feasibility of robotic servicing options. Currently, NASA has developed budget estimates for implementing the CAIB recommendations required to return the space shuttle to flight but not for all of the CAIB recommendations. NASA provided us with documentary support for portions of the return to flight estimate, but we found it to be insufficient. According to NASA, the agency’s cost for returning the shuttle to flight, which is slightly over $2 billion, will remain uncertain until the completion of the first shuttle missions to the International Space Station in fiscal year 2005. 
NASA’s return to flight activities involve enhancing the shuttle’s external tank, thermal protection system, solid rocket boosters, and imagery system to address the physical cause of the Columbia accident—a piece of insulating foam that separated from the external tank and struck a reinforced carbon-carbon panel on the leading edge of the orbiter’s left wing. To address this cause, NASA is working to eliminate all external tank debris shedding. Efforts are also in place to improve the orbiter’s thermal protection system, which includes heat resistant tiles, blankets, and reinforced carbon-carbon panels on the leading edge of the wing and nose cap of the shuttle, to increase the orbiter’s ability to sustain minor debris damage. NASA is also redesigning the method for catching bolts that break apart when the external tank and solid rocket boosters separate as well as providing the capability to obtain and downlink images after the separation. NASA and the United States Air Force are working to improve the use of ground cameras for viewing launch activities. Table 2 shows NASA’s budget estimates for return-to-flight activities. However, the majority of NASA’s budget estimates for returning the shuttle to flight are not fully developed—including those for fiscal year 2005—as indicated by the agency’s internal approval process. The Program Requirements Control Board (PRCB) is responsible for directing studies of identified problems, formulating alternative solutions, selecting the best solution, and developing overall estimates. According to NASA, actions approved with PRCB directives have mature estimates, while those with control board actions in process—that is, currently under review but with no directives yet issued—are less mature. Both the content and estimates for return to flight work that have not yet been reviewed by the control board are very preliminary and subject to considerable variation.
Table 3 shows the status of control board review of NASA return to flight budget estimates and the percent of the total estimate at each level of review. NASA provided us with the PRCB directives and in some cases, attachments which the agency believes support the estimate. However, we did not find this support to be sufficient. According to NASA’s cost estimating handbook, estimates should be documented with sufficient detail to be reproducible by an independent analyst. Nevertheless, in many cases, there were no documents attached to the directive, and in cases where documents were attached to the directives, the documents generally provided high-level estimates with little detail and no documentation to show how NASA arrived at the estimate. For example, a request for $1.8 million to fund the network to support external tank camera transmissions indicated that $1.516 million of the amount would be needed for Goddard Space Flight Center to provide the necessary equipment at receiving stations, labor, subcontractor costs, and travel and that the remaining $290,000 would be needed for improvements to the receiving antennas ($104,000) and recurring costs ($62,000 per flight) for three trucks and the associated transponder time. However, the documents did not show how the requester for the $1.8 million arrived at the estimates. NASA officials told us that the reason for this was that the managers approving the directives trusted their employees to accurately calculate the estimate and maintain the support. In addition, our review of the documents indicated and NASA confirmed that quite a few of the estimates were based on undefinitized contract actions (UCA)—that is, unnegotiated contract changes. Under these actions, NASA officials can authorize work to begin before NASA and the contractor agree on a final estimated cost and fee. 
As we have stated in our high-risk series, relying on unnegotiated changes is risky because it increases the potential for unanticipated cost growth. This, in turn, may force the agency to divert scarce budget resources intended for other important programs. As of July 31, 2004, NASA records showed 17 UCAs related to return to flight with not-to-exceed amounts totaling $147.5 million. NASA’s estimate for the entire effort under these UCAs totals about $325 million, or 15 percent of NASA’s current $2.2 billion return to flight estimate. In June 2004, NASA established additional requirements for funding requests submitted to the PRCB. Under the new policy, an independent cost estimate must be developed for requests greater than $25 million, and a program-level cost evaluation must be completed for requests over $1 million. The program-level evaluation consists of a set of standard questions to document the rationale and background for cost-related questions. The responses to the questions are initially assessed by a cost analyst but are reviewed by the Space Shuttle Program Business Manager before submission to the PRCB. NASA provided us with two examples of requests falling under the new requirements. Both of the examples had better support than those with PRCB directives, but documentary support was still not apparent. For example, the funding request for a debris radar indicated that the estimate was based on a partnering agreement with the Navy and the Navy’s use of the technology. However, the program-level evaluation pointed out that no detailed cost backup was provided. 
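The tiered review requirements in the June 2004 policy amount to a simple threshold rule, sketched below (the function name and step labels are ours, for illustration; they are not NASA terminology):

```python
# Sketch: PRCB review tiers for funding requests under the June 2004 policy.
# Requests over $1 million need a program-level cost evaluation; requests
# greater than $25 million also need an independent cost estimate.

def review_required(request_millions):
    """Return the review steps a funding request triggers under the policy."""
    steps = []
    if request_millions > 1:
        steps.append("program-level cost evaluation")
    if request_millions > 25:
        steps.append("independent cost estimate")
    return steps

print(review_required(30))  # a $30 million request triggers both tiers
```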
The other example, which was a funding request to change the processes currently in place for the Space Shuttle Program’s problem reporting and corrective action system, was very well supported in terms of analysis, as the requester prepared detailed spreadsheets calculating the funding requirements according to a breakdown of the work to be performed, cited sources for labor rates, and provided assumptions underlying the calculations. However, as pointed out in the program evaluation of the request, there was no support provided for the estimate other than the initiator’s knowledge of the change. We believe that future compliance with NASA’s new policy establishing additional requirements for funding requests and the inclusion of documentary support could potentially result in more credible return to flight budget estimates. According to NASA, estimates for fiscal year 2005 and beyond will be refined as the Space Shuttle Program comes to closure on return to flight technical solutions and the return to flight plan is finalized. NASA expects that by late fall of 2004, a better understanding of the fiscal year 2005 financial situation will be developed. However, NASA cautions that the total cost of returning the shuttle to flight will remain uncertain until completion of the first shuttle missions to the space station, scheduled to begin in spring 2005. In written comments on a draft of this report, the NASA Deputy Administrator stated that the agency believes that both the estimate and methodology used in the calculation of costs for reinstating the Hubble Space Telescope servicing mission are sound and accurate, given the level of definition at this point in time. Notwithstanding that belief, the agency agreed that portions of the servicing mission activities lacked the design maturity required to estimate the costs according to accepted and established NASA procedures. 
Specifically, NASA agrees that the Hubble Space Telescope work breakdown structure was not constructed to collect program costs. At the same time, NASA believes it is erroneous to suggest that NASA has no valid basis for the numbers provided, citing the “Servicing Mission 4 Resources Management Plan,” which describes the effort required for completion of a servicing mission. According to NASA, although the program’s accounting system does not capture sustaining engineering costs in GAO’s preferred format, the Servicing Mission 4 Resources Management Plan details mission schedules and staffing, and applying contractor and civil service rates to that staffing level can accurately reflect the effort required to execute a servicing mission. We requested this type of analysis and documentary support, but NASA representatives did not offer such a calculation. Rather, the officials stated that the sustaining engineering costs were based on management’s assessment of contractor financial data and in-house service pool charges and that these activities could not be traced back to source documentation. Without adequate supporting data, we cannot assess the accuracy and reliability of such information. NASA acknowledged that the agency does not have a technical design from which to derive the cost for the on-orbit inspection and repair of the shuttle independent of support from the International Space Station. In the case of the unsupported cost estimate for delaying the phase out of the space shuttle in order to complete a manned Hubble servicing mission, NASA stated that it used approved budget projections for the operating years affected by the insertion of the Hubble servicing mission and prorated the extension of the service life. According to NASA, a range was added to the estimate to account for uncertainties and retention of critical skills. The estimates were presented as a rough order of magnitude. 
NASA stated that it provided its assumptions to demonstrate the reasonableness of the estimates. Nevertheless, in spite of the uncertainties in the estimate, which we recognized in our report, NASA guidance states that cost estimates should be documented during the entire cost estimating process and that the documentation should provide sufficient information on how an estimate was developed to be reproducible by independent cost analysts. NASA did not provide us with this type of documentation. Without adequate supporting data, we cannot assess the accuracy and reliability of such information. We do not agree that the use of approved budget projections is a reliable cost estimating methodology, particularly given the long-term budget implications of the extension of the space shuttle’s service life. NASA believes that the examples it provided of the actions to implement several of the CAIB recommendations attest to the rigor of the process and approved procedures NASA utilized to validate the costs. According to NASA, the estimates will mature as the technical solutions mature, but the estimates were not refined at the time of our review. The agency believes the outstanding technical issues necessary to return to flight are beginning to be resolved. However, the examples that NASA provided were in support of estimates that the agency considers mature. We requested support for high dollar portions of NASA’s estimate, which the agency did not provide. However, NASA selectively provided examples of what it considered to be mature estimates. We reviewed the examples but found that most of them contained insufficient documentation to assess the reliability of the estimates. In many cases, there were no documents in the approval packages to support the estimates, and in cases where there were documents, they generally provided high-level estimates with little detail and no documentation to show how NASA arrived at the estimates. 
We believe that because of its difficulty providing reliable cost estimates, NASA cannot provide the Congress assurance that its budget request for the shuttle program for fiscal year 2006 will be sufficient and that shortfalls would not need to be met through reductions in other NASA programs. NASA stated that it believes the use of UCAs is both reasonable and necessary for return to flight activities. We agree that UCAs may be justified to facilitate work outside the scope of existing contracts to expedite the return to flight activities. However, the use of UCAs appears to be a growing trend and is a risky contract management practice because it increases the potential for unanticipated cost growth. In the past, we cited the agency’s use of UCAs as one of the reasons we retained contract management as a high-risk designation for NASA to focus management attention on problem areas that involve substantial resources. Finally, NASA agrees that cost estimates for significant development activities should be appropriately documented. According to NASA, additional requirements for cost estimates and internal controls recently established by the program represent a step in ensuring the appropriate documentation is developed as solutions are identified. As stated in our report, we believe that future compliance with this new policy could potentially result in more credible budget estimates. In a broader context, reliable and supportable cost estimating processes are important tools for managing programs. Without this knowledge, a program’s estimated cost could be understated and thereby subject to underfunding and cost overruns, putting programs at risk of being reduced in scope or requiring additional funding to meet their objectives. Further, without adequate financial and nonfinancial data, programs cannot easily track an acquisition’s progress and assess actions to be taken before it incurs significant cost increases and schedule delays. 
As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to the NASA Administrator and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Key contributors to this report are acknowledged in appendix III.

To assess the basis for NASA’s Hubble servicing mission cost estimate, we analyzed NASA’s estimate of the funding needed for a shuttle servicing mission and supporting documentation, and we reviewed NASA documents explaining the rationale for the decision and identifying alternatives to shuttle servicing. We interviewed program and project officials to clarify our understanding of the available cost information and NASA’s rationale for the decision. To test the sufficiency of the support for the estimates provided by NASA, we requested the analytical basis and documentary support for selected portions of the estimates, primarily those with large dollar values. In addition, we compared NASA’s decision-making process with relevant Office of Management and Budget and NASA guidance on information and analyses recommended to enable decision-makers to select the best alternative. To determine the basis for NASA’s cost estimate for implementing all of the CAIB recommendations, we reviewed the CAIB report (volume 1), NASA’s return to flight implementation plan and budget estimates, and agency documentation discussing the return to flight budget estimate. We interviewed program officials to obtain a better understanding of NASA’s plans for returning the space shuttle to flight, the status of that effort, and the estimated cost. 
To test the sufficiency of the support for NASA’s return to flight estimate, we requested the analytical basis and documentary support for selected high dollar portions of the estimate. To accomplish our work, we visited NASA Headquarters, Washington, D.C.; and Goddard Space Flight Center, Maryland. We performed our review from March through September 2004 in accordance with generally accepted government auditing standards. Staff making key contributions to this report were Jerry Herley, Erin Schoening, Karen Sloan, and Jonathan Watkins.
Hubble's continued operation has been dependent on manned servicing missions using the National Aeronautics and Space Administration's (NASA) shuttle fleet. The fleet was grounded in early 2003 following the loss of the Space Shuttle Columbia, as NASA focused its efforts on responding to recommendations made by the Columbia Accident Investigation Board (CAIB). In January 2004, NASA announced its decision to cancel the final planned Hubble servicing mission, primarily because of safety concerns. Without some type of servicing mission, NASA anticipates that Hubble will cease to support scientific investigations by the end of the decade. NASA's decision not to service the Hubble prompted debate about potential alternatives to prolong Hubble's mission and the respective costs of these alternatives. This report addresses the basis of NASA's cost estimates to (1) service Hubble using the shuttle and (2) implement recommendations made by the CAIB. GAO is continuing its work on the congressional request that GAO examine the potential cost of a robotic servicing mission to the Hubble Telescope. Although a shuttle servicing mission is one of the options for servicing the Hubble Space Telescope, to date, NASA does not have a definitive estimate of the potential cost. At our request, NASA prepared an estimate of the funding needed for a shuttle servicing mission to the Hubble. NASA estimates the cost at between $1.7 billion and $2.4 billion. However, documentary support for portions of the estimate is insufficient. For example, NASA officials told us that the Hubble project's sustaining engineering costs run $9 million to $10 million per month, but they were unable to produce a calculation or documents to support the estimate because they do not track these costs by servicing mission. Additionally, the agency has acknowledged that many uncertainties, such as the lack of a design solution for autonomous inspection and repair of the shuttle, could change the estimate. 
At the same time, NASA has yet to develop a definitive cost estimate for implementing all of the CAIB's recommendations but has developed a budget estimate for safely returning the shuttle to flight--a subset of activities recommended by the CAIB as needed to return the shuttle to full operations. NASA currently estimates return to flight costs will exceed $2 billion, but that estimate will likely be refined as the agency continues to define technical concepts. NASA provided support for portions of the estimate, but we found the support to be insufficient--either because key documents were missing or the estimates lacked sufficient detail. Further, NASA cautions that return to flight costs will remain uncertain until the first return to flight shuttle mission, which is scheduled to go to the International Space Station in spring 2005.
Financial literacy has been defined as the ability to use knowledge and skills to manage financial resources effectively for a lifetime of financial well-being. To make sound financial decisions, individuals need to be equipped not only with a basic level of financial knowledge but also with the skills to apply that knowledge to financial decision making. Thus, financial literacy encompasses both financial education—the process of improving consumers’ understanding of financial products, services, and concepts—and consumers’ behavior as it relates to their ability to make informed judgments. In the United States, a number of trends have emerged in recent years that underscore the importance of financial literacy. For example, investment options and credit products have grown in number and complexity. In addition, consumers are assuming greater responsibility for their own retirement savings, with traditional defined-benefit retirement plans becoming increasingly rare. Evidence suggests that many U.S. consumers could benefit from improved financial literacy. In a 2010 survey of U.S. consumers prepared for the National Foundation for Credit Counseling, a majority of consumers reported they did not have a budget, and about one-third were not saving for retirement. In a 2009 survey of U.S. consumers by the FINRA Investor Education Foundation, a majority believed themselves to be good at dealing with day-to-day financial matters, but the survey also revealed that many had engaged in financial behaviors that generated unnecessary expenses and fees and had difficulty with basic interest and other financial calculations. A wide variety of organizations provide financial education resources, including nonprofit community-based organizations, consumer advocacy organizations, financial services companies, trade associations, employers, and local, state, and federal government entities. 
Some financial literacy initiatives are aimed at the general population, while others target certain audiences, such as low-income individuals, military personnel, high school students, seniors, or homeowners. Similarly, some financial literacy initiatives cover a broad array of concepts and financial topics, while others target specific topics, such as managing credit, investing, purchasing a home, saving for retirement, or avoiding fraudulent or abusive practices. Efforts to improve financial literacy can take many forms. These can include one-on-one counseling; curricula taught in a classroom setting; workshops or information sessions; print materials, such as brochures and pamphlets; and mass media campaigns that can include advertisements in magazines and newspapers or on television, radio, or billboards. Many entities use the Internet to provide financial education, which can include information and training materials, practical tools such as budget worksheets and loan and retirement calculators, and interactive financial games. Youth-focused financial education programs are generally tied to a school curriculum. In 2009, 13 states had requirements for a course in personal finance education prior to high school graduation, and 34 states required personal finance education standards to be implemented to some extent in the curriculum, according to a survey by the Council for Economic Education. In 2009, more than 20 federal agencies had initiatives related to improving financial literacy. In some cases, federal agencies develop and provide financial education directly. For example, FDIC has developed and disseminated Money Smart, a comprehensive financial education curriculum, and the Federal Trade Commission has developed numerous brochures and Web resources on topics such as credit products, identity theft, and fraudulent schemes. 
In other cases, federal agencies provide grants or other support to nongovernmental organizations that provide the direct financial education. For example, in fiscal years 2009 and 2010, Treasury’s Financial Education and Counseling Pilot Program provided grants to eligible community and other organizations to provide financial education and counseling services to prospective homebuyers. The multiagency Financial Literacy and Education Commission, which was created in 2003, was charged with, among other things, developing a national strategy to promote financial literacy and education, coordinating federal efforts, and identifying areas of overlap and duplication. The commission is chaired by the Secretary of the Treasury, and Treasury’s Office of Financial Education and Financial Access provides its primary staff support. In addition, the Dodd-Frank Act required the establishment of an Office of Financial Education within the Consumer Financial Protection Bureau, and the director of the bureau will serve as Vice Chair of the Financial Literacy and Education Commission. The federal government does not generally certify or approve financial literacy providers or regulate the content of the services they provide, except in certain instances. For example, the Bankruptcy Code requires individuals to receive budget and credit counseling from an approved provider before filing a petition for bankruptcy and also requires bankruptcy petitioners to complete an instructional course on personal financial management in order to have their debts discharged. As such, the Department of Justice’s U.S. Trustee Program approves providers who meet certain criteria to provide these services. In addition, HUD approves housing counseling agencies to provide certain services and awards competitive grants to approved agencies to fund those services. 
While there is a fairly extensive literature on financial literacy, relatively few evaluations of financial literacy programs have been published that use empirical evidence, and even fewer have measured a program’s impact on the participants’ behavior. One reason for this may be that the field of financial literacy is relatively new and many programs have not been in place long enough to allow for a long-term study of their effectiveness; for example, many of the key federal financial literacy initiatives were created only within the past 10 years. In the view of some experts and practitioners in the field of financial literacy with whom we spoke, the approaches that are most effective in meaningfully improving consumers’ financial behavior are not fully known. After conducting a literature search, we identified 142 papers published since 2000 that addressed the value or effectiveness of financial literacy and were authored by individuals or organizations that appeared to have significant experience or expertise in the field. We focused our review on 29 studies we identified among this group that met four additional criteria. First, they evaluated the outcomes of a specific program, approach, or policy. Second, they used empirical evidence—that is, they used data rather than anecdotal evidence. Third, they were based on original data collection rather than reviews of existing literature. Finally, they were determined to be sufficiently reliable and methodologically rigorous for inclusion in our review. The evaluations of financial literacy programs that are most reliable, useful, and definitive include three key elements, according to some experts with whom we spoke and literature that we reviewed: they measure behavioral change, track participants over time, and use a control group. 
The extent to which the studies we reviewed incorporated these elements varied:

Measure behavioral change: Of the 29 studies we reviewed that evaluated the effectiveness of a financial literacy program or initiative, 22 measured, among other things, its impact on the participants’ behavior. The remaining seven studies did not measure the program’s impact on behavior but instead measured outcomes such as improvements in knowledge, attitude, or anticipated behavior. In general, the ultimate goal of financial education is to favorably affect consumer behavior, such as to promote improved saving and spending habits, wise use of credit, and avoidance of fraudulent or disadvantageous financial products. A financial education program may be of limited effectiveness if, for example, it increases participants’ knowledge of retirement savings issues but does not actually affect, on average, participants’ behavior through increased retirement contributions or other measures.

Track participants over time (longitudinal): Eighteen of the 29 evaluations we reviewed were longitudinal—they involved the repeated examination of the study participants over time. Longitudinal studies of financial education programs can be important because these programs often seek to affect long-term outcomes, such as improved credit scores or increased retirement savings, that may occur several months or years after the end of the program. For example, a financial education program that seeks to increase homeownership would, ideally, track whether participants had become successful homeowners over a period of many years.

Involve a control group: Seven of the 29 evaluations we reviewed used a control group—that is, the evaluation measured participants in the financial education program against a comparison group that did not participate in the program. 
Use of a control group helps to isolate the impact of a financial education program from other influences, such as changes in the overall economy, and provides a baseline against which to compare the program’s effect. It also can help avoid selection bias because individuals who choose to participate in a financial education program may be those who are most interested and motivated to change or who place a greater value on their future. Experts in financial literacy and program evaluation have cited many significant challenges to conducting rigorous and definitive evaluations of financial literacy programs that include these elements. For example, measuring a change in participant behavior is much more difficult than measuring a gain in knowledge, which can often be captured through a simple post-course test. Measuring behavior often relies on self-reported information, which can be inaccurate, or may require tracking credit scores, account balances, or other data that may be proprietary. Moreover, many organizations lack the financial resources or expertise to conduct program evaluation, particularly long-term evaluation involving a control group, which can be especially time and labor intensive. This is often the case when evaluations require tracking populations that are more transient in nature, such as college-aged individuals. In addition, because many variables can affect consumer behavior and decision making, ascribing long-term changes to a particular program is difficult. Moreover, some of the evaluation literature we reviewed noted that longitudinal studies using a control group and measuring behavioral change cannot be practically or realistically applied to all programs. Consequently, many evaluations rely on other measures that are less complex and less resource intensive to measure, such as knowledge gains, changes in attitudes, or outputs. 
One academic review of financial literacy evaluations found that the majority of financial education programs it reviewed only measured program outputs, such as the number of individuals served or the volume of materials distributed. The 2008 National Research Symposium on Financial Literacy and Education noted that one challenge in developing and implementing successful program evaluation for financial education is the field’s variety of core content, delivery methods, and target populations, as well as differences in the goals and objectives of specific programs. Therefore, identifying a common set of reliable methods and measures that can be used to make broad-based comparisons across programs can be difficult. For example, the appropriate evaluation for a media campaign that seeks broadly to increase consumer awareness may be very different from the evaluation of an individualized counseling program. The 29 evaluations of financial education programs we reviewed showed that some programs are effective in changing consumer behavior or otherwise demonstrating positive outcomes. For example, certain programs using approaches as diverse as individualized one-on-one credit counseling, employer-provided retirement seminars, and education provided in a classroom setting have each been shown to have effective outcomes. However, the heterogeneity among the programs evaluated and the nature of the evaluations themselves make generalizing or drawing conclusions about exactly which methods and strategies are most effective in improving financial literacy difficult. In addition, the studies we reviewed did not always have consistent results. For example, studies examining the effectiveness of state-mandated financial education have sometimes had conflicting conclusions. As a result, it appears that no single approach, delivery mechanism, or technology necessarily constitutes the best practice for improving financial literacy. 
Results of the studies we reviewed show that individual financial literacy programs have had positive results. Further, some of these programs have had a positive impact on participants’ financial behavior and not just on their knowledge. Of the 29 studies we identified as meeting our criteria, 15 evaluated classroom-based initiatives aimed at young people, 8 evaluated classroom-based initiatives aimed at adults, and 6 evaluated other delivery mechanisms, including one-on-one counseling and content offered via the Internet, newsletters, and video. In addition, two of the studies assessed financial literacy programs operated by the federal government: FDIC’s Money Smart and the U.S. Army’s Personal Financial Management Training. (Additional information on the 29 studies that we focused on is in app. II.) We identified 15 studies that evaluated the effectiveness of classroom- based programs or curricula designed to improve financial literacy among elementary, high school, or college students. Generally, these studies found that classroom curricula on general financial education, which covered topics such as spending, saving, and budgeting, increased students’ knowledge of these topics. Ten of the 15 studies also assessed the impact of a program on students’ subsequent behavior and found mixed results. Examples of studies that address youth classroom education include the following: The National Endowment for Financial Education’s High School Financial Planning Program, a high school curriculum on basic financial planning concepts, was evaluated in 2003-2004 by independent academic researchers. The study found that students who participated in the program experienced significant improvement in their financial knowledge, behavior, and confidence by the end of the course. In addition, about 60 percent of participants had positively changed their spending and savings patterns 3 months after the program had ended. 
In 2008, an outside research firm assessed Junior Achievement’s Finance Park, a 6-week economics education program designed for middle school students that combined classroom instruction with a daylong role-playing exercise. Using surveys conducted before and after students had participated in the program, the study found statistically significant improvement in students’ content knowledge, such as their ability to develop a personal budget. It also found that their confidence in monetary decisions and ability to be successful had increased. A 2007 study by researchers at Ohio State University used a Web-based survey of university alumni to investigate the impact of personal finance education delivered in high school and college. The study found that participating in a high-school or college-level personal finance course did not result in improvements in savings rates among participants. Individuals who had participated in a college-level personal finance course were found to have higher levels of knowledge about investment issues, although no such effect was found for individuals who had taken a personal finance course in high school. In addition, we identified four studies that attempted to assess the effect of legislative mandates that exist in certain states requiring school districts to include personal finance instruction in middle school or high school curricula. As noted earlier, as of 2009, 13 states required students to take a personal finance course as a high school graduation requirement. Three of the studies we reviewed reported that students in states that mandated financial education were more likely to have greater financial knowledge or better financial behaviors, such as increased rates of saving. For example, a 2001 study used a national survey to determine the long-term behavioral effects of high school financial curriculum mandates. 
The study found that respondents who graduated when state-mandated financial education was in effect had higher saving and wealth accumulation rates than those respondents who had graduated prior to such a mandate. In contrast, a study conducted in 2009 by researchers at Harvard Business School came to a different conclusion. Reviewing data from three U.S. Censuses, the researchers found that individuals whose curriculum included state-mandated financial education had saving rates identical to those of students in the same state who graduated prior to the state mandate. However, there are limitations to the methodologies used to assess the effect of legislative mandates. For example, some of these studies rely on proxy measures, such as when the participant likely graduated from high school, to determine whether the person participated in a mandated financial education program. Further, these studies do not typically discern the impact of the mandate from other important factors, such as changes in the overall economy, that affect financial behaviors. We identified eight studies that reviewed the effectiveness of classroom- based programs or curricula designed to improve financial literacy among adults. Some of these programs provided general financial education and others focused on particular topics, such as preparing for retirement. In addition, some of the programs were aimed at a general population, while others targeted specific populations, such as service members or individuals with low incomes or substantial debt. With some exceptions, programs reviewed were found to be effective in improving financial knowledge and behaviors, particularly among participants with the least education or who faced significant financial challenges. 
Examples of these studies include the following: A 2007 study conducted by FDIC evaluated Money Smart, a comprehensive financial education curriculum designed to help low- and moderate-income individuals enhance their financial skills and create positive banking relationships. The study surveyed individuals before and after their participation in the program and followed up by telephone 6 to 12 months after their final class. The study found that participants in the Money Smart training were more likely to engage in positive behaviors after completing the course, including opening deposit accounts, saving money in a mainstream financial institution, and adhering to a budget. Researchers studied the effect of a 2-day financial education course taught to soldiers by college instructors. Soldiers who finished the course completed a follow-up survey of financial behaviors, and the results were compared to those of a control group of soldiers who had not taken the course. Soldiers who had taken the financial education course were more likely to have engaged in positive behaviors, such as comparison shopping, saving, and paying bills on time. However, when the researchers controlled for other factors, only two sets of behaviors were associated with the financial education course. First, those soldiers who had the financial education course were more likely to know the difference between discretionary and non-discretionary spending. Second, contrary to what might be expected, those soldiers who had taken the course were less likely than the comparison group to report using a formal spending plan and more likely to report using an informal spending plan. Six of the studies we reviewed evaluated financial literacy initiatives that were not delivered in a classroom setting. These studies included assessments of credit counseling and housing counseling delivered one-on-one, counseling provided via the Internet, and content delivered through newsletters or on video. 
In general, these studies suggest that a variety of different delivery mechanisms can be effective in improving financial literacy. Examples include the following: A 2011 study compared outcomes for individuals who received face-to-face credit counseling with similarly situated consumers who opted for counseling via technological methods, such as telephone or Internet. Counseling outcomes were measured using data from participants’ credit reports 1 or more years following the original counseling. Delivery of credit counseling via the telephone or Internet was found to generate outcomes no worse than—and in some cases better than—face-to-face delivery of counseling services. A study conducted by researchers from Freddie Mac in 2001 compared the loan performance over time of homebuyers who received pre-purchase homeownership counseling with participants in the loan program who did not receive such counseling. Those borrowers who received one-on-one counseling were less likely to have a 60-day delinquency on their loans during the study period than other borrowers with equivalent characteristics who had not had counseling. However, borrowers who received counseling via the telephone or through a course of home study showed no reduction in delinquency. Increasingly, technological resources are being used to provide and evaluate financial literacy. In particular, the Internet has proved to be an important tool for disseminating information and education about financial issues to consumers, and one study found that the number of Web sites that provided financial education almost doubled between 2000 and 2005. Some organizations have used interactive video games to provide financial education, particularly for youth. For example, Junior Achievement has developed an online version of its Finance Park simulation to complement its traditional in-person interactive model. Technology can also be used to evaluate program effectiveness. 
A panel of experts convened by the New America Foundation in 2008 noted that online tools, such as interactive Web tools that allow students to set and measure their progress towards financial goals, can be used to collect data to assess the behavioral impact of a financial education program. These online tools provide flexibility to capture a number of measures on an ongoing basis for a large population. The Financial Literacy and Education Commission and many federal agencies have recognized the need for a better understanding of which programs are most effective in improving financial literacy. For example, the commission’s original national strategy in 2006 noted that more research and program evaluation were needed so that organizations are able to validate or improve their efforts and measure the impact of their work. In response, in October 2008, the Department of the Treasury and the Department of Agriculture convened, on behalf of the commission, the National Research Symposium on Financial Literacy and Education, which discussed academic research priorities related to financial literacy. The commission’s new 2011 national strategy sets as one of its four goals to “identify, enhance, and share effective practices.” The new strategy sets objectives for reaching this goal, which include encouraging research on financial literacy strategies that affect consumer behavior, establishing a clearinghouse for evidence-based research and evaluation studies, developing and disseminating tools and strategies to encourage and support program evaluation, and forming a network for sharing research and best practices. At the same time, because of fiscal constraints, the overall level of future federal resources that will be devoted to financial literacy research and evaluation is unclear. 
For example, the Social Security Administration requested no funding in its fiscal year 2012 budget justification for its Financial Literacy Research Consortium, which provides research grants to improve financial literacy and retirement planning; the consortium had been funded at about $9.2 million in fiscal year 2010 and had estimated obligations of $10 million in fiscal year 2011. Despite limited empirical evidence on the effectiveness of financial literacy programs, experts and practitioners in the field of financial literacy generally have identified certain elements that they consider desirable in almost any financial literacy program. The views of these stakeholders are not necessarily based on concrete data but rather on anecdotal evidence, experience in the field, and a broader body of research on program design and behavioral economics. For example, in 2004, Treasury’s Office of Financial Education and Financial Access published a list of the elements of a successful financial education program, which was intended to guide financial education organizations in developing programs and strategies. Similarly, in 2005, the Organization for Economic Cooperation and Development issued a set of principles and good practices to help guide financial education and awareness programs. Some nongovernmental organizations have also developed recommended practices for financial literacy programs. For example, the Jump$tart Coalition for Personal Financial Literacy has developed best practices for personal finance education materials. Based on the guidelines of these organizations and our interviews with experts and practitioners, the following elements are considered desirable for successful financial literacy programs:

Content that is relevant and timely. Financial literacy programs may be more effective if they are relevant to their target audience. For example, people need different kinds of financial information at different phases of their lives. 
College students may need to learn how to be prepared to enter the workforce, working adults may need information on managing credit and investing for retirement, and retirees may need information on managing their retirement funds. In our 2004 forum on financial literacy, experts noted that financial education is most effective when it comes at the right time—that is, at the “teachable moments” that occur when the information is applicable to events in a person’s life. Some experts have argued that financial education should be linked to specific products and programs—for example, embedded into government income support programs.

Delivery methods that are appropriate for the audience or topic. While financial education programs can be delivered in a broad variety of formats, a program may be more effective if its delivery method is adapted so that it is appropriate to its target demographic, engaging to participants, and well-suited to the objectives of the program. A 2010 panel of experts convened by the National Endowment for Financial Education highlighted the importance of tailoring the delivery method for financial education to the audience and the program, noting that individuals possess varying levels of financial knowledge and that these differences need to be taken into account in program design. For example, many experts have said that youth programs can be more effective when they include a hands-on activity, such as a simulation, which can make the information more true-to-life and relevant to the participants. Similarly, research indicates that young adults may prefer to receive financial education through the Internet.

Accessibility and cultural sensitivity. Programs should be accessible to the population they seek to serve. Many stakeholders noted the importance of offering education at times and locations that are convenient to the target audience. 
Further, the success of a program can depend on content that is understandable and culturally sensitive. As we have reported in the past, cultural differences can play a role in financial literacy and the conduct of financial affairs because different populations have dissimilar norms, attitudes, and experiences related to managing money. In addition, a report by the Lutheran Immigration and Refugee Service states that existing financial literacy and education materials often do not effectively serve some immigrant populations because they do not incorporate linguistic idioms and cultural values, such as gender roles and religious beliefs. Use of partnerships. Developing partnerships among organizations involved in delivering financial education can have several benefits, including making more efficient use of scarce resources, facilitating the sharing of best practices, and effectively reaching targeted populations. For example, when Freddie Mac was developing and implementing its CreditSmart program, which initially was geared toward the African-American community, it partnered with five historically black colleges and universities. Program representatives told us that using these trusted intermediaries contributed to the program’s effectiveness. In addition, partnerships can help connect appropriate content with an effective delivery mechanism. For example, financial institutions, which have expertise in money matters, sometimes provide financial education content to schools, which can serve as an efficient means of directing that content to students. Program evaluation. An evaluation component, ideally built into a financial literacy program, helps to determine whether programs are having a positive impact on participants’ attitudes, knowledge, or behaviors. 
Effective evaluation often depends on establishing specific goals and identifying performance measures that can be used to track progress toward meeting goals, according to stakeholders at Treasury and other organizations. As previously discussed, given the resources required for evaluation, the extent to which program impact can be tracked and measured may vary based on the nature and scope of the individual program. Trained and competent providers. As we have previously reported, teacher quality is an important school-level factor influencing student learning. However, a 2009 study sponsored by the National Endowment for Financial Education found that less than 20 percent of teachers and prospective teachers reported feeling very competent to teach the personal finance concepts surveyed, including money management and saving. To help offset this lack of subject matter expertise, guidelines from the Organization for Economic Cooperation and Development recommend that specific financial education materials and tools be provided to the teachers. The Jump$tart Coalition for Personal Financial Literacy has encouraged that financial education materials provided to teachers include a number of specific elements, including student learning objectives and assessment tools, background information, lesson plans, and activities. Sustainability. Programs should have the necessary resources for long-term sustainability and success. Treasury’s Office of Financial Education and Financial Access has noted that a successful financial literacy program should be developed for long-term success, as evidenced by characteristics such as continuing financial support, legislative backing, or integration into an established course of instruction. Financial education may not be the only approach—or necessarily always the best approach—for improving consumers’ financial behavior. 
As noted earlier, generally the goal of a financial literacy program is to improve a consumer’s financial behavior or produce positive outcomes, such as participation in a retirement savings plan, timely repayment of credit, or the opening of a deposit account in lieu of using a check-cashing service. One tool for achieving such outcomes is financial education. However, alternative strategies or mechanisms, sometimes in conjunction with financial education, have also been successful in improving financial behavior. Insights from behavioral economics, which blends economics with psychology, have been used to design strategies apart from education to assist consumers in reaching financial goals without compromising their ability to choose approaches or products. These strategies recognize the realities of human psychology, including procrastination and inertia, inability to stick to plans, difficulty in processing complex information, and the desire for conformity. Literature we reviewed indicated that strategies for improving consumer financial behavior or outcomes that were alternative or complementary to traditional financial education can be effective. Examples of such strategies include the following: Changing the default option. A default is the choice people make when they do not deliberately choose an alternative. Because people are prone to inertia and procrastination, the default option often becomes the most common choice when making financial decisions. For example, in recent years, some employers have adopted automatic enrollment policies for their defined contribution plans—retirement plans under which participants accumulate retirement savings in individual accounts, such as a 401(k) plan. Under automatic enrollment, workers are enrolled into the plan automatically, or by default, unless they explicitly choose to opt out. As we have previously reported, studies have shown this mechanism to be effective for increasing participation in retirement plans. 
For example, one study of employees hired before and after their company adopted automatic enrollment found that the retirement plan participation rate of those hired before automatic enrollment was 37 percent at 3 to 15 months of tenure, compared with 86 percent for the group hired after. Using commitment mechanisms. Strategies that commit people to specific actions in the future can be an effective way of influencing behavior. For example, a program called Save More Tomorrow asked employees to commit to increasing their retirement plan contribution rates well in advance of each scheduled pay increase. The program sought to use this commitment mechanism to help employees who would like to save more but lack the willpower to act on this desire. An evaluation of this program found that 78 percent of employees offered the program joined, and 80 percent of those who joined remained in the program for several pay raises, with their savings rate increasing, on average, by 10 percentage points over a period of 40 months. Using monetary incentives. Using incentives with tangible monetary benefits can also be effective in changing behavior. For example, studies have shown that employees are more likely to contribute to a retirement plan if their employer provides matching contributions, and the amount that an employee contributes to a plan can be influenced by the formula for the matching contribution. Research shows that programs that offer monetary matches can provide concrete rewards that encourage individuals to take specific actions. In one experiment, low- and middle-income clients of a tax return preparation firm were randomly offered a match of 0, 20, or 50 percent on their tax refunds that would be contributed to an individual retirement account. Higher matches, combined with information received from tax professionals, raised the participation rate in the savings plan and the amount of the contribution. 
Similarly, an experiment compared a random selection of eligible lower-income people who received individual development accounts—which provide a match for savings made for certain purposes—with a control group that was not offered these accounts. Four years into the program, the individual development accounts increased homeownership rates of prior renters by 7 to 11 percentage points relative to the control group. However, the study found that there was almost no impact on other targeted uses, such as post-secondary education or retirement savings. In addition, a follow-up study conducted 10 years after the start of the program found that the homeownership rates for those who did not receive access to the individual development accounts were similar to those who did, suggesting that the benefits diminished over time. Simplifying financial decisions. Reducing the complexity of financial information provided to consumers and simplifying the choices they need to make can motivate consumers to take action. A few studies have shown that more investment options are correlated with reduced participation in participant-directed retirement plans, possibly because of too many choices or information overload. Further, as we have noted in prior reports on Social Security information and credit card disclosures, certain practices help people understand complicated information, such as writing information in clear language, using straightforward layout and graphics, and making options easy to compare in a single document. In one experiment, newly hired staff at an orientation seminar randomly received either a standard packet of information on supplemental retirement accounts, an additional planning aid designed to simplify enrollment, or an even simpler planning brochure. Simpler planning information was associated with significantly higher participation rates in retirement accounts, with enrollment rates for the three groups of 7, 21, and 27 percent, respectively. 
Leveraging the peer effect. People are often more comfortable making a choice when they know that others in their peer group have made the same choice. Incorporating individuals’ tendency to want to follow their peers can help motivate consumers to take action. In an experiment conducted at a large university, a random sample of employees in certain departments was promised a monetary reward for attending a benefits fair that presented information about tax-deferred retirement accounts. Employees were more likely to attend the fair—and ultimately to participate in the retirement plan—if colleagues in their department received a monetary reward, even if the employees themselves received no such reward. Another study found that an effective tool for increasing participation in retirement accounts was to present videos encouraging participation that included fellow employees with certain characteristics similar to the target audience. Much of the literature and the experts we spoke with have noted that these various strategies to improve consumers’ financial behavior and subsequent outcomes should not be viewed as a substitute for financial education but rather as a complement to it. The most effective approach to improving consumers’ financial decision making and behavior may be to use a variety of these types of strategies in conjunction with financial education. If the federal government were to develop a process for approving or certifying financial literacy providers, a variety of approaches could be taken. At present, the federal government does not have a process for approving or certifying most organizations that provide financial education, with two notable exceptions. As previously mentioned, the U.S. Trustee Program approves credit counseling agencies and debtor education providers to meet requirements of the U.S. Bankruptcy Code. 
In June 2005, the Trustee Program established its Credit Counseling and Debtor Education Unit to implement new statutory provisions. Approximately 166 credit counseling agencies and 265 debtor education providers were approved by the Trustee Program as of March 2011. In addition, since 1968 HUD has had a process for approving housing counseling agencies through its Housing Counseling Program, and as of April 2011, there were 2,758 agencies participating in the program, of which HUD had approved 1,047. These agencies provide a variety of housing counseling services and are the only ones that can provide counseling to meet the mandatory counseling requirements of certain housing programs, such as the Federal Housing Administration’s Home Equity Conversion Mortgage Program. Some nongovernmental entities also have certification processes or confer designations that are related to financial literacy. For example, the Institute for Financial Literacy—a nonprofit organization that provides financial literacy information and services—has recently implemented an accreditation process for organizations that provide financial education, which is based on standards it has developed. Some professional and trade organizations also confer designations—such as Certified Financial Educator—to individuals to indicate that certain examination, educational, or other requirements have been met. Some designations require a certification examination; an accredited degree, training, or relevant experience in the financial services industry; and continuing education. The existence of the Trustee Program’s and HUD’s approval processes for credit counseling and debtor education and housing counseling organizations, respectively, suggests that it would be feasible for the federal government to implement an approval or certification process that would encompass financial literacy providers more broadly. 
However, initiating and developing such a process would require that Congress or the relevant federal agency or agencies address a number of issues, including the goals of the program, who would administer the process, what type of providers it would cover, what criteria or standards would apply to providers, and what degree of ongoing oversight would be put in place: What are the goals of the certification process? As we have reported in the past, defining a program’s mission, strategic goals, and desired outcomes is critical. The scope, structure, and design of any certification process for financial literacy providers would depend on what it set out to achieve. For example, a certification process whose primary goal was to protect consumers from low-quality or unscrupulous providers might have different characteristics and design from a process whose primary goal was to promote public awareness of financial education. What entity would administer the certification? A federal agency could operate a certification process directly or, alternatively, it could oversee or charter a nongovernmental entity to do so. Some stakeholders in the field of financial literacy told us that if a federal entity were to take on this responsibility, the Department of the Treasury, the Financial Literacy and Education Commission, or the Consumer Financial Protection Bureau would be plausible candidates. One representative of a federal agency suggested that several federal agencies could be involved, certifying providers that cover the topics or address the target audience under each agency’s purview. Another model would be for the federal government to charter a nongovernmental intermediary that would implement the certification, with a federal agency overseeing that intermediary. This would be similar to HUD’s process of approving intermediary organizations that then oversee and provide subgrants to branches and affiliates that provide the actual counseling to consumers. 
For example, NeighborWorks America, a federally chartered nonprofit corporation with its own nationwide network, receives federal funds to provide grants, training, and technical assistance to agencies that provide housing counseling. As a HUD-approved intermediary organization, NeighborWorks must ensure that its affiliates meet the criteria for HUD approval; HUD does not approve each affiliate independently. What entities would be covered? A wide range of entities provide some form of financial education, including community-based organizations, large national nonprofits, trade and professional associations, credit counseling agencies, colleges and universities, credit unions, and private companies. Further, some of these entities provide broad financial education, while others focus on very specific topics. One step in developing a federal process for certifying financial literacy providers would be to determine the scope of the entities that would be eligible. Some stakeholders with whom we spoke noted that trying to encompass all types of financial literacy providers could be unrealistic. For example, applying consistent criteria and standards among programs using very different approaches and delivery mechanisms would be difficult. One representative of a federal agency suggested that there be separate certification processes based on the topics covered. The Institute for Financial Literacy, according to its representatives, has opted for a broader scope in developing its organizational accreditation, which is open to organizations that provide financial education either exclusively or as part of a wider range of services, in which case only the relevant activities are accredited. What criteria would be used? Criteria would need to be developed for determining the certification of financial literacy providers. These criteria could include financial soundness, governance structure, size, populations served, reputation, and nonprofit status, among others. 
Criteria could also address the expertise and capacity of providers, including years of experience and staff knowledge in economics and personal finance education. Some stakeholders told us that for-profit companies that market or sell financial products should be ineligible, presumably because they may not provide unbiased information or may be more likely to use financial education to help sell products. Along these lines, only nonprofit organizations and units of government are eligible to become HUD-approved housing counseling agencies. By contrast, the Bankruptcy Code does not require entities approved to fulfill the debtor education requirement to be nonprofits, although it does require approved credit counseling agencies to be nonprofits. Some bank representatives told us that, within their industry, many entities provide financial education as a legitimate community service and do not use it to market products. One federal agency noted that a code of ethics could also be included as part of the certification process to help address these issues. Should certification include content standards? One option for certification would be to require that certified providers include in their programs certain content standards, such as specific topics that must be covered, or to require that certain core competencies be addressed. Such standards could provide consistency and quality in the program content offered by certified financial literacy providers. For example, one financial literacy advocate told us that such standards would help teachers identify high-quality content for financial education incorporated into classroom instruction or after-school programs. The Trustee Program’s interim final rule on procedures and criteria for debtor education providers specifies the topics that must be covered in the personal financial management instructional course required of bankruptcy filers prior to discharge of their debt. 
HUD’s Housing Counseling Program Handbook states that HUD has the option of requiring, promoting, or incentivizing the adoption and implementation of housing counseling and education standards. However, HUD does not generally specify the content its approved housing counseling agencies must cover. An alternative to specific content standards would be to certify curricula or programs in lieu of providers. The certifying entity would need to assess those curricula periodically to determine that the information offered to consumers is accurate, up-to-date, and relevant. What level of oversight would be conducted? A federal process for certifying financial literacy providers would likely require some form of oversight to help ensure continued compliance with any statutory or program requirements. The level of oversight for certified entities could be fairly limited, such as a simple reporting requirement on activities performed. Alternatively, oversight could be more comprehensive and include such things as more detailed reporting requirements, complaint resolution, quality reviews, and administrative proceedings to remove entities when necessary. In addition, providers could be required to reapply regularly. For example, the Trustee Program requires approved credit counseling agencies and debtor education providers to reapply annually, and HUD assesses approved housing counseling agencies for reapproval at least every 3 years. Some representatives of federal agencies and organizations that provide or advocate for financial literacy cited potential benefits that could result from implementing a federal process for certifying financial literacy providers: Improve overall quality. A federal certification process could potentially improve the quality of organizations that chose to apply for certification and would need to meet a certain set of qualifications and standards. 
For example, Trustee Program officials told us that their approval process for financial education providers for the purposes of the bankruptcy process may have encouraged higher standards among those providers. In addition, a certification process could raise the quality of the financial education community overall. For example, HUD officials noted that their Housing Counseling Program has helped set a standard for the industry as a whole. Encourage greater program evaluation among providers. A certification process could help to increase program evaluation efforts by encouraging provider organizations to assess their ability to meet certification standards and by requiring certified providers to report on outcomes. Representatives from one financial literacy organization told us that organizations that are interested in continuous improvement could benefit from such a process. Help consumers identify competent providers. From the consumer standpoint, a federal certification of financial literacy providers could serve as a federal “stamp of approval.” Representatives from one trade association told us that certification could assist consumers and others in distinguishing among providers. Increase public awareness. A federal certification process could help draw public attention to the issue of financial literacy. Potentially, it could give providers additional visibility, which could raise the profile of financial literacy and encourage consumers to seek out these resources. Weed out poor-quality providers. Federal certification could help to weed out poor-quality or abusive financial literacy providers, according to a few stakeholders with whom we spoke, presumably because consumers might avoid providers that had not been certified. Aid in building capacity. Federal certification possibly could aid some financial literacy providers in garnering outside funding from foundations or other sources that they rely on for support. 
Recognition by a federal agency could provide legitimacy to nonprofit organizations that could help them leverage other resources. In addition, two financial literacy stakeholders suggested that the federal agency overseeing certification could serve as an information clearinghouse for providers. This could allow them to more readily access information on best practices, financial education resources, and the results of research on financial literacy issues. Certification might also provide networking opportunities among certified providers, who might share information and resources among themselves. A federal certification process for financial literacy providers would face certain challenges and potential downsides. Most notably, developing, implementing, and operating a federal process for certifying financial literacy providers would involve financial costs and staff resources for the federal agency administering the process. While each certification or approval process is unique, the experiences of the Trustee Program and HUD may offer insights into the potential resources that a broader certification process for financial literacy providers might entail. The Trustee Program spent $6.1 million between fiscal years 2005 and 2007 to develop its Credit Counseling and Debtor Education Unit, which was created in 2005 to administer the approval of credit counseling agencies and debtor education providers. In fiscal year 2010, the Trustee Program spent $1.6 million in salaries and benefits for the unit, according to agency officials. The number of full-time equivalent staff assigned to the unit between fiscal years 2007 and 2010 ranged from 13 to 18, with field staff assisting on a rotational basis. For fiscal year 2011, 11 full-time equivalent staff had been assigned to the unit. 
These staff have been responsible for developing application forms and procedures, approving and monitoring credit counseling agencies and debtor education providers, and taking steps to help ensure that filers were meeting requirements. Because approved entities must submit an application each year, staff review hundreds of applications and reapplications annually, according to agency officials. The officials told us that based on their experience, any federal government process requiring periodic review and enforcement would require substantial resources. In addition, the rulemaking process related to approving credit counseling agencies and debtor education providers has been lengthy. For example, the Trustee Program is still using the interim final rules it proposed in July 2006. While it issued proposed rules in 2008, as of May 2011, neither final rule had been approved. HUD has estimated that the cost of administering its Housing Counseling Program will be $18.8 million for fiscal year 2012, with the majority going toward salaries and benefits. This amount does not include the grants that HUD makes to some of those agencies. Estimates for prior years were not readily available, according to HUD, because until recently the cost of administering the housing counseling program was not segregated. Because responsibilities for the Housing Counseling Program are spread across the agency, HUD officials did not provide an exact number for full- time equivalent staff devoted to approving and overseeing housing counseling agencies. However, they estimated that approximately 200 staff members nationwide have significant responsibilities within the program. Those responsibilities include collecting and reviewing applications, processing reapprovals, monitoring approved agencies, and providing them with education and outreach. 
HUD staff conduct regular reviews of approved agencies—which can include conducting onsite visits—to determine if their performance meets program standards and requirements or to address risk-related issues. In December 2004, HUD first published proposed rules that set forth the eligibility requirements, performance standards, and administrative procedures required of approved housing counseling agencies. The final rule became effective in October 2007. HUD staff told us that the initial development of the approval process for housing counseling agencies was relatively resource-intensive. HUD’s handbook for the program provides guidance to its staff and to program participants, including the branches, affiliates, or sub-grantees of approved intermediaries. HUD also recently created standard operating procedures for staff to follow in conducting performance reviews. The Dodd-Frank Act established an Office of Housing Counseling within HUD, but federal budget constraints could delay its establishment and reduce the scale of HUD’s activities. As noted earlier, some financial literacy stakeholders suggested that if a federal certification process is to be implemented, the financial education offices of either Treasury or the Consumer Financial Protection Bureau could be among the appropriate choices to implement this process. According to a Treasury official, the Office of Financial Education and Financial Access within Treasury has an allocation of six full-time equivalent staff for fiscal year 2011. The level of staff needed to operate a program for certifying financial literacy providers would clearly depend on the specific scope and nature of the program, but current staffing levels at Treasury’s financial education office would likely be insufficient to take on such a responsibility. According to staff at the Consumer Financial Protection Bureau, its Office of Financial Education was still being staffed as of May 2011. 
While viewpoints varied, in general, a majority of the representatives of nonprofit and private sector financial literacy organizations, academic experts, and representatives of federal agencies with whom we spoke believed that the disadvantages of implementing a federal certification process for financial literacy providers outweighed the advantages. While such a process would be feasible, many stakeholders commented that it might not be the most productive use of the scarce federal resources available for financial literacy. In addition to the federal resources that would be required, several other challenges, disadvantages, and other factors were cited: There would be administrative costs for the entities being certified. Representatives of financial literacy organizations and others noted that applying for and maintaining federal certification would result in some administrative cost and burden for the participating organizations. Our review of public comments submitted in response to the Trustee Program’s 2008 proposed rules found that some participating organizations noted the administrative burden caused by the requirements for the credit counseling and debtor education approval process, and one organization noted that it dedicated more than 100 employee hours each year to complete its application. The resources needed for administrative requirements such as these could act as a barrier to participation in any certification process for certain financial literacy providers—particularly smaller, community-based organizations. Financial literacy providers are highly diverse. Financial literacy is a wide-ranging field covering many different types of organizations, topics, and delivery mechanisms. For example, financial education can be provided in one-on-one counseling, in a classroom setting, via the Internet, as a set of curricula, or via broadcast or print media. 
A single uniform certification process covering financial literacy providers as a whole may be impractical or inappropriate. Moreover, the varying nature of providers and programs could require that certification include multiple processes. Whether certification would improve provider quality is unclear. Several stakeholders with whom we spoke questioned whether a federal certification process for financial literacy providers would help distinguish between higher-quality and lower-quality providers. They also noted that some high-quality providers might not even apply for certification if the benefit was not clear to them or the administrative burden appeared significant. Further, one stakeholder raised concern that the criteria required for financial literacy providers to be certified would create a “floor” of basic qualifications rather than actually serve to promote high standards. As we reported in 2009, there were issues related to counseling provided by HUD-approved housing counseling agencies for HUD’s reverse mortgage program. We found that HUD’s internal controls did not provide reasonable assurance that counseling providers were complying with program counseling requirements and, as a result, some prospective borrowers may not have been receiving the information needed to make informed decisions about obtaining a reverse mortgage. Whether consumers would recognize or use the certification is unclear. Several stakeholders were skeptical that many consumers would select a financial literacy provider based on whether or not the provider had been federally certified. For example, staff at one federal agency noted that a certification process in and of itself would not necessarily result in greater consumer confidence in the advice they receive from certified providers. A certification process may not weed out bad actors. 
One potential goal of federal certification of financial literacy providers would be to help weed out unqualified or unscrupulous providers, but how certification would achieve that goal is not clear.

Financial literacy certification may not be an appropriate role for the federal government. Several stakeholders questioned whether certifying financial literacy providers is an appropriate role for the federal government. In addition, staff at two federal agencies noted that the federal government should be prudent about certifying organizations because the certification could be misrepresented as an endorsement beyond what certification actually signified—that the organization met certain prescribed criteria.

There is a lack of consensus on what is effective in improving financial literacy. As discussed earlier, the most effective ways of improving consumer financial literacy are still not fully known. Several financial literacy experts noted that there is not yet consensus or consistency within the field on specific standards or core concepts that financial literacy programs should include. As a result, certifying financial literacy providers may be premature.

Some representatives of nonprofit and private sector financial literacy organizations, academic experts, and representatives of federal agencies with whom we spoke noted that there may be alternatives to a federal certification process that could still help achieve some of the same goals. For example, federal agencies could develop voluntary national standards or continue to promote core competencies and leading practices, such as those that have been identified by the Financial Literacy and Education Commission. Another potential option would be to require financial literacy provider organizations receiving federal funds to adhere to specific guidelines, which could address such areas as the information that organizations provide to consumers.
Some stakeholders also noted that in lieu of a certification process, the federal government might promote provider competency more directly, such as by offering or funding additional training or technical assistance.

We provided a draft of this report for review and comment to the Consumer Financial Protection Bureau, Department of Justice, FDIC, Federal Trade Commission, HUD, Securities and Exchange Commission, and Treasury. We incorporated technical comments from these agencies as appropriate. In addition, the Consumer Financial Protection Bureau provided a written response, which is reprinted in appendix III. The bureau noted the responsibilities it was given under the Dodd-Frank Act to promote financial education, with the overarching goal of improving consumers’ ability to make informed choices in the financial services marketplace. The bureau said it believed that before any decision to create a federal financial literacy certification program could be made there would need to be additional exploration of the program’s pros and cons, goals, potential methods, and alternatives.

We are sending copies of this report to the appropriate congressional committees, Consumer Financial Protection Bureau, Department of Justice, FDIC, Federal Trade Commission, HUD, Securities and Exchange Commission, Treasury, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Our objectives were to examine (1) what is known about which methods and strategies are effective for improving financial literacy, and (2) the feasibility of a process for certifying financial literacy providers and the benefits and challenges of doing so. For the purposes of this report, financial literacy providers generally refers to organizations, rather than individuals, and excludes entities that provide individualized advice for compensation, such as investment advisers or financial planners. In addition, our examination of a potential certification process focused on a process that would be operated or overseen by the federal government.

To address our first objective, we conducted a literature search to identify studies, reports, and articles related to the effectiveness of financial literacy and education efforts. We identified these documents through a search of ProQuest and ECO databases, which was augmented with a general Internet search based on key words to link financial literacy and education with effectiveness. We also asked for recommendations for papers from academic experts and from representatives of organizations that we interviewed, and we used the bibliographies of the studies we reviewed to identify additional studies. We categorized the identified studies based on their relevance to our objective and other characteristics. We limited our search to studies published since 2000 to help ensure that the material was still relevant. The focus of our search was on documents that addressed the effectiveness of financial literacy initiatives or programs and methods of evaluation; we generally excluded from our search documents that included only broader discussions of financial literacy or the extent to which consumers are financially literate.
In addition, we reviewed papers that addressed the effectiveness of strategies other than financial education for improving consumer behavior, as well as papers that addressed the application of behavioral economics to financial literacy and behavior. We limited our review to published works that were authored by academic researchers, think tanks, government agencies, or private or nonprofit organizations that we assessed to have a reasonable degree of experience or expertise in the field of financial literacy and education. We performed our searches from September 2010 to May 2011. In total, we reviewed 142 studies that were identified through this search.

We then screened these studies to identify those that met the following additional criteria: (1) represented original research (as opposed to a review of existing research); (2) used empirical evidence—that is, used data rather than anecdotal information; (3) evaluated the outcomes of a specific program, approach, or policy; and (4) were determined by a GAO methodologist to be sufficiently relevant and methodologically rigorous for inclusion in our report. While we attempted to be thorough in our search methods, the 29 studies that met these criteria may not reflect all published studies that exist and meet these criteria, and do not reflect any studies that may exist that were unpublished or were not readily accessible. Of these 29 studies, 12 were published in peer-reviewed journals.

In addition to these studies, we reviewed other studies and papers that addressed strategies for improving financial literacy that are separate from financial education (such as changes in retirement default options). We deemed these works sufficiently reliable for our purposes because they were published in peer-reviewed academic journals, were written by noted experts in financial literacy, or were widely cited in the field of financial literacy and education.
We also conducted interviews with—and obtained documentation as applicable from—representatives of federal agencies whose missions involve consumer education and protection, including the Consumer Financial Protection Bureau, Federal Deposit Insurance Corporation, Board of Governors of the Federal Reserve System, Federal Trade Commission, Department of the Treasury, and Securities and Exchange Commission; the Financial Industry Regulatory Authority; nonprofit organizations that provide or advocate for financial literacy and education, including AARP, American Association of Family & Consumer Sciences, Consumer Federation of America, Employee Benefit Research Institute, Institute for Financial Literacy, Jump$tart Coalition for Personal Financial Literacy, Junior Achievement, National Endowment for Financial Education, National Foundation for Credit Counseling, and New America Foundation; one international organization, the Organization for Economic Co-operation and Development; and one financial services company, Freddie Mac. In addition, we held interviews with representatives of the American Bankers Association and the Credit Union National Association, and we also held group interviews with representatives of individual community banks and credit unions that are members of those entities. We also interviewed six academic researchers who focus on financial literacy.

To address our second objective, we conducted an Internet search for articles, studies, or position papers related to the feasibility of a process for certifying financial literacy providers. In addition, we solicited views on the feasibility of such a process from the representatives of federal agencies, nonprofit organizations that educate or represent consumers, financial institutions, and other organizations cited above, as well as the six academic experts with whom we spoke.
Using a semi-structured interview approach, we gathered their views on the potential advantages, disadvantages, and challenges of a certification program, as well as options for how it might be structured and implemented, which federal entity might be responsible for it, and how it might be overseen. We also reviewed documentation from and interviewed representatives of two nonprofit organizations, the Institute for Financial Literacy and the American Association of Family & Consumer Sciences, both of which have developed programs for certifying individuals or organizations that provide financial education.

In addition, for illustrative purposes, we gathered information on two existing processes within the federal government for approving organizations that provide some form of financial education. These were the processes conducted by (1) the Department of Justice’s U.S. Trustee Program for approving credit counseling agencies and debtor education providers to meet certain requirements of the Bankruptcy Code, and (2) the Department of Housing and Urban Development (HUD) for approving housing counseling agencies under the Housing Counseling Program. We reviewed relevant documents related to these processes, including application forms, final and proposed rules, and program handbooks and guidance. In addition, we requested from the Trustee Program and HUD their estimated expenditures and staffing levels in fiscal years 2010 and 2011 related to the approval and oversight of providers under their respective programs, and data on the number of providers participating in their credit counseling and debtor education and housing counseling programs, respectively. We also obtained information from the Department of the Treasury and the Consumer Financial Protection Bureau on staffing levels for their financial education offices.
We interviewed agency staff with program responsibility and discussed their methods for compiling the data, and we determined that these data were sufficiently reliable for our reporting purposes. For the Trustee Program, we also used information that we had collected for a prior report on the costs associated with its credit counseling and debtor education program during fiscal years 2005 through 2007. We also interviewed representatives of the Trustee Program and HUD to learn of their agencies’ experiences in developing and implementing their approval processes, and to gather their views on the benefits and challenges that might be faced if a federal entity were to undertake an approval or certification process for a broader class of financial literacy providers.

We conducted this performance audit from September 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix includes studies of evaluations of financial literacy programs that met our criteria for inclusion in our in-depth review of selected studies. Table 1 provides an overview of the 29 studies, their authors, type of program covered, program or approach evaluated, evaluation method, and key findings.

In addition to the contact named above, Jason Bromberg (Assistant Director), Bernice Benta, Tania Calhoun, Daniel Newman, Jennifer Schwartz, Andrew Stavisky, and Seyda Wentworth made key contributions to this report.
Financial literacy plays an important role in helping ensure the financial health and stability of individuals and families, and efforts to improve consumers' financial literacy have grown in recent years. Currently, hundreds of nonprofit, private, and governmental entities provide some form of financial education to Americans. The federal government does not, in general, certify or approve organizations that provide financial literacy education, although the U.S. Trustee Program and the Department of Housing and Urban Development (HUD) have approval processes for financial literacy providers for the purposes of meeting requirements of, respectively, the bankruptcy process and certain housing programs.

In response to a mandate in the Dodd-Frank Wall Street Reform and Consumer Protection Act, this report addresses (1) what is known about which methods and strategies are effective for improving financial literacy, and (2) the feasibility of a process for certifying financial literacy providers. To address these objectives, GAO reviewed relevant literature, focusing on evidence-based evaluations of financial literacy programs or approaches; conducted interviews in the federal, nonprofit, private, and academic sectors; and examined the lessons learned from the approval processes of the Trustee Program and HUD.

Relatively few evidence-based evaluations of financial literacy programs have been conducted, limiting what is known about which specific methods and strategies are most effective. Financial literacy program evaluations are most reliable and definitive when they track participants over time, include a control group, and measure the program's impact on consumers' behavior. However, such evaluations are typically expensive, time-consuming, and methodologically challenging. GAO's review of 29 evidence-based studies evaluating specific programs or approaches indicates that several have been effective in changing consumer knowledge or behavior.
For example, several of these studies showed that individualized one-on-one credit counseling, employer-provided retirement seminars, and education provided in a classroom setting have produced positive outcomes. However, the diversity of these programs and their evaluation methods makes drawing generalizable conclusions difficult. As a result, it appears that no one approach, delivery mechanism, or technology constitutes best practice, but there is some consensus on key common elements for successful financial education programs, such as timely and relevant content, accessibility, cultural sensitivity, and an evaluation component. In addition, several mechanisms and strategies other than financial education have also been shown to be effective in improving consumers' financial behavior, including financial incentives or changing default options, such as through automatic enrollment in employer retirement plans. The most effective approach may involve a mix of financial education and these other strategies.

While a federal process for certifying financial literacy providers appears to be feasible, doing so would pose challenges. Initiating and developing such a process would necessitate that Congress or federal agencies determine which entity would administer the certification, the types of providers that would be covered, the degree of oversight required, and other aspects of the process. Some financial literacy stakeholders with whom GAO spoke cited potential benefits to federal certification. For example, some noted that it might help improve the quality of financial education providers, help consumers identify competent providers, or create greater public awareness about financial education. However, as the experiences of the Trustee Program's and HUD's approval processes show, federal certification would require financial and staff resources for administering the process.
Moreover, most financial literacy stakeholders with whom GAO spoke cited additional concerns, including the potential cost and administrative burden to certified entities, the challenge of creating a single process for certifying such a diverse field, and skepticism that certification would improve the quality of financial education providers. Further, the lack of consensus about which financial literacy strategies and approaches are most effective would make certification challenging.
The Department of Defense (DOD) is in the process of realigning and closing military installations. An initial major round of installation realignments and closures occurred in 1988, subsequent rounds followed in 1991 and 1993, and another round is scheduled for 1995. Congress has expressed concern that environmental cleanup issues related to past activities at these installations are significantly affecting DOD’s ability to transfer these properties to local communities. This report focuses on that issue; however, other factors—disagreements between federal agencies, local community interests, and others over reuse plans, as well as revised laws and regulations designed to improve the property disposition process—have also affected property transfers. We have reported separately on these issues for bases closed in the 1988 and 1991 rounds and are now reviewing bases closed in the 1993 round.

For decades, DOD activities and industrial facilities generated, stored, recycled, and disposed of hazardous waste, which often contaminated nearby soil and groundwater. In many instances, these problems predate existing environmental laws and regulations. Hazardous waste contamination can significantly contribute to serious illness or death or pose a hazard to the environment and is extremely expensive to clean up. Types of hazardous waste found at most DOD installations include solvents and corrosives; paint strippers and thinners; metals, such as lead, cadmium, and chromium; and unexploded ordnance. Contamination usually results from disposal, leaks, or spills.

Cleanup goals and strategies are usually site specific and depend upon the cleanup standards, exposure potential, affected population, and nature and extent of contamination. All of these determine the threat to human health and the environment. Cleanup efforts at closing installations are carried out primarily by contractors.
DOD gives the highest priority for cleanup to installations on the Environmental Protection Agency’s (EPA) National Priorities List (NPL), a register of the nation’s worst known hazardous waste sites, and to those scheduled to realign and close.

The Defense Authorization Amendments and Base Closure and Realignment Act (P.L. 100-526), enacted on October 24, 1988, established a bipartisan commission to make recommendations to Congress and the Secretary of Defense on base closures and realignments and specified the conditions and authorities to implement these actions. The Defense Base Closure and Realignment Act of 1990 (Part A of title XXIX of P.L. 101-510) also created an independent commission that would meet during calendar years 1991, 1993, and 1995 to review additional installations DOD recommended for realignment and closure. DOD is carrying out the approved installation closures and realignments and is reviewing installations to recommend for realignment and closure for the 1995 round. Figure 1.1 summarizes DOD information on installations and activities designated for closure and realignment in 1988, 1991, and 1993. We have reported separately on the recommendations and processes for each of these rounds.

Federal property that is no longer needed is not automatically sold. The Federal Property and Administrative Services Act of 1949 requires a screening process to determine if property can be transferred to another government or nonprofit agency. DOD first screens excess property for possible use by other DOD agencies and then by other federal agencies. If no federal agency needs the property, it is declared surplus to the federal government and is made available to nonfederal parties, including state agencies, local agencies, agencies caring for homeless people, public benefit agencies, or the general public. Also, federal agencies, including DOD, must comply with environmental laws and regulations when disposing of real property.
Pertinent environmental laws include the following:

The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) (42 U.S.C. 9601), also known as Superfund, authorizes the federal government to respond to spills and other releases or threatened releases of hazardous substances, as well as to leaking hazardous waste dumps. CERCLA provides the framework for responding to contamination problems. It requires that the government warrant that all remedial action necessary to protect human health and the environment has been taken before property is transferred by the United States to any other person or entity, such as communities or private parties.

The Resource Conservation and Recovery Act of 1976 (RCRA) (42 U.S.C. 6901) was enacted to ensure that solid wastes are managed in an environmentally sound manner.

The Federal Facilities Compliance Act (42 U.S.C. 6901 note) amended RCRA and provides that federal facilities may be subject to federal, state, and local penalties for environmental violations. It also establishes specific requirements for waste generated by the Department of Energy and DOD.

The National Environmental Policy Act of 1969 (42 U.S.C. 4321) governs the environmental assessments and impact statement preparation for the disposal and reuse of base closure and realignment installations.

CERCLA and RCRA govern much of the environmental and closure-related activities at base realignment and closure, or BRAC, installations. In compliance with CERCLA, EPA reviews DOD information to determine if the installation should be included on the NPL. The CERCLA process consists of several stages and may apply to any waste source and site containing hazardous substances at BRAC installations. (See app. I.) EPA does not have authority to delegate CERCLA enforcement to the states. However, CERCLA does call for substantial involvement by each state in initiating, developing, and selecting remedial actions to be taken.
RCRA is designed to ensure that solid waste is managed in an environmentally sound manner and establishes a framework for managing hazardous waste. All BRAC installations are subject to RCRA because of practices that generated, stored, treated, or disposed of hazardous waste. RCRA, as amended in 1992 by the Federal Facilities Compliance Act, directed EPA to conduct annual inspections of federal facilities. RCRA allows EPA to authorize states to conduct equivalent state programs; in these cases, the states have the primary responsibility for implementing corrective actions at a base that is designated as a treatment, storage, or disposal facility. States with an authorized hazardous waste program may inspect a federal facility to enforce compliance with state hazardous waste programs.

DOD established the Installation Restoration Program in 1975 to study and clean up contaminated sites. In 1984, this program was made part of the Defense Environmental Restoration Program, and Congress provided funding through the Defense Environmental Restoration Account (DERA). In the 1990 base closure law (P.L. 101-510), Congress began providing separate cleanup funding for closing and realigning installations under the BRAC account. In May 1993, DOD created the Under Secretary of Defense (Environmental Security) position to oversee cleanup and other environmental efforts.

In July 1993, the administration expressed concern that closing military installations had been cumbersome and slow, with environmental cleanup and other processes taking many years to complete. At that time, it announced a five-part program to help accelerate cleanup and community reuse of closing installations.

The former Chairman, Subcommittee on Environment, Energy and Natural Resources, House Committee on Government Operations, and Representative George Miller, California, requested us to review DOD’s environmental cleanup efforts at installations being closed under the BRAC process.
Specifically, they asked us to review issues related to the (1) cleanup cost, transferability, and reuse of property by nonfederal users and (2) progress, difficulties, and plans to address the problems. We performed work at the Office of the Secretary of Defense, military services headquarters, and EPA.

To determine costs being estimated for the program, we reviewed DOD’s BRAC budget data, including justification documents submitted to Congress in February 1994. In addition, we observed BRAC cleanup plan (BCP) training sessions held in San Francisco, California, in November 1993. Later, we analyzed cost information in 79 plans prepared by installations with property to be transferred to nonfederal users. (See app. III.) We also visited closing installations and environmental cleanup design and construction management commands to determine how cost data are developed by each of the services. (See app. II.)

To determine transferability and reuse, we reviewed BRAC and environmental laws, DOD and EPA headquarters policies and guidance to the military services, and environmental cleanup and reuse programs at BRAC installations. We also reviewed data developed by the services to identify uncontaminated property that would be available for quick transfer.

We identified progress and plans to address problems during discussions with DOD and EPA headquarters, DOD design and construction management, and closing base officials, as well as with EPA regional officials. In addition, we observed training sessions on DOD’s Fast Track Cleanup program, reviewed data in installations’ BCPs, and visited a number of these installations. Furthermore, we attended meetings of the Defense Environmental Restoration Task Force in Austin, Texas; Philadelphia, Pennsylvania; and Charleston, South Carolina.
We also attended several public hearings during visits to installations, including the California Military Base Reuse Task Force, installations’ cleanup advisory board meetings, and a hearing on cleanup remedy selection. We visited the California Environmental Protection Agency, Sacramento, California, and discussed specific issues with environmental officials of other states.

As requested, we did not obtain written agency comments. However, we discussed the report’s contents with DOD and EPA officials and incorporated their comments where appropriate. We performed the review between February 1994 and January 1995 in accordance with generally accepted government auditing standards.

Congress, DOD, and EPA have taken actions over the past several years to address a number of important matters relevant to resolving environmental cleanup issues at bases that are being closed and realigned. However, problems still remain with determining accurate cleanup costs, timing appropriations with cleanup needs, prioritizing available cleanup funds, and protecting the government’s interests when leasing or transferring property.

As reported in its fiscal year 1995 BRAC budget justification document, DOD’s total estimate for cleaning up environmental problems at 123 closing and realigning installations and activities was about $4 billion. However, more recent data developed by DOD in April 1994 shows that estimates for just 84 installations totaled about $5.4 billion, and costs are likely to go beyond that amount as more complete data becomes available.

The BRAC accounts were established to be the exclusive source of funds for environmental restoration projects related to base closures. The intent was to preclude the cleanup actions from competing with other sources of funding for environmental cleanup such as the DERA.
DOD’s BRAC budget estimates for cleanup cover 6-year periods; thus, the estimate for the 1988 round spans fiscal years 1990 through 1995; the estimate for the 1991 round spans fiscal years 1992 through 1997; and the 1993 round spans fiscal years 1994 through 1999. BRAC budget justification documents are to address the total financial impact of realignment and closure actions.

DOD’s estimate in the fiscal year 1995 budget for the 1988 and 1991 rounds increased from the fiscal year 1993 estimate by about $400 million, to about $2.2 billion. In addition, the 1995 budget estimate included about $1.8 billion for the 1993 round, raising the total estimate for the first three rounds to almost $4.0 billion for 123 installations and supporting activities. This estimate represents the total BRAC budget through fiscal year 1999. According to DOD, these estimates increased because they were based on preliminary information, and costs depend on the type of contaminants detected, conditions found, and the cleanup technologies selected.

More recent information developed by DOD in cleanup plans on 84 of the 123 installations shows an estimate of about $5.4 billion. This estimate is likely to increase as more bases are added and all costs are captured.

In September 1993, as part of its Fast Track Cleanup program to accelerate cleanup and reuse of BRAC installations, DOD required installations with property that could be transferred for nonfederal use to develop comprehensive BRAC cleanup plans and to submit these plans by April 1994. The military services forwarded 79 such plans, covering 84 installations, and the estimated cleanup costs in these plans totaled nearly $5.4 billion. (See app. III.) This is about $1.6 billion more than the fiscal year 1995 BRAC budget estimates for these same 84 installations, as summarized in table 2.1. DOD officials told us that the cleanup plans required more comprehensive cost estimates than the BRAC budget estimates.
They said that total environmental programs at closing and realigning bases go beyond those costs identified in the BRAC budgets. For example, some cleanup plans for Army installations needed DERA funds in addition to BRAC funds. Also, both the Army and Navy plans identified funding needed for environmental compliance and for the preservation of natural and cultural resources.

Also, BRAC budget estimates cover only the 6-year period that bases are allowed to close. However, the average cleanup can take much longer. The cleanup plans include 14 installations from the 1988 round of closures that estimated they will need $536 million after the 6-year period. (See app. IV.)

For example, the BRAC budget estimate for the Army’s Jefferson Proving Ground, Indiana, was about $11 million. The cleanup plan estimated it would cost $233 million, including $789,000 prior to 1989, $16.1 million in BRAC and other funding between 1990 and 1995, and $216.1 million in DERA funding after the 6-year period. The cleanup plan shows that this $216.1-million figure assumed no change in how the base was being used, and if another reuse option was selected, the total estimated cost for this one base could be $2 billion per year for fiscal years 1996 to 1999.

Although the cleanup plans provide a more complete view of environmental costs at closing bases, they did not generally capture complete costs. In some cases, long-term monitoring costs may go on for many years beyond the base cleanup plan estimates. For example, Pease Air Force Base, New Hampshire, reported no costs beyond fiscal year 1999, but officials estimated it will cost $300,000 a year to monitor the groundwater for an indefinite period beyond 1999. Similarly, Norton Air Force Base, California, officials estimated long-term remedial operations will cost $38.9 million through 2010, but the Air Force’s estimate only included monitoring costs through fiscal year 1999.
Furthermore, the cleanup plan estimates did not include some sites that have yet to be investigated at the 84 installations. At its Charleston, South Carolina, complex, consisting of a station, shipyard, and fleet industrial supply center, the Navy is presently investigating 39 waste management sites and has identified 330 potential areas of concern that require further study. Assessments are currently being performed on 118 of the potential areas; the remaining sites were recently identified during a site inspection, and the appropriate investigation approaches are being formulated. The Army Materials Technology Laboratory, Massachusetts, was recently added to the NPL, requiring the Army to address surface water contamination cleanup previously not planned or budgeted. EPA is currently assessing the Army's Jefferson Proving Ground for possible addition to the NPL, and other installations could be considered for NPL status in the future. The Congressional Budget Office reported in January 1995 on unanticipated cost growth that has occurred for installations scheduled to close. It observed that cost estimates increased for 34 of 49 installations being closed because (1) DOD discovered additional sites and contaminants on its installations and (2) new technologies that could reduce costs have been slow in coming and gaining acceptance. It also said that stricter cleanup standards than planned could significantly add to the costs. As part of our review of DOD's Future Years Defense Program, we reported in July 1994 that DOD's environmental costs may be significantly understated. Of the nearly $4 billion identified for environmental cleanup through fiscal year 1999 in the 1995 BRAC budget estimate, $1.8 billion had been appropriated through fiscal year 1994. By June 1994, only about 55 percent of the $1.8 billion had been obligated, and about $813 million was unobligated. By September 30, 1994, however, about $334 million remained unobligated, as shown in table 2.2.
BRAC funds are available to be obligated during the 6-year period bases have to close. According to DOD officials, however, the services' perception is that funds should be obligated in the year appropriated, and high unobligated balances are seen as a failure to execute their programs. For example, in October 1992, the Army increased the BRAC 1991 account for Fort Ord, California, by $11.8 million for environmental restoration, stating that the Army had to obligate its current funds to receive additional 1991 funds DOD had withheld. Between February and September 1993, $10.8 million was obligated for an existing contract with provisions that the work would be fully defined and priced later. In explaining the high levels of unobligated balances, DOD said that it (1) was probably overly optimistic in the funds requested, (2) did not have all the necessary expertise to better estimate requirements and timing, and (3) experienced slow obligation rates by the installations. DOD officials told us that the unobligated balances improved between June and September 1994 because the services entered into cost-reimbursable contracts for the total design and actual cleanup of installations, instead of contracting separately for design and cleanup. As indefinite delivery/indefinite quantity cost-reimbursable contracts, they pose a higher risk to the government and will require closer oversight of the contractors' operations. DOD gives the highest priority to cleaning up installations on EPA's NPL and installations scheduled to close and realign. Of the installations covered by the 79 BRAC cleanup plans, the estimated cleanup cost for non-NPL installations is about $3.4 billion, compared to $2.0 billion for NPL installations. BRAC installations are given a high priority to facilitate the transfer of property to nonfederal use as soon as possible. However, most BRAC property will stay under federal ownership.
Also, until 1992, CERCLA required cleanup before property could be transferred to nonfederal owners, but a 1992 amendment allows for the transfer of property before cleanup is finished under certain stipulations. Furthermore, a 1994 law allows for long-term leases to nonfederal users before cleanup is complete. We reported that DOD will not be able to efficiently institute cleanup efforts until it and EPA evaluate the large number of sites currently on the NPL and at BRAC installations and determine which should be designated as high priority. In 1990, Congress designated the BRAC appropriations account to be the exclusive source of funding for environmental restoration at BRAC installations. Congress established this separate cleanup funding because it was concerned that the higher priority being given to closing and realigning installations would exhaust all DERA funding. In the same act, Congress directed that DOD restore any property excess to its needs as a result of closure or realignment as soon as possible. High priority funding was necessary for these installations because, at the time, CERCLA required environmental cleanup to be completed before nonfederal ownership transfer and reuse could occur. Giving all closing and realigning installations the same status as NPL installations has significantly increased the number of priority installations and accelerated the funding DOD needs for high priority cleanup. Of the 84 installations identified in the cleanup plans, 21 are NPL installations and receive priority cleanup funding consideration regardless of whether they close or realign. (See app. III.) Cleanup estimates in these 21 installations' plans totaled $2.0 billion. However, the other 63 installations would not have been given high priority status if they were not closing or realigning. Estimated cleanup costs in plans for these installations amounted to $3.4 billion, or 63 percent of the nearly $5.4-billion total estimate. (See table 2.3.)
For example, the Long Beach Naval Station and Hospital, California, are not on the NPL. However, these installations add an estimated $221 million to DOD’s priority requirements. Were these non-NPL bases not closing or realigning, they would likely receive funds only for essential cleanup and compliance activities. For example, non-NPL installations would likely receive funds to remove underground storage tanks to meet deadlines in the law, but asbestos and lead-based paint surveys not subject to a deadline might be deferred to later years. Army headquarters officials told us there had never been much DERA funding available to clean up non-NPL installations, but funds became available once an installation was identified for closure. Environmental officials at Fort Ord, California, said that before their installation was on the NPL, they had trouble competing for DERA cleanup funding. DOD officials told us that cleanup priority funding was needed for non-NPL installations because (1) BRAC funding is tied to the 6-year period allowed for bases to close, (2) legal mandates established by state law or the courts exist at some bases, and (3) communities are expecting their installations to be cleaned up as soon as possible. CERCLA allows DOD to transfer property to another service or federal agency before completing cleanup. However, the proper arrangements for cleanup must be made, and DOD’s potential liability is significant. As we reported in November 1994, DOD is retaining most of the property or transferring it to other federal agencies. It is retaining about 156,700 acres, or 63 percent of the 250,100 acres on installations from the 1988 and 1991 rounds. Some of this property is being retained because of extensive unexploded ordnance contamination. 
For example, at the Army's Jefferson Proving Ground, Indiana, long-term hazardous waste contamination and the potential that unexploded ordnance could be found throughout the installation make it impossible to dispose of all the property. The Army was considering retaining all or part of it under a caretaker program. However, the U.S. Department of the Interior's Fish and Wildlife Service requested that most of the installation's property be added to a national wildlife refuge. Even though these installations will not have to be cleaned up before the property is transferred, DOD and the receiving agency must agree on what remedial action will be taken. Consequently, DOD is still held responsible for the cleanup, which ultimately could involve substantial costs. According to DOD officials, DOD is responsible for cleaning up past contamination, regardless of when it is identified, and for meeting the requirements of any new federal or state cleanup standards and laws. For example, at the Hamilton Army Airfield, California, ownership of a landfill on property once auctioned to a private developer has reverted to the Army. Due to the presence of contamination, the Army will now pay to contain landfill contaminants and treat the groundwater. About 93,400 acres (37 percent) of the 250,100 acres at closing 1988 and 1991 installations will potentially be available for transfer to nonfederal users. CERCLA had prohibited DOD property transfers to nonfederal ownership until the necessary cleanup action had been taken, but the Community Environmental Response Facilitation Act (CERFA) amended CERCLA in 1992 to expedite transfer. Under the act, remedial action is considered to have been taken if (1) the construction and installation of an approved remedial design has been completed and (2) the remedy has been demonstrated to EPA to be operating properly and successfully.
Thereafter, any long-term pumping and treating or operation and monitoring after the demonstration does not preclude transferring the property. Although the CERFA amendment could eventually facilitate the transfer and reuse of property under CERCLA, most sites at BRAC installations are in the early investigation and study stages and have not reached the point where remedies are in place. An EPA headquarters official, after checking with EPA regions, told us that such data is not being collected but that it is unlikely that much property has been transferred so far with remedies in place and operating successfully. In general, DOD may lease only property that is under its control, not currently needed for public use, and not excess. A limited exception was available for property found to be excess as a result of closure or realignment, where a military service determined that leasing would facilitate economic reuse. However, leases were subject to limitations, including a term not to exceed 5 years and DOD's right to revoke the lease at will. As part of the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160), Congress authorized the military services to lease property to facilitate state or local economic reuse without limiting the length of a lease. As of January 1995, the Air Force had used the provision to enter into six leases, ranging from 25 to 70 years, for airports and other uses, as shown in table 2.4. The other services had leasing actions in process. Although leasing property allows its reuse before cleanup has been completed, DOD is still liable for environmental cleanup costs. Thus, leasing still leaves the question of how the government should be protected from liability for hazardous waste that results from the current tenant's operations. Even though DOD conducts extensive environmental surveys and includes numerous provisions in its leases to limit its liability, DOD nonetheless remains a responsible party under CERCLA.
For example, between 1976 and 1986, the Navy leased most of its Hunters Point Annex—a deactivated Navy shipyard listed for closure on the 1991 round—to a commercial business, which subleased many of the buildings to other businesses. The activities conducted were primarily commercial ship repair, and the lessee was later sued by the city of San Francisco for the alleged illegal disposal of large amounts of hazardous waste. The Navy remained the owner of the property and, according to the Navy environmental coordinator, has included these sites in its BRAC cleanup program. Other issues affecting leases are (1) the time and effort required to complete the environmental documents and processes necessary to satisfy federal and state laws and DOD policies and (2) the obligation of the services to monitor and manage the property and environmental requirements. Although various actions have been taken in recent years, Congress, DOD, and local communities still face a number of difficult issues related to (1) obtaining accurate cost estimates for completing cleanup efforts at closing and realigning bases, (2) determining the proper timing of appropriations to meet cleanup needs, (3) determining whether, in view of limited resources and changes in law, all closing and realigning bases should be given priority funding, and (4) facilitating the transfer of property to federal and nonfederal users while ensuring the government’s and DOD’s interests are protected. In particular, we believe high priority funding for environmental cleanup at closing and realigning installations needs to be reevaluated because most property will stay under federal ownership, and property that will be available for nonfederal ownership transfer can now be leased or reused before it is entirely clean. It appears that DOD could be more selective and designate priority funding for NPL installations and other sites where cleanup is required for nonfederal reuse. 
This might reduce DOD's requirements for accelerated funding for nonpriority sites and spread these costs into more appropriate future budget years. Also, although property remaining as federal lands does not have to be cleaned up before transfer, DOD appears to be retaining much of the responsibility for cleanup. Accordingly, DOD needs to include these potential unfunded liabilities in its total environmental program cost estimate. We recommend that the Secretary of Defense develop a total environmental program cost estimate of the financial impact of realignment and closure actions that reflects (1) a more complete description of the costs as identified in the installations' BRAC cleanup plans, including estimates for compliance, preservation of natural and cultural resources, and long-term costs associated with cleanup and monitoring, and (2) unfunded liabilities where property is being retained by the federal government and cleanup will be deferred. We also recommend that the Secretary of Defense approve sites for high priority environmental funding only when cleanup or compliance is required or cost-effective for nonfederal reuse to occur. Most sites at closing and realigning installations are still being investigated and studied. Thus, the full extent of cleanup actions required may not be known for years. Also, installations may not be cleaned up by the time they close, and major groundwater, landfill, and unexploded ordnance sites will remain contaminated unless new technology is developed. Dissatisfied with the slow pace of cleanup that had occurred, DOD designed the Fast Track Cleanup program in 1993. Although the program has made some progress, it could be improved in such ways as adding performance measures to gauge progress. DOD's guidance for preparing cleanup plans called for installations to account for all sites requiring restoration and to summarize their environmental compliance programs.
For example, installations identified cleanup requirements, such as fuels, solvents, unexploded ordnance, and other contaminants in training and maintenance areas, landfills, burn pits, fuel stations, wastewater treatment areas, and other sites. They reported on programs to remove asbestos, radon, and lead-based paint from buildings and other structures as well as inventories of underground storage tanks that held fuel, waste petroleum, and other products. The 84 installations included in the cleanup plans reported that most environmental cleanup work was still in the early stages. For example, 49 of the installations combined many contaminated sites into "operable units" for more effective cleanup management. They reported that work on 165 of 239 units, or 69 percent, was in the earliest phases: remedial investigation and feasibility study. The plans estimated that 129 of the 165 units would not complete this phase until fiscal years 1995 to 1998. Most of the work at the remaining installations was still in the remedial investigation and feasibility study phases as well. According to DOD officials, technology exists for the cleanup of many sites, but it needs to be made more efficient and cost-effective. We reported that progress under CERCLA is sluggish because the study and evaluation process is lengthy, cleanups are complex, existing technology takes a long time, and the average cleanup can require about 10 years. Contaminated groundwater, landfills, and unexploded ordnance were identified in many installations' cleanup plans. (See app. III.) Some large contaminated sites cannot be cleaned up because the necessary knowledge and expertise do not exist or because technology and cost limitations stand in the way. At these sites, interim cleanup actions are being used, and the sites will remain contaminated unless new removal technology is developed.
Remedies to contain contamination require significant long-term operation, maintenance, and monitoring efforts as well as further cleanup actions if contamination recurs. A 1990 EPA study showed that containment remedies may initially be less expensive to construct, but the required operation and maintenance and the potential for failure increase their cost in the long run. Containment at BRAC installations for major groundwater, landfill, and unexploded ordnance sites will likely require cleanup efforts over many years. Decontaminating polluted groundwater, an issue identified in 51 of the 79 cleanup plans, is costly, difficult, and sometimes impossible. Once contamination is detected, the uneven flow of groundwater and the redistribution of the contaminants make cleanup difficult. According to EPA, the technical challenges of eliminating groundwater contamination are complex and efforts to speed up the process have been expensive and achieved limited success. For example, one of the most commonly used groundwater cleanup technologies is "pump and treat," where contaminated water is pumped to the surface for treatment. However, this technology can cost millions of dollars, take decades, and still leave groundwater contaminated. Pump-and-treat systems were in place or planned for at least 24 of the installations identified in appendix III. Figure 3.1 shows an example of a pump-and-treat remediation project. Pump-and-treat systems may need to be tested over several years to determine their effectiveness. For example, at two installations we visited, Norton and George Air Force Bases in California, pilot systems were in place, but officials said they were operating at about one-half of capacity because the groundwater did not flow as expected. They said the number of wells for these systems will need to be increased for sufficient water to flow, and even if successful, the systems may need to operate for 30 years or more.
At Norton Air Force Base, groundwater contamination extends from the central base area toward the southwest in the direction of groundwater flow beneath the base and continues beyond the base boundary. There are several community water wells near the base within the anticipated path of the contaminants. Furthermore, the pump-and-treat technology does not work on some contaminants, according to EPA. These contaminants include certain organic compounds, such as chlorinated solvents, polychlorinated biphenyls (PCBs), creosote, and some pesticides. They are difficult to locate and remove and may continue to contaminate groundwater for hundreds of years, despite best efforts to clean them up. Contaminated landfills were identified in 67 of 79 cleanup plans for closing and realigning installations and may pose major environmental threats, particularly to groundwater. (See app. III.) Although small landfills can be removed and eliminated, it is not practical to remove all waste and contamination from larger ones. National standards do not exist for cleaning up most contaminants in soil, so DOD, EPA, and state regulators negotiate standards for each site. Large landfills are often treated by placing a protective cap over the site to contain the waste and prevent further contamination of the soil, groundwater, and atmosphere. The groundwater conditions around the landfill must also be assessed to determine whether contamination exists and, if necessary, to identify cleanup measures. Figure 3.2 shows a landfill excavation and a soil removal project. Landfills closed without removing the waste are also subject to EPA requirements for maintenance and groundwater monitoring for 30 years after closure. These requirements were established because of the potential for environmental problems after closure. EPA or the state must determine that closed facilities have complied with all regulatory requirements.
If not, the facilities must be brought into compliance. Unexploded ordnance is ordnance that has failed to function as designed or has been abandoned or discarded and is still capable of exploding and causing injury. It results from operations conducted at weapons test and training ranges and contains explosive, petroleum, metal, and other compounds that may contribute to soil and water contamination. If unexploded ordnance is buried below 3 feet, current technology may not be able to detect it, and it can migrate to the surface over time. Consequently, surface cleanup may need to be repeated. Unexploded ordnance and related waste were identified at 25 closing installations, including some installations where the contaminated areas are so large that cleanup technology is not affordable. For example, unexploded ordnance is potentially present on about 51,000 acres of the Army's Jefferson Proving Ground, Indiana; 7,200 acres of the Army's Fort Ord, California; and an unspecified amount of property at the Navy's Mare Island Shipyard, California. Current removal technology is labor intensive, costly, and unreliable. Figure 3.3 shows a portion of a munitions firing range that contains unexploded ordnance. According to Army ordnance and other officials, new and more cost-effective technology needs to be developed for cleaning up unexploded ordnance. A study at the Army's Jefferson Proving Ground, which has extensive quantities, types, and dispersion of unexploded ordnance, found that the cleanup effort would be labor intensive. For example, the work would require using metal detectors over the majority of the land, mapping the unexploded ordnance, handling or removing it, and disposing of it. Because the installation potentially has 51,000 contaminated acres and is heavily forested, current cleanup technology is not practical or affordable.
Although the cleanup plan included $216 million in estimated costs, the plan noted that costs could run to $2 billion a year for several years, and officials said other estimates for cleanup have ranged from $5 billion to $8 billion in total, depending on how the property is to be reused. Figure 3.4 shows an example of buried unexploded ordnance. The closure of military installations and the extent of unexploded ordnance have intensified the need for DOD and EPA headquarters and states to address many unresolved issues related to unexploded ordnance. These issues concern costs and cleanup requirements, when unexploded ordnance becomes a hazardous material, when DOD turns over control to EPA and states, and which laws apply to cleanup. The 1992 Federal Facilities Compliance Act amending RCRA required EPA to propose, after consulting with the Secretary of Defense and appropriate state officials, regulations identifying when military munitions become hazardous waste and providing for their safe transportation and storage. The deadline for the proposed regulations was October 1994. EPA officials told us in January 1995 they missed that deadline and now plan to propose the regulations in July 1995. Containing and cleaning up contamination depends on developing new, affordable technologies, but these technologies will take time to develop. We recently reported that the process of choosing a new technology involves many decisionmakers, technical expertise, and competing interests. The pressure to meet cleanup milestones also influences the technology evaluation process and the solutions accepted. The reasons why new technologies are not adopted faster include the following: (1) conflicting priorities prevent the approval of innovative approaches for cleanup, (2) field officials may associate the newer technologies with unacceptable levels of risk, and (3) on-site contractors may favor particular technologies on the basis of their own experiences and investments.
In May 1993 testimony, DOD recognized that its environmental program could be improved by directing cleanup efforts to meet potential users' needs. DOD said it intended to (1) target environmental technology to high payback areas, (2) apply research and demonstration funds to real environmental needs, and (3) get support from regulators, states, and the public for testing and fielding innovative technologies. Subsequently, in 1994, DOD began looking at technologies with high potential and ranking them according to potential benefits and feasibility. DOD officials said they plan to begin demonstrating technologies and offer them to EPA and state regulators for validation in 1995. DOD established the Fast Track Cleanup program in July 1993 to accelerate the environmental cleanup at closing installations. The program was initiated as part of the administration's five-part program to expedite the environmental cleanup and economic recovery of communities affected by installation closures. Progress in the Fast Track Cleanup program's five key elements has been as follows: (1) environmental impact statements depend on communities submitting reuse plans, and most of these plans have not been developed; (2) restrictive indemnification language has been clarified; (3) uncontaminated parcels from the 1988 and 1991 closing installations have been identified for transfer, but not as much uncontaminated property has been identified as hoped; (4) teams have been established at closing bases to make decisions and develop the cleanup plans, but decisions are still made above the base level, and the bases' cleanup plans can be improved; and (5) community cleanup advisory boards that involve the public in the cleanup program have been established, but they too can be improved. The program is not fully implemented, and it is too early to judge its effectiveness comprehensively. DOD has made some progress in implementing certain elements of the program, but further development is necessary.
The Fast Track Cleanup strategy paper stated that the process for preparing an environmental impact statement typically takes 28 to 48 months. The Fast Track Cleanup program requires the military services to complete the environmental impact statement within 12 months of a community submitting its final reuse plan. However, community reuse plans have not been completed for many of the installations that submitted cleanup plans. Service officials said they anticipate being able to complete the statements within the 12 months allowed once reuse plans are received. The Fast Track Cleanup program concluded that indemnification language in DOD's 1993 appropriations act unintentionally caused DOD to slow down granting interim leases. DOD's authorization and appropriations acts for 1993 contained different provisions regarding the government's liability for the transfer of contaminated property. DOD viewed the provisions of the appropriations act as exposing the government to costly claims because of sweeping DOD indemnification language in the law. In response, DOD stopped entering into any leases or transferring property for fear of future claims. Congress subsequently repealed the appropriations language and let the authorization language stand, which limited DOD's liability to past problems. DOD has proceeded with efforts to lease and transfer property. An issue that arose early in the BRAC process was whether property could be transferred to parties outside the federal government without the entire installation being cleaned up. Subsequently, Congress enacted CERFA in 1992, which allowed an installation to be divided into parcels that could be considered separately for transfer. CERFA directs federal agencies to identify uncontaminated parcels based on the specific requirements set forth in CERFA. For parcels on an NPL installation, EPA must concur with the results. For parcels on non-NPL installations, appropriate state officials must concur.
The deadline for identifying all parcels on BRAC 1988 and 1991 installations, including EPA or state concurrence, was April 19, 1994. DOD officials told us that CERFA did not work as expected. Although considerable resources have been spent, the anticipated numbers of uncontaminated parcels available for quick transfer and reuse have not been identified. Furthermore, they said that data was not readily available, but they believed little of the uncontaminated property that was identified had been transferred. They also said the developed land on the installations is often the most desirable for immediate reuse, but this property tends to be contaminated. However, DOD officials commented that one benefit of the CERFA process has been that DOD identified the condition of the property at these installations, and this information will be extremely useful in leasing and later transferring contaminated property. DOD records showed that of about 250,100 acres at 1988 and 1991 closing installations, the services identified about 121,200 acres as uncontaminated; however, the regulators only concurred that 34,499 acres were uncontaminated. Table 3.1 shows uncontaminated acreage at closing 1988 and 1991 installations that did receive regulatory concurrence. The regulators did not agree that many parcels were uncontaminated because activities related to compliance—asbestos removal, lead-based paint surveys, and resolution of issues related to petroleum—were not completed. Also, state regulators were not willing to concur because of concerns about the state’s potential liability. At Fort Wingate, New Mexico, the Army identified 17,353 of 21,812 total acres as uncontaminated, but the state regulator did not concur on any acreage. Likewise, the Air Force identified 1,323 of 3,216 acres at Bergstrom Air Force Base, Texas, as uncontaminated, but the state regulator did not concur. 
Of the 34,499 uncontaminated acres, about one-half is on property the federal government is retaining and one-half is on property available for transfer to nonfederal users. However, according to DOD, the uncontaminated property is usually undeveloped, remotely located, or linked to contaminated parcels and cannot be used separately. For example, about 7,000 of the uncontaminated acres at Fort Ord are considered unusable because, according to DOD officials, the acreage is in an undeveloped part of the installation that has no access to a usable water supply. Also, at George Air Force Base, environmental officials said much of the property identified as uncontaminated surrounds the runways and cannot be separated from the flightline. The Fast Track Cleanup program concluded environmental decisions were taking too long to make and required each installation to establish a team consisting of EPA, DOD, and state representatives that would be empowered to make decisions quickly. Officials at some closing installations we visited told us they already had teams, but the teams were not empowered to make decisions at the local level. EPA issued draft guidance on empowerment to its installation-level team members in March 1994, but did not mandate that it be followed. According to EPA officials in January 1995, EPA has delegated to the regions the necessary authority to make decisions, and the regions have established procedures to ensure that management approval is redelegated or provided to the installations' cleanup teams in a timely manner. The Air Force also issued guidance on empowerment to its installation-level team members in April 1994. This guidance delegated some key decision-making authority to mid-level managers, but not to the installation team members as originally envisioned. Various DOD and EPA officials told us that their agencies try to avoid legal problems by reviewing and approving decisions made at the local level, and states do the same.
According to Navy officials, in one case, the state representative for environmental cleanup at the Marine Corps Air Station Tustin, California, approved a particular action in a local meeting because the state environmental agency had approved a similar remedy at the Presidio of San Francisco, California. However, the state overruled the installation representative. DOD provided guidance and training on the development of BRAC cleanup plans. The plans were to provide a comprehensive and consolidated strategy for expedited environmental cleanup at all BRAC installations. DOD stated that the cleanup plans should support the BRAC budget submission. The cleanup plans developed to date are not of the quality described in the guidance document. DOD officials told us, for example, that sections in some plans were incomplete and had not been thoroughly reviewed, and the data were viewed as somewhat unreliable. A contractor's review of 77 BRAC cleanup plans in June 1994 identified a lack of uniformity in the plans due to (1) different levels of progress among installations based on the year the installation was designated for closure, (2) short time frames for completing the plans, and (3) various installation interpretations of guidance for the plans. At installations we visited while cleanup plans were being compiled—Norton Air Force Base, the Jefferson Proving Ground, and the Army Materials Technology Laboratory—officials said that they did not have time to develop complete plans for expediting cleanup and still meet reporting deadlines, so they either (1) reported existing information in the cleanup plan format directed by DOD or (2) noted that the information had to be developed and would be provided later. DOD officials recognized that the time available for the services to develop cleanup plans was not sufficient and now view the April 1994 plans as a first effort.
They are considering possible improvements in developing the BRAC cleanup plans, but have not established milestones for the services to submit more complete plans. DOD guidance for the Fast Track Cleanup program directed the military services to improve public involvement in the environmental cleanup process. For each installation with property to be transferred or with sufficient community interest, DOD requires the formation of cleanup advisory boards composed of members of the local community and jointly chaired by a military service representative and a member of the community. DOD's guidance said these advisory boards are key to installations being responsive to community concerns. DOD's goal of having fully functioning cleanup advisory boards in place may take time to achieve. These advisory boards at closing installations are in the early stages of development. According to the contractor's review of 77 cleanup plans, about one-third of the installations had not yet formed cleanup advisory boards. Also, at installations with boards, only about half of the boards participated in developing the BRAC cleanup plans. Furthermore, we reported that EPA, in a similar effort to establish advisory boards, had not been able to earn the public's trust due to differing interests, even with the best intentions and community relations outreach. On the basis of our observations at some of the BRAC community advisory board meetings we attended and in discussions with DOD officials, it appears that DOD may face similar difficulties. DOD officials recognized that the Fast Track Cleanup program lacked a baseline and performance measures. As a result, they have begun developing measures for the program, but have not set a target date for completing this effort.
As of December 1994, only two measures of effectiveness were being considered: (1) the percentage of closing bases with a completed environmental impact analysis and (2) the percentage of property at closing bases that could be made available for reuse. These measures do not seem to adequately address performance. The first measure addresses an element that is not considered a problem. The second measure does not precisely measure environmental cleanup actions if leases are used. Also, these two measures do not address program elements concerned with timely decisions being made on installations' cleanup, the number of installations with fully developed and effectively implemented cleanup plans, and the extent and effectiveness of public involvement in the cleanup process. Most sites at closing and realigning installations are in the early stages of the cleanup process. Cleanup is costly, difficult, and sometimes impossible; technology for cleaning up massive amounts of contaminated groundwater, large landfills, or extensive areas with unexploded ordnance either does not exist or has serious limitations. Furthermore, new technology will take time to develop. The Fast Track Cleanup program is being implemented and has helped the cleanup process, but some elements of the program need further development. For example, CERFA has not produced the expected results. Expectations that installation cleanup teams could be empowered to make decisions were probably unrealistic, as was the deadline for installations to develop base cleanup plans. There is a need to establish standards that will allow DOD to assess the various measures taken to speed up the cleanup process. We recommend that the Secretary of Defense establish Fast Track Cleanup program standards that will allow DOD to assess the steps taken to accelerate the cleanup process at BRAC installations.
Pursuant to a congressional request, GAO reviewed the environmental cleanup of Department of Defense (DOD) facilities slated for closing, focusing on: (1) the cleanup cost, transferability, and reuse of property by nonfederal users; and (2) DOD progress, difficulties, and plans to address the problems. GAO found that: (1) despite DOD actions to resolve environmental cleanup issues at bases slated for closure or realignment, problems remain with determining accurate cleanup costs, timing appropriations with cleanup needs, prioritizing available cleanup funds, and protecting the government's interests when leasing or transferring property; (2) cleanup costs will probably exceed the current DOD estimate of $5.4 billion because of additional cleanup needs and longer cleanup periods; (3) DOD could postpone cleanup of some bases until after closure, since they will remain federal property or be under long-term lease to nonfederal users; (4) cleanup progress has been limited, since DOD is still studying the most contaminated sites; (5) the full extent of DOD cleanup actions may not be known for years; (6) some bases may not be cleaned up by the time they close, partly due to the need to develop new technology to clean up groundwater, landfills, and unexploded ordnance sites; and (7) DOD has developed a fast track cleanup program to accelerate base cleanups, but it needs to improve program implementation.
The first U.S.-Japan insurance agreement was signed on October 11, 1994, and was concluded under the United States-Japan Framework Agreement. In negotiating the insurance agreement, the U.S. government sought to establish that deregulation of the large primary sector of the Japanese insurance market, where U.S. firms had experienced only limited success, would be required before deregulation of the smaller third sector, where foreign companies have a substantial presence, would occur. According to the Office of the U.S. Trade Representative (USTR), while the third sector accounted for roughly 5 percent of the total Japanese insurance market in Japanese fiscal year 1997, foreign market share for this sector was over 40 percent—much higher than in the traditional, primary insurance market. U.S. government and industry officials believed that the lack of U.S. company success in the larger primary sector was the result of a heavily regulated environment that did not allow for new market entry, product innovation, or price competition. In the 1994 agreement, the United States met its negotiating objective of establishing that primary sector deregulation would be required before third sector deregulation would occur. Under the agreement, Japan agreed to avoid “radical change” in the third sector until foreign insurance companies were granted a “reasonable period” to compete in a significantly deregulated primary sector market, although the terms “radical change” and “reasonable period” were not defined in the agreement. The agreement recognized that Japan was in the process of reforming its insurance sector, noting that the reform would be based on promoting competition and enhancing efficiency through deregulation and liberalization. Consistent with this reform initiative, the agreement included specific commitments by Japan to deregulate the primary sector. 
For example, the agreement provided that insurance companies would be afforded greater flexibility in establishing the price (rate) they would charge to customers for certain product lines. In addition, the agreement stated that Japan would expedite and simplify the application review process for the approval of insurance products and rates by gradually introducing expedited approval systems for certain products. Japan also agreed to make its regulatory process more transparent by, for example, publishing and making publicly available the standards that insurance regulators will apply in reviewing applications for approval of new insurance products. During subsequent negotiations in 1996, the two governments reached an interim understanding in September, in which Japan agreed to allow direct sales of automobile insurance to consumers by mail or telephone and established restrictions on sales by subsidiaries of large Japanese insurers of some third sector insurance products. The commitment to allow direct sales of automobile insurance is referred to in the final December 1996 agreement (discussed below), while third sector commitments were largely superseded by measures contained in the December 1996 agreement. The second agreement was signed on December 24, 1996. This agreement was negotiated in response to U.S. insurance company concerns that the Japanese government was preparing to allow large Japanese insurers increased access to the third sector through their subsidiaries in violation of the 1994 agreement. The 1996 agreement further defined restrictions on third sector entry by Japanese companies, and it clarified when these restrictions would be lifted by more explicitly linking them to substantial deregulation of Japan's larger, primary sector. 
Specifically, the agreement listed five deregulation criteria for the primary sector that would have to be met by July 1, 1998, in order to start a 2.5-year countdown toward opening the third sector no later than 2001. These criteria reflected specific deregulation commitments in the agreement, such as allowing for greater pricing flexibility for automobile insurance and applying a system to expedite marketing of additional products. The two governments recognized that if, on July 1, 1998, there were disagreement about whether the criteria had been met, each side would be able to act in accordance with its own view of whether the criteria had been met. The U.S. government has stated that, in the case of disagreement over implementation, it can invoke various trade remedies. On July 1, 1998, USTR announced that, in its view, Japan had not fully implemented key agreement commitments including two of the five primary sector deregulation criteria. As a result, USTR did not (and still does not) support initiation of the 2.5-year countdown to open the third sector to increased competition in 2001. The Japanese government stated that it believed it had fully implemented all commitments, including the five primary sector deregulation criteria. Thus, in its view the 2.5-year “clock” began on July 1, 1998, and restrictions on the ability of large Japanese insurance companies to operate in the third sector will be lifted on January 1, 2001. The agreement also contains a commitment by Japan to take steps to increase the number of staff responsible for processing insurance applications. In 1998, Japan enacted legal and regulatory changes that affected the insurance industry: Japan reorganized its financial regulatory system and created the Financial Supervisory Agency (FSA). Responsibility for licensing, application processing, surveillance, and inspection of the insurance industry was shifted from the Ministry of Finance (MOF) to FSA in June. 
Japan agreed to include most of the commitments contained in the 1996 agreement as part of its obligations in the World Trade Organization (WTO) financial services agreement. The WTO can therefore be a forum for resolving disputes related to these commitments. The commitments, which were codified in Japanese legislation that took effect on July 1, 1998, included deregulating the primary sector and restricting sales of certain third sector products by Japanese insurers. In our January 1999 survey, almost all of the U.S. companies (12 of 13) and brokers (2 of 3) operating in Japan reported that overall, the Japanese government had implemented the 1994 and 1996 agreements to a moderate or greater extent. Our analysis of company responses to our survey indicates that the Japanese government has implemented most of its commitments to improve transparency and procedural protections and deregulate the insurance market. Most of the companies (10 of 13) and brokers (2 of 3) reported that both agreements had enhanced their ability to compete in Japan, and a few companies attributed increased sales and market share to actions taken by Japan under the agreements. Companies, however, reported that a few of the agreements' commitments in the areas of transparency, deregulation, and third sector protections had not been fully implemented. The 1994 agreement included specific commitments by the Japanese government to provide greater regulatory transparency and improve application processing procedures. Our analysis of company responses to our survey indicates that most of these commitments have been implemented. For example, most companies (10 of 13) reported that they have been given meaningful access to insurance regulators. Further, 10 companies reported that they had received equal treatment in insurance industry groups. 
Ten companies also reported that they were not required to coordinate their applications with other insurance providers (which may be potential competitors) and that acceptance of their applications had not been conditioned or delayed based upon whether they consulted with other insurance providers, which had been experienced by some U.S. companies in the past. Our analysis of company responses to our survey indicates that Japan has implemented most of its deregulation commitments in the 1994 and 1996 agreements. Moreover, companies reported that several specific commitments had been fully implemented. As part of the 1996 agreement, the Japanese government agreed to meet five deregulation criteria: (1) processing applications for differentiated types of automobile insurance within a 90-day period, (2) further liberalizing commercial fire insurance, (3) expanding the “notification system,” (4) removing the requirement to use insurance rates calculated by rating organizations, and (5) processing applications within a 90-day period for differentiated products or rates. The first four criteria apply only to non-life insurers, while the fifth criterion applies to both life and non-life insurers. According to the agreement, once all of these criteria are met, the 2.5-year countdown toward opening the third sector to increased competition will begin. In our January 1999 survey, companies reported that the Japanese government had largely met the five primary sector deregulatory criteria. All but one of the U.S. non-life companies expressing an opinion reported that Japan had met the four criteria that apply only to non-life products (processing of differentiated auto insurance within 90 days, further liberalization of commercial fire insurance, expansion of the notification system, and removal of the requirement to use rating organization rates). This one company reported that expansion of the notification system was incomplete. 
Regarding the fifth criterion that requires approval of applications for differentiated products or rates within a standard 90-day processing period and applies to all insurance companies, over half of the companies (7 of 13), representing almost 60 percent of U.S. premiums in Japan, reported that the Japanese government had met this commitment. This view is consistent with our survey data on application processing, which showed that of all approved applications submitted by U.S. insurance companies since completion of the 1996 agreement, 95 percent were approved within 90 days of submission, while 5 percent took more than 90 days to receive approval (though this information is insufficient for determining whether these last cases constitute violations of the agreement). In addition, the 1994 and 1996 agreements included commitments by the Japanese government to improve the distribution of insurance products through the approval of a direct response system (for example, marketing over the telephone) for automobile insurance and the licensing of brokers. We found that the Japanese government implemented these commitments. Most companies reported that the overall deregulatory actions taken by Japan to implement both the 1994 and 1996 agreements had a generally positive effect on their ability to compete in Japan, and several cited specific examples of being able to introduce new products or rates that they viewed as beneficial. For instance, one non-life insurer reported that obtaining approval to offer a differentiated type of automobile insurance had a very positive effect on its ability to compete in Japan. Also, two companies viewed the increased liberalization of commercial fire insurance and the expanded notification system as positive. 
Concerning Japan's actions to improve distribution channels for insurance, of the three non-life insurers who had received approval to offer automobile insurance through the direct response system, one reported that this method of distributing insurance products had a very positive effect on its ability to compete in Japan. In addition, two of the three brokers reported that the Japanese government's decision to recognize brokers had a very positive effect on their ability to compete in Japan, though about half of the insurance companies reported that this event had no effect. However, all brokers told us that they continued to face certain obstacles in Japan, including a lack of price and product differentiation, restrictions on the types of products they can offer, and restrictions on the structure of their business operations. Several companies reported concerns regarding Japan's implementation of a few commitments in key areas. Concerning one transparency commitment, almost half of the companies (6 of 13) reported that the Japanese government had done little to publish and/or make publicly available licensing, product, and rate approval standards. Regarding Japan's deregulation commitments, five companies expressed a belief that Japan had not fully implemented its commitment to process applications for differentiated products within 90 days. Three companies reported cases where applications that were for new-to-market products or that used a new distribution channel took longer than 90 days to receive approval. Over half of the companies reported that in general Japan has done little to expedite and simplify the application review process. Further, regarding a commitment related to Japan's ability to meet its application processing requirement, all 13 U.S. companies indicated that Japan had not increased the number of staff responsible for processing applications. Company officials attributed problems with timely processing to this lack of staffing. 
Under the 1994 agreement, the Japanese government committed to avoid “radical change” in the third sector until foreign, as well as small and mid-sized Japanese, insurers had had a reasonable period of time to compete in a deregulated primary sector. Six companies, representing over 80 percent of U.S. premiums, reported that the Japanese government had not taken sufficient action to avoid “radical change” in the third sector. The 1996 agreement included specific commitments by the Japanese government to prohibit or substantially limit large Japanese insurers' subsidiaries from marketing certain third sector products in the life and non-life areas. In the life insurance area of the third sector, Japan committed to prohibit Japanese subsidiaries from selling stand-alone medical and stand-alone cancer insurance. Two U.S. life insurance companies in Japan reported that the Japanese government had not met this commitment. One U.S. life insurance company reported that the Japanese government had failed to prevent Yasuda Fire and Marine, a large Japanese company, from selling stand-alone cancer insurance through its relationship with INA Himawari, a life insurance subsidiary in Japan of the U.S. company CIGNA Corporation. (See app. IV for detailed information on certain USTR actions related to these companies.) Another U.S. life insurance company reported that a Japanese insurer, Tokyo-Anshin, was effectively selling stand-alone cancer insurance even though the company offers it as a rider to a base life insurance policy. In the non-life insurance area, seven restrictions on sales by Japanese subsidiaries were put in place by the 1996 agreement, primarily to protect the existing sales networks of foreign insurers for personal accident insurance. Among U.S. 
non-life companies expressing an opinion, all reported that Japan had met most commitments in this area, though three companies reported that the Japanese government had not complied with one commitment—restricting sales of personal accident insurance endorsed by interindustry associations. The U.S. government has given the insurance agreements high-level attention and monitors them on an ongoing basis. USTR is the principal U.S. government agency responsible for monitoring and enforcing the insurance agreements. The U.S. embassy in Tokyo also plays a major role, with the Departments of Commerce and State providing additional assistance. The Departments of the Treasury and Justice play much less active roles. USTR officials reported that they hold interagency meetings at least once every 2 months, and more often as issues arise, to discuss the status of the insurance agreements. USTR and the U.S. embassy in Tokyo rely mainly on industry groups and individual companies for information on the status of the agreements' implementation. USTR attempts, but is not always able, to thoroughly verify the accuracy or completeness of industry data on implementation. In monitoring the agreements, USTR has determined that Japan has made progress in deregulating its insurance industry but has identified key commitments that remain unmet. USTR's Japan Office, the office with primary responsibility for monitoring and enforcing the insurance agreements, currently has a total staff of four permanent employees and one temporary employee from the State Department, twice the number of staff it had 4 years ago. However, the lead USTR official for Japan insurance issues announced his departure in September 1999. This office is responsible for monitoring approximately 20 trade agreements negotiated under the current and previous administrations that cover diverse issues such as telecommunications and autos and auto parts.
USTR's Offices of the General Counsel and Services, Investment, and Intellectual Property also provide assistance with the insurance agreements when necessary. USTR has estimated that its efforts, combined with those of the U.S. embassy in Japan, constitute about 80 percent of total U.S. government efforts to monitor and enforce the Japanese insurance agreements. According to USTR, these two agencies confer on the agreements almost daily. USTR also estimated that the Commerce Department contributes about an additional 10 percent of U.S. government monitoring and enforcement efforts and reported that the Treasury Department's role is limited. According to our survey, U.S. insurance companies in Japan have communicated most frequently with staff from the U.S. embassy in Tokyo and USTR regarding the agreements. According to our survey, six U.S. insurers, which account for over 80 percent of all U.S. premiums generated in Japan, believed that USTR does not have sufficient resources (personnel, funding, and so on) to monitor and enforce the insurance agreements. USTR officials reported that two Japan Office employees have worked part-time on insurance and that more people are needed to work on insurance and other U.S.-Japan trade issues. Moreover, a 1998 USTR document noted that the U.S. Trade Representative spent more time on Japan insurance during much of 1998 than on any other single issue. USTR has reported that coordination among U.S. government agencies to monitor the insurance agreements takes place about every 2 months and becomes more frequent prior to consultations with Japan. Meetings are called as needed rather than being regularly scheduled in advance, a circumstance that USTR officials view as typical for the agency. According to a USTR official, there are no minutes or records of decisions for these meetings. Typically, the Deputy Assistant U.S. Trade Representative for Japan notifies about a dozen other U.S. government officials of meetings on insurance. 
An exception to the usual working-level nature of the process occurred in the spring and summer of 1998. Spurred by congressional interest, the process was elevated to a more senior level, and more agencies participated during two interagency reviews of the activities of one U.S. insurance company and its Japanese partner. One of these reviews reached the Cabinet level. USTR officials have stated that it is difficult to get all agency representatives to consistently attend meetings because these agencies' offices have to focus on too many other issues to spend much time on the U.S.-Japan insurance issue. One USTR official noted that, as a result of budget pressures and declining staff levels, agencies choose to focus on issues where they have the lead. In addition, a lack of personnel with technical insurance industry knowledge and frequent personnel turnover in certain agencies limit the understanding of issues among the interagency participants. (Insurance is not regulated at the federal level in the United States.) Decisions typically depend on consensus among those participating, rather than on formal clearance with each official on the meeting notification list. For monitoring and enforcing the agreements, USTR and the U.S. embassy in Tokyo rely primarily on information provided by U.S. insurance companies and industry groups, as well as on information collected by officials at the U.S. embassy in Tokyo from Japanese sources. For example, USTR relied heavily on information provided by the U.S. insurance industry in Japan to make its July 1, 1998, decision that the Japanese government had not met key primary sector deregulation criteria stipulated in the 1996 agreement. (See app. III for further information on USTR's key monitoring and enforcement decisions.) USTR officials report that while neither USTR nor the U.S. 
government in general possesses the resources or technical capabilities to independently investigate or verify this type of information, the agency does make an effort to do so by consulting with experts and industry analysts. USTR officials and an economic officer at the U.S. embassy in Tokyo report that one large U.S. insurance provider is a key source of information on the Japanese insurance market. This company provides the U.S. government with information on the insurance industry and identifies and provides details on problems with the agreements' implementation. The embassy official speaks with representatives from this provider several times a week. According to USTR, without this company's assistance, much of what the U.S. government has accomplished in encouraging deregulation of the Japanese insurance market would not have been possible. USTR and embassy officials also gather information from several other U.S. insurance companies; the embassy official speaks with representatives from these companies about once a week or once every few weeks in order to obtain as complete a perspective as possible on various issues. In addition to using information from individual companies, USTR relies on several industry groups to identify and explain insurance issues. These groups include the American Chamber of Commerce in Japan (ACCJ), the American Council of Life Insurance (ACLI), the Coalition of Service Industries, the International Insurance Council, and the Foreign Non-Life Insurance Association (which is located in Japan). Company participation in these groups varies, and no one group has a membership that includes all U.S. participants in the Japanese insurance market. Some U.S. insurance companies have noted that even the associations to which they belong do not always capture their views on insurance issues. However, USTR officials maintain that they solicit competing viewpoints in cases where companies are in disagreement.
Further, insurance experts at the state level from the National Association of Insurance Commissioners have joined USTR in meetings with Japan on insurance issues. For example, the association hosted working-level consultations between the U.S. and Japanese governments in April 1999. The support of these technical experts helped create a dialogue between U.S. and Japanese regulators on new ways to ease the product approval process. As part of its monitoring efforts, the U.S. government has reported that Japan has made some progress in deregulating the primary insurance sector. According to a recent U.S. embassy document on Japan's insurance reforms, there is evidence that deregulation has been taking hold, with new entrants into the life and non-life primary sectors, stronger linkages between foreign and Japanese firms, and examples of product and price competition. However, in July 1998 (and again in April 1999) USTR reviewed the state of implementation and determined that Japan has not implemented certain deregulation actions called for in the 1996 agreement. Specifically, USTR stated that the Japanese government has not fully implemented its obligations regarding the reform of rating organizations that have historically established prices for major non-life insurance products, and regarding the timely processing of new product and rate applications. As a result, USTR does not support initiation of the 2.5-year countdown toward opening the third sector. In addition, USTR said that Japan violated third sector protections by licensing a cancer insurance product to a large Japanese insurance company. (For more information on how the U.S. government reached these conclusions, see app. III.) The Japanese government has stated that it has fully implemented both agreements, including all deregulation actions. Therefore, on July 1, 1998, Japan began its countdown of the 2.5-year period before opening the third sector to increased competition. 
Further, Japan reports that the approval of the cancer insurance product under dispute is not an agreement violation but conforms to limitations negotiated by Japan and the United States. More U.S. insurance companies expressed favorable views of U.S. government actions to monitor the insurance agreements than reported favorable views of enforcement efforts. As shown in figure 1, 7 of the 13 U.S. insurance companies operating in Japan, accounting for about 50 percent of U.S. premiums, reported that, overall, the U.S. government had been effective or very effective in monitoring the agreements. Four companies, representing 13 percent of U.S. premiums in Japan, believed that the U.S. government had been effective in enforcing the agreements. Four companies reported that U.S. government monitoring efforts had been as effective as ineffective. Five companies provided this neutral response regarding enforcement efforts. Most companies expressed satisfaction with U.S. government efforts concerning the insurance agreements, particularly in situations involving U.S. government interaction with U.S. industry. For example, nine companies reported that the U.S. government had sought input from industry on the status of agreement implementation to a great or very great extent. Further, seven companies stated that the U.S. government had given thorough consideration to implementation issues identified by industry to a great or very great extent. Ten companies reported that the U.S. government had represented the U.S. insurance industry in Japan generally or very adequately. Companies providing these responses represented around 40-50 percent of U.S. premiums in Japan. However, U.S. insurance companies that account for a large percentage of U.S. premiums in Japan expressed a lower level of satisfaction with other aspects of U.S. government monitoring and enforcement efforts, specifically in terms of timeliness, accuracy of information, and consistency of government policy. 
Six companies, which accounted for over 80 percent of U.S. premiums in Japan, reported that the U.S. government had not acted upon agreement implementation concerns in a timely manner. Further, seven companies, which also accounted for over 80 percent of U.S. premiums in Japan, reported that the information provided to them by the U.S. government on implementation had not been clear and accurate. Finally, five companies, accounting for almost 90 percent of U.S. premiums in Japan, reported that U.S. government policy actions regarding the agreements had not been consistent over time. The largest U.S. insurance company in Japan expressed strong dissatisfaction with a U.S. government decision that Japan's failure to prevent certain activities of a competing firm did not violate a third sector restriction in the 1996 agreement. We obtained oral comments on a draft of this report from officials from USTR, including the General Counsel and staff from the Japan Office. USTR declined the opportunity to provide an overall assessment of the report. USTR and an official at the U.S. embassy in Tokyo provided several technical comments, which we incorporated into the report as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees and to Ambassador Charlene Barshefsky, the U.S. Trade Representative; the Honorable William M. Daley, Secretary of Commerce; the Honorable Madeleine K. Albright, Secretary of State; the Honorable Lawrence H. Summers, Secretary of the Treasury; the Honorable Janet Reno, Attorney General; the Honorable Lynn Bragg, Chairman of the International Trade Commission; the Honorable Jacob Lew, Director, Office of Management and Budget; and to the firms we contacted in preparing this report. 
Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix VI. We distributed a questionnaire to 13 insurance companies (5 life and 8 non-life companies) and three insurance brokers operating in Japan that are either wholly or majority U.S. owned. We obtained a 100-percent response rate to the questionnaire. The questionnaire contains four sections: (1) implementation/impact of the 1994 U.S.-Japan insurance agreement, (2) implementation/impact of the 1996 U.S.-Japan insurance agreement, (3) the combined implementation/impact of the 1994 and 1996 agreements, and (4) monitoring and enforcement of the agreements. We also administered a supplemental questionnaire that was distributed only to the 13 companies. The supplemental questionnaire asked for detailed information concerning applications companies had submitted. Some of the questions in the questionnaire only applied to non-life companies, while others only applied to life companies, and these questions are noted in the attached questionnaire. Also, brokers were asked fewer questions than the companies because some of the commitments in the agreements did not pertain to them. For each question in the following questionnaire and supplemental questionnaire, we have displayed the company responses. The broker responses are displayed in parentheses next to company responses. This appendix presents a discussion of the results of the company questionnaires (see app. I) on the 1994 and 1996 U.S.-Japan insurance agreements. The following discussion is structured differently from our discussion of this topic in the main body of the report. The discussion in the main body of the report is structured around issues, such as implementation, impact, and concerns related to the agreements. This discussion is structured to follow the order of the questionnaire. 
We first discuss company responses to questions on implementation and impact of the 1994 agreement, then follow with a discussion of company responses to the 1996 agreement. We end the discussion with company views on the future impact of the agreements as well as company experiences in sales and market share over the last few years. In our discussion of company responses to questions on the 1994 agreement, where appropriate, we compare responses in our current survey to those responses to a 1996 survey we conducted on the 1994 agreement. Eleven of the 13 companies and two of the three brokers included in our current survey also responded to our 1996 survey. While our current survey section on the 1994 agreement covers the same major issues we covered in our 1996 survey, we did not ask as many detailed questions about the agreement as we did in our prior survey nor did we ask about certain commitments that had clearly been implemented prior to the creation of our 1999 survey. Finally, our discussion of survey results is supplemented with information obtained during interviews with U.S. insurance companies in Japan. In our 1999 survey, 8 of the 13 companies, representing about 90 percent of the premiums generated by U.S. companies in Japan, and two of the three brokers reported that the 1994 agreement had enhanced their ability to compete in Japan. This represents a positive change from our 1996 survey, when most companies reported that Japan had implemented the 1994 agreement to varying degrees, but the agreement had no effect on their ability to compete. However, companies reported concerns over Japan's implementation of specific commitments under the agreement. The 1994 agreement included commitments by Japan to increase transparency, deregulation, competition, and access to insurance programs of government corporations, while protecting foreign companies' shares in the third sector. 
Table 1 lists selected key commitments by the Japanese government under the 1994 agreement. Company views on the extent to which Japan has implemented these commitments and their impact follow the table. Our analysis of questionnaire responses indicates that most of the commitments to improve transparency and procedural protections have been met. Most companies (10 of 13) reported that they had been given meaningful and fair opportunities to share their views with Japanese officials regarding insurance laws, ordinances, and/or regulations. One company official indicated that the Financial Supervisory Agency (FSA), which assumed regulatory authority over product approval from the Ministry of Finance (MOF), encouraged greater dialogue with companies and appeared to value and respect diverse opinions. Further, 10 companies reported that they had received equal treatment in insurance industry groups. Also, 10 companies reported that the Japanese government had not required their company to coordinate its applications with other insurance providers (which may be potential competitors) and had not conditioned or delayed acceptance of their applications based on whether they had consulted with other insurance providers. Several companies, however, expressed concern over the Japanese government's commitment to publish and/or make publicly available licensing, product, and rate approval standards. Almost half of the companies (6 of 13) reported that the Japanese government had done little or nothing to meet this commitment. This result is very similar to what companies reported to us during our 1996 survey. Officials from two companies told us that the MOF and FSA were reluctant to put anything in writing with respect to approval standards. An official from another company told us that it was difficult to develop products because the rules of the product approval process were unclear. 
With respect to Japan's commitment to encourage Japanese advisory groups to allow foreign companies to attend group meetings when these groups are asked to provide recommendations related to insurance, four U.S. companies reported that they had attended only a few of these meetings, while another two U.S. companies reported that they had not attended any meetings. Officials from two other companies told us that the most effective way to communicate with the Japanese government was through industry associations, such as the Life Insurance Association of Japan and the Foreign Non-Life Insurance Association of Japan, rather than individually. Overall, 7 of the 13 companies, representing about 90 percent of the premiums of U.S. companies in Japan, reported that the Japanese government's actions to improve transparency and procedural protections had no effect on their ability to compete in Japan. Three companies reported that these actions had a generally positive effect. Our analysis of questionnaire responses indicates that the Japanese government has implemented many of the specific deregulatory commitments in the 1994 agreement. Four of six non-life companies reported that the Japanese government had, to a moderate or great extent, expanded the types of non-life products to which flexible rates could be applied. Eight companies submitted applications using data collected outside of Japan and were allowed to use this data. This represents twice the number of companies that reported using outside data in our 1996 survey. However, over half of the companies (7 of 13), representing about 45 percent of the premiums, reported that generally the government had done little to expedite and simplify the application review process. This result is very similar to what companies reported to us during our 1996 survey. 
Concerning Japan's implementation of its commitment to establish a brokerage system, two of the three brokers reported that the Japanese government's decision to recognize and license brokers had enhanced their ability to compete in Japan. However, all brokers told us that they continued to face obstacles in Japan, including a lack of price and product differentiation, restrictions on the types of products they can offer, and restrictions on the structure of their business operations. In terms of the impact of brokers on insurance companies, two companies reported that the establishment of a brokerage system had a generally positive effect on their ability to compete in Japan, while seven companies reported that this system had no effect. Overall, 9 of the 13 companies, representing about 45 percent of premiums, reported that the Japanese government's implementation of its 1994 deregulatory commitments had a positive effect on their ability to compete in Japan. Eight companies reported that the Japanese government's implementation of its deregulatory commitments had enhanced their abilities to differentiate product rates and forms. Also, five companies reported that the implementation of deregulation commitments had increased companies' abilities to distribute insurance products. These results represent a positive change over our prior survey, when most companies reported that Japan's actions had done little to enhance their abilities to differentiate product rates and forms and distribute insurance products. Six companies, representing about 80 percent of U.S. premiums, reported that the Japanese government had not taken sufficient action to avoid “radical change” in the third sector (that is, had not prevented large Japanese companies from entering into the third sector). Two U.S. insurers believed that radical change had occurred because two Japanese companies, Yasuda and Tokyo-Anshin, were operating in the third sector in a manner the U.S. 
insurers believed violated both agreements. Two companies, representing about 45 percent of U.S. premiums, reported that Japan had not taken sufficient action to avoid radical change in the third sector and that this inaction had a generally negative effect on their ability to compete in Japan. One company stated that Japan had taken sufficient action to avoid radical change and that this action had a very negative impact on its ability to compete. Five companies, representing about 40 percent of premiums, reported that Japan's efforts to avoid radical change had a generally positive effect. The insurance programs of government corporations are large and profitable, according to officials from two U.S. insurance companies. However, most companies reported that the insurance programs of these corporations are not fully available to them. Seven companies reported that for the most part, these corporations had not allocated shares of premiums using fair, transparent, nondiscriminatory, and competitive criteria, as required by the 1994 agreement. This result is very similar to our last survey. In our current survey, one company official stated that the formula used by the Housing Loan Corporation (the only government corporation that has disclosed its formula for allocating shares to insurance companies) to allocate premiums gave less than 5 percent of the shares to foreign companies. Furthermore, according to this company official, this government corporation gave the entire foreign share to one large U.S. company, with the expectation that the company would share the premiums with other foreign companies through reinsurance agreements. Many companies reported that Japan had not taken sufficient action to promote competition in the insurance market. Five of the 13 companies, representing about 70 percent of U.S. premiums, reported that the Japanese government had not vigorously enforced the Anti-Monopoly Act in the insurance sector. 
Eight companies and all three brokers reported that keiretsu practices and case agents still adversely affected them to a moderate or greater extent. Officials from two companies indicated that Japanese companies would usually not buy insurance outside of their keiretsu. However, officials from two companies and one broker believed that keiretsu groups would weaken over time. Overall, 9 of the 13 companies, representing about 90 percent of the premiums, reported that Japan's efforts to improve competition by taking antitrust actions had no effect on their ability to compete in Japan. This result is very similar to what companies reported to us in our 1996 survey. In our 1999 survey, 9 of the 13 companies, representing around 50 percent of U.S. premiums, and two of the three brokers reported that the 1996 agreement had a positive effect on their ability to compete in Japan. Companies reported that while Japan had implemented many of the commitments, some had not been fully met. The 1996 agreement listed several deregulation commitments for the primary sector. In addition, the agreement listed other commitments that restrict entry into the third sector by subsidiaries of large Japanese companies. The agreement clarified when these restrictions could be lifted by explicitly linking them to the implementation of five primary sector deregulation commitments. The agreement states that these restrictions will be lifted 2.5 years after the five primary sector commitments have been implemented. Under the 1996 agreement, the Japanese government also made a commitment to take steps to increase the number of staff who process insurance applications. Table 2 lists selected key commitments by the Japanese government under the 1996 agreement. Company views on the extent to which Japan has implemented these commitments and their impact follow the table. 
Our analysis of questionnaire responses indicates that for the most part Japan has implemented its deregulatory commitments and these commitments are having a positive effect. For example, the three non-life companies who submitted applications to offer automobile insurance through the direct response system (for example, marketing over the telephone) reported that these applications have been approved. One of these three companies reported that this method of offering insurance had a very positive effect on its ability to compete in Japan, while the other two companies reported no effect. An official from another company noted that the approval of direct marketing of automobile insurance should help toward gaining the approval of direct marketing for other insurance products. Of the five primary sector deregulatory commitments that serve as criteria for lifting restrictions on the entry into the third sector by subsidiaries of large Japanese companies, four of these apply only to non-life companies. All of the non-life companies expressing an opinion reported that Japan had implemented three of these four commitments (that is, approval of differentiated automobile insurance applications, further liberalization of commercial fire insurance, and elimination of the obligation to use rating organization rates). One non-life insurer reported that Japan's commitment to expand the notification system had not been implemented, while all other non-life insurers reported that this commitment had been met. These eight non-life companies had mixed views on the extent to which these deregulatory actions affected their ability to compete in Japan. One of the three non-life insurers that had obtained approval to offer differentiated automobile insurance reported that this had a very positive effect on its ability to compete in Japan. 
Two non-life companies viewed the liberalization of commercial fire rates as generally positive, with one company official indicating that the liberalization was producing discounts of up to 30 percent. However, four of the six non-life companies that offered commercial fire insurance reported that this liberalization had no effect on their company's ability to compete in Japan. Officials from two companies stated that the threshold—the minimum insured amount above which flexible rates could be applied—was still too high. An official from one of these companies stated that the keiretsu ties controlled which insurer provided commercial fire insurance for large corporations. Four of the six non-life companies that offered products under the notification system viewed the system as having no effect on their ability to compete in Japan, while two companies viewed the system as having a positive effect. Three companies reported that Japan's reform of rating organizations had a generally positive effect on their ability to compete, while four reported that Japan's effort had no effect or a generally negative effect. One company reported that the elimination of the obligation to use rating organization rates gave it greater discretion over setting premium rates. Another official indicated that his company left the rating organization because it was no longer required to be a member. The fifth commitment that serves as a criterion for lifting restrictions in the third sector applies to all insurers. This commitment requires that applications for differentiated products or rates be approved within the standard 90-day processing period. Seven of the 13 companies, representing about 60 percent of U.S. premiums, reported that Japan had implemented this commitment. However, five companies, representing about one-third of U.S. premiums, reported that Japan had not met this commitment. 
About half the companies (6 of 13) reported that Japan's approval of applications for differentiated products or rates within the standard 90-day processing period had a positive effect on their ability to compete in Japan. We asked companies to provide us with information on the number of applications they had submitted since the 1996 agreement was signed. Companies reported that 422 of the 466 applications they had submitted had been approved and 44 were still pending. No companies reported that any applications had been rejected. Companies also reported that 21 of the 422 approved applications, or 5 percent, were approved more than 90 days after submission, as shown in figure 2. This does not necessarily mean that the Japanese government was not in compliance with the standard 90-day processing period, because the FSA may suspend the 90-day period under some circumstances. The 21 applications that took longer than 90 days to approve were submitted by three companies. Fifteen of the 21 were applications to sell new-to-market products or to sell through a new distribution channel, as shown in figure 3. The remaining 6 of the 21 were applications to revise company-exclusive product forms or rates. The applications that have been approved to sell standard products or to revise standard products or rates were all approved within 90 days. In summary, regarding the five commitments that serve as criteria for lifting third sector restrictions, five companies, representing about one-third of U.S. premiums, reported that Japan had not complied with the commitment to approve applications for differentiated products within a 90-day period. One of these companies also reported that Japan had not complied with its commitment to expand the notification system. 
In addition to asking companies to report on the effect of the individual deregulatory commitments, we also asked companies to report on the overall effect of deregulatory actions taken by Japan on their ability to compete. Seven of the 13 companies, representing about 45 percent of U.S. premiums, and two of the three brokers reported that the Japanese government's implementation of its deregulatory commitments under the 1996 agreement had enhanced their ability to compete in Japan. Four of the companies and one broker reported that the Japanese government's implementation of its deregulatory commitments had no effect, while one company reported that the Japanese government's implementation of these commitments had a generally negative effect. In the non-life area of the third sector, restrictions on sales by Japanese subsidiaries were set forth in the agreements primarily to protect the existing sales networks of foreign insurers for personal accident insurance. Five of the eight non-life companies expressing an opinion reported that Japan had met most of these commitments. However, no company expressing an opinion reported that Japan had prohibited the sales of personal accident insurance to association members. Overall, three of the non-life companies, representing a majority of the non-life premiums, reported that Japan's implementation of restrictions on sales by Japanese subsidiaries had a generally positive effect. In the life area of the third sector, Japan committed to prevent Japanese subsidiaries from selling stand-alone medical and stand-alone cancer insurance, but allowed for the sale of these products as riders to an underlying base policy if the rider-to-base-policy ratio was within prescribed limits. Two U.S. life insurance companies reported that Japan had not prevented Japanese subsidiaries from selling stand-alone medical and stand-alone cancer insurance. 
One of these companies reported that the Japanese government had failed to prevent Yasuda, a large Japanese company, from selling stand-alone cancer insurance through its relationship with INA Himawari. The other company reported that another Japanese insurer, Tokyo-Anshin, was effectively selling stand-alone cancer insurance even though the company offers it as a rider to a base life insurance policy. These two companies reported that the Japanese government's inability to prevent Japanese companies from selling stand-alone cancer insurance had a negative effect on their ability to compete in Japan. The Japanese government committed under the 1996 agreement to take steps to increase the number of staff who process insurance applications. Ten of the 13 companies reported that Japan had decreased the level of staff responsible for insurance product approval, while the remaining three companies reported that Japan had maintained the same level of staffing. An FSA official told us that the agency had nine individuals responsible for processing insurance applications. Officials from seven companies told us that this staffing level was too small to handle the volume of insurance applications. Five company officials told us that they had difficulty in arranging a meeting with the FSA, and two of these officials indicated that once they had secured a meeting, they were given little time to discuss their applications with agency officials. One company official believed his company could only submit applications twice a year because of the FSA's staffing level. Two company officials expressed concern over the ability of the FSA to meet the standard 90-day period for product approval, given the expected increases in the volume of applications. In soliciting company views on the future effects of the agreements, we chose a 2- and 5-year time period to obtain company views both before and after Japan intends to lift the third sector restrictions in January 2001. 
Eleven of the 13 companies, representing about 50 percent of U.S. premiums, and one of the three brokers reported that over the next 2 years, the agreements would have a very or generally positive effect on their ability to compete in Japan, as shown in figure 4. Two companies told us that they reported positively because of Japan's commitment to restrict the entry by large Japanese companies into the third sector over the next 2 years. However, over the next 5 years, a smaller number of companies reported a positive outcome, as 7 of the 13 companies, representing about 25 percent of U.S. premiums, reported that the agreements would have a positive effect. Brokers were more positive over the next 5 years, as all three reported that the agreements would have a positive effect over this time period. Two companies told us that once the third sector was opened to large Japanese companies, their third sector business would suffer. Most of the U.S. insurance companies with sales in Japan in fiscal year 1997 or earlier reported that their sales and market shares in the primary and third sectors had increased since the 1994 agreement was signed. Specifically, eight companies realized increases in their primary sector sales, and six realized increases in primary sector market share, as shown in figure 5. Two of the eight companies that reported increases in primary sector sales attributed the increases to actions taken by Japan under the agreements. In the third sector, eight companies realized increases in third sector sales, and six realized increases in market share, as shown in figure 6. Five of the eight companies that reported increases in third sector sales attributed the increases to actions taken by Japan under the agreements. USTR is the lead U.S. trade agency, with primary responsibility for monitoring and enforcing the U.S.-Japan insurance agreements. 
This appendix reports on the process and information USTR used in reaching key decisions regarding Japan's implementation of the agreements, as well as current U.S. government and industry positions on outstanding issues. Two of these decisions were reached on July 1, 1998, and decisions to drop or raise certain issues in the third sector have since been made. In some instances, Japanese, foreign, and U.S. industry groups and U.S. companies have expressed opinions that run counter to USTR's current position on specific implementation issues. After consulting with industry sources, USTR released an assessment on July 1, 1998, of Japan's implementation of five key primary sector deregulation measures contained in the 1996 agreement. USTR stated that while Japan had met three of these measures, it had failed to fully implement the two remaining commitments. USTR had identified problems in two areas: (1) unjustified delays in approving applications for differentiated products and rates within the standard processing period of 90 days and (2) inadequate reform of rating organizations. Therefore, USTR announced that it did not support initiating a 2.5-year countdown to open the third sector in 2001. In contrast, Japanese officials have stated that Japan has fully implemented the five deregulation measures, and on July 1, 1998, Japan initiated the 2.5-year countdown. Also, on July 1, 1998, USTR notified Japan that by allowing a Japanese insurance company (Tokyo-Anshin) to sell a cancer insurance product, Japan had circumvented the 1994 and 1996 agreements' terms that effectively reserved the third sector market for foreign and small and medium-sized Japanese firms. Japan responded that the agreement permits this particular cancer insurance product to be sold since it conforms to limitations negotiated by Japan and the United States. 
USTR has also reviewed other possible third sector violations, in one case determining that there was no violation, and in another, choosing to raise the issue with Japan. USTR has not revised its July 1998 assessment of Japan's compliance with the insurance agreements. In an April 1999 meeting with Japanese officials, USTR repeated its position that Japan has not complied with two outstanding deregulatory requirements (90-day product approval and rating organization reform). Additionally, USTR said that Japan continues to allow the ongoing violation of the third sector provisions of the agreements. USTR has not undertaken any formal legal actions concerning the agreements, but the U.S. Trade Representative has noted that the United States can take action against Japan through World Trade Organization (WTO) dispute settlement procedures, if necessary, to secure U.S. rights under the insurance agreements. These actions are possible now that Japan has included many of its insurance commitments in the recently implemented WTO financial services agreement. To reach its July 1998 decision that Japan had not fully complied with all the five deregulation criteria, USTR relied on information it solicited from industry, both in the United States and in Japan, as well as information gathered by the U.S. embassy in Tokyo. The embassy works closely with some U.S. companies in its data collection. However, some firms are not in contact with the U.S. embassy. In addition, USTR consulted with other agencies. The decision was preceded by a series of bilateral consultations between the governments to review Japan's implementation of the five commitments. One large U.S. firm in Japan provided key information to USTR about Japan's implementation of the primary sector deregulation criteria and possible third sector violations. In addition to soliciting the concerns of individual U.S. 
insurance companies, USTR also received information from two industry groups: the American Chamber of Commerce in Japan (ACCJ) and the American Council of Life Insurance (ACLI). In May 1998, the ACCJ insurance subcommittee informed USTR that it believed Japan was not in compliance with the two primary sector deregulation criteria previously mentioned, a position supported by eight members of the subcommittee and opposed by one. (Four U.S. firms were not participants in the May 1998 ACCJ decision.) In April 1998, ACLI provided an analysis of Japan's implementation, which voiced positions similar to those of the ACCJ and expressed additional concerns that Yasuda, a Japanese insurer, and INA, its U.S. partner, were causing “radical change” in the third sector. USTR considers such industry information critical for identifying private sector concerns. However, USTR recognizes that there are certain limitations associated with relying on information from industry associations. No one trade association represents all U.S. insurance companies, and for those represented, association positions may not capture all company views on agreement implementation. Groups such as ACCJ do not encompass all company views, as some companies do not belong or do not actively participate. Participating companies reported that there are divisions among ACCJ insurance subcommittee members and cited instances where ACCJ position papers have not reflected their company's views. Of five primary sector deregulation criteria in the 1996 agreement, USTR concluded on July 1, 1998, that Japan has implemented three of them. USTR found that Japan has not complied with the criterion to approve differentiated products within 90 days and that fundamental reform of rating organizations was incomplete. Our fieldwork conducted in Tokyo in March 1999 found that U.S. insurance companies had mixed views regarding Japan's implementation of the five criteria. 
Four of the five primary sector deregulation criteria apply only to products of non-life insurers. USTR has stated that Japan has implemented three of these four insurance deregulation criteria. These were requirements to (1) approve applications within 90 days for “differentiated” auto insurance, which allows the insurer the flexibility to develop, price, and market automobile insurance based on risk factors, such as the age, gender, and driving history of the driver and the use and type of vehicle; (2) further liberalize commercial fire insurance by decreasing limits for using an “advisory rate system,” which gives insurers the freedom to set rates outside the rates established by the Property Casualty Insurance Rating Organization; and (3) expand to a list of additional products the application of Japan's “notification system” (whereby an insurance company, after filing its product plan with the regulatory authority, can begin to market an insurance product after 90 days unless disapproved by the government) and allow marketing of those products within 90 days. According to U.S. government officials, USTR's assessment of compliance was based on the insurance industry's views. The 1996 agreement requires that Japan approve applications for differentiated life and non-life products or rates within a standard processing period of 90 days. In 1998, one U.S. company raised concerns with USTR that Japan was not in compliance with this requirement. On July 1, 1998, USTR determined that Japan had not fully implemented its obligations in this area and noted that in a number of specific cases, Japan had “unjustifiably exceeded the standard 90-day processing period.” According to USTR, the criterion's reference to a “standard 90-day processing period” recognizes that the period can be exceeded in specific circumstances.
In reaching its July 1998 decision, USTR on numerous occasions sought examples from industry of applications whose processing exceeded 90 days so it could raise this issue with Japan. One U.S. firm provided USTR with time lines for four applications whose processing time exceeded 90 days; USTR told us that it had never examined the actual applications. Based on the information provided by this firm, USTR believed that these applications were unacceptably delayed by the Japanese government. Following June 1998 consultations with USTR, Japan responded that, per the terms of the 1996 agreement, no applications for differentiated products (other than differentiated automobile insurance) had been received 90 days prior to the July 1, 1998, deadline and thus the commitment was considered met. USTR rejected this reasoning as a misinterpretation of the agreement. In addition, Japan consistently maintained that it processed applications within the standard period of 90 days. According to Japan, under its regulations, the standard 90-day period could be suspended if the agency responsible for processing applications requires a company to revise or supplement information on an application; Japan maintained that the four USTR examples had experienced delays due to such inadequacies. USTR officials acknowledged that the 90-day period can be effectively extended for this purpose but found that they were unable to respond to Japan's claims that the delays were justified, since USTR did not have permission from the insurance provider to discuss application details. USTR officials told us that they do not possess the technical ability to evaluate the applications' content. Before the April 1999 consultations with Japan, about four companies reported to USTR that they had had recent good experiences with Japanese product approval; among those companies were two that had previously complained about the application process.
USTR officials were not convinced that these experiences represented a systemic improvement. In April 1999, USTR again cited Japan for continued failure to fully implement the 90-day processing period requirement, offering several new examples of applications whose processing exceeded 90 days, supplied by the company that had provided cases to USTR for the July 1998 decision. As before, USTR reviewed the time lines with the company but not the actual applications. Japan responded that the approval of applications in excess of 90 days is permitted under Japanese regulations. For these cases, Japan maintained that the applications were delayed due to sloppiness and errors. According to USTR officials, USTR was not given permission by the company to reveal its identity to the Japanese, and thus USTR was unable to engage in detailed discussions with Japan regarding suspension of the 90-day period and whether the suspensions were justified in these cases. Also related to the product approval process, several industry participants that we interviewed in March 1999 reported that the transfer of product approval authority from MOF to the newly created FSA resulted in a reduction in the number of insurance product examiners. This, in turn, resulted in an understaffed office with overworked employees who, according to U.S. insurers, may be unable to process applications in a timely fashion. The Foreign Non-Life Insurance Association reported that while its members have not complained about Japan's failure to meet the 90-day commitment, they have faced difficulties in meeting with FSA officials to submit applications. The FSA agreed that it has few insurance staff but noted that this staff would increase from 9 to 11 under the fiscal year 1999 government budget.
One of the primary sector deregulation criteria in the 1996 insurance agreement that applies only to non-life products required Japan to implement “the necessary legal changes to eliminate obligations for members of rating organizations to use rates calculated by rating organizations.” There are two rating organizations in Japan that non-life insurance companies may belong to—one for auto insurance, the other for additional types of property/casualty insurance. Historically, rating organizations collected claims and expense data from member firms and computed premium rates that were approved by the government. Rating organization members were required to use the approved rate, unless the Minister of Finance approved a deviation based on the firm's circumstance. The result was considerable uniformity in insurance policies and rates for major non-life insurance products. At the time of U.S.-Japan consultations in June 1998, the necessary legal changes to meet the deregulation criterion were pending. While all necessary legal reforms were made by July 1, and the U.S. government was aware that rating organization members were no longer required to use rating organization rates, USTR concluded in its July 1, 1998, statement that “fundamental reform” of rating organizations was incomplete. USTR stated that certain aspects of rating organization reform, such as the continued collection of expense data and the collection of data for additional insurance products, promote anticompetitive activities among companies and therefore the rating organization criterion has not been met. The two specific issues raised by USTR are not mentioned in the bilateral agreements. USTR most recently raised its concerns about Japan's incomplete compliance in April 1999 meetings with Japanese officials. Japanese officials responded that the 1996 agreement only required that the use of rating organization rates not be mandatory, a commitment that has been met. 
One of USTR's outstanding concerns about Japanese rating organizations involves the scope of cost data that such organizations can collect from member firms. Specifically, USTR opposes the continued collection of expense data from member firms, believing it limits competition and promotes price uniformity. As part of rating organization reform that took effect in July 1998, the Japan Fair Trade Commission imposed restrictions on what kind of expense data the rating organizations could collect from member firms on a voluntary basis. This restriction was to ensure that the full rate, which member firms had previously been required to use in establishing company rates, could no longer be computed by these firms. However, according to USTR, the collection of partial expense data on a voluntary basis would still enable firms to set prices in a way that would lead to cartel-like, or uniform, practices. Several U.S. insurance providers that we interviewed in March 1999 agreed with USTR's overall position that fundamental reform has not yet occurred. However, both Japanese rating organizations, as well as the Japanese government and the Foreign Non-Life Insurance Association, reported to us that now, after the reforms, the rating organizations can only collect partial expense data on a voluntary basis and, therefore, the data held by these organizations is incomplete and does not provide a basis to establish an industry rate. One rating organization reported that since the July 1998 reforms, it now collects one-tenth of the data it formerly collected, while the other organization said it is unsure of the accuracy or value of the expense data, since the data is incomplete in scope. Further, since all companies choose whether or not to participate in the system, the completeness of the data cannot be assumed. The Foreign Non-Life Insurance Association questioned the statistical validity of the data because not all firms participate and less data is collected. One large U.S. 
non-life company characterized the collected data as “useless.” Further, Japanese officials have stated that rating organizations in the United States collect and publish complete expense data from companies and do so for more product lines. Finally, one U.S. firm told us that it welcomed the potential for its competitors to price uniformly, since it could price beneath the uniform price and gain market share. Also, the Foreign Non-Life Insurance Association noted that small firms need data, including expense data, to function, since their sales volume is not large enough to be a statistically sound sample from which to forecast costs and derive rates. Another issue raised by USTR about rating organization reform concerned the scope of business the rating organizations covered. Specifically, USTR opposes the expansion of rating organization authority to collect data for additional products such as nursing care and medical insurance. USTR views such expansion as being inconsistent with Japan's objective of achieving fundamental reform. ACCJ and the Foreign Non-Life Insurance Association had expressed concern over this expansion prior to the July 1, 1998, announcement. However, the Foreign Non-Life Insurance Association reversed its position before July 1, 1998, and now supports this expansion of data collection. One U.S. insurance company we interviewed said that it would like rating organizations to expand the number of product lines for which they collect data. According to one Japanese rating organization, the collection of data serves to encourage new entrants and promote competition, a position agreed to by the Insurance Services Office, a U.S. supplier of insurance information. The Japanese rating organization further suggested that data for more product lines are available in the United States. In response to our January 1999 survey and March 1999 fieldwork, U.S. 
companies offered mixed views regarding implementation of the five primary sector deregulation criteria. In interviews with our staff in Tokyo during March 1999, representatives of four insurance providers voiced their support for USTR's position on primary sector deregulation. One company attributed recent Japanese progress in deregulating the insurance market to USTR's aggressively pushing insurance issues. Representatives of five other providers volunteered in interviews that Japan had complied with the agreements' deregulation commitments. One company said that U.S. criticism of Japan's insurance reform efforts can undermine the efforts of Japanese officials pushing for broad financial sector deregulation. USTR has contended that Japan is violating the third sector protections of the 1996 insurance agreement. On July 1, 1998, USTR stated its concerns with Japan's licensing of a cancer hospitalization insurance rider to Tokyo-Anshin, the life subsidiary of a large Japanese non-life insurance company. The 1996 agreement stated that life subsidiaries of non-life insurance providers will not be allowed to sell stand-alone cancer insurance. Japanese subsidiaries may sell cancer insurance as a rider to a life insurance policy provided that cancer benefit payments are limited to a specific percentage of life insurance benefit payments, as set forth in a September 1996 memorandum between the two governments. USTR's analysis concluded that, based on the Tokyo-Anshin insurance policy's design and marketing, the rider was clearly intended to circumvent third sector protections. According to USTR, the rider was essentially a “stand-alone” product, equivalent to cancer policies prohibited for sale by Japanese life subsidiaries under the 1996 agreement. USTR first raised the issue in January 1998 after two U.S. companies raised concerns about the rider.
The government of Japan responded that the rider conformed exactly to the limitations established in the September 1996 memorandum with USTR that defined permitted cancer riders. According to Japan, because this cancer rider is only sold in conjunction with a life insurance policy, it cannot be considered a “stand-alone” product. USTR took its July 1998 position on the basis of information provided by one U.S. life insurance company and the ACCJ insurance subcommittee. However, according to the two other U.S. life firms selling cancer insurance interviewed by us in March 1999, the Tokyo-Anshin rider is in compliance with the agreement and is not a third sector violation. Of these two companies, the one that is an ACCJ member chose not to oppose the position taken by the ACCJ insurance subcommittee expressing concern on this issue but thinks USTR lacks a basis to pursue the issue with Japan. USTR continues to raise this issue with Japan. While USTR has not undertaken any formal legal action, it has underscored its position that Japan not approve similar riders for other Japanese insurers. In 1997 and 1998, USTR reviewed the activities of one U.S. life insurance company, INA, and its Japanese partner, Yasuda Fire and Marine. These activities had been identified by competing U.S. insurers as a violation of the third sector provisions of the 1994 and 1996 agreements. USTR, in consultation with other U.S. agencies, determined in August 1998 that the activities were not a violation of the agreement. (See app. IV for further details.) USTR continues to review allegations of another third sector violation that was brought to its attention by industry. Specifically, during 1998, one U.S. insurance company lobbied the U.S. government to stop plans by a Japanese company to discount personal accident insurance offered to members of an association of small- and medium-sized businesses. According to the U.S.
firm and the ACCJ, the discount deviated from past practice and constituted “radical change.” USTR asked the government of Japan not to allow the introduction of the discounting prior to consultations with U.S. government officials. Japan did not agree to this approach. The U.S. government conducted a review of the U.S. company's concerns and found that only this company supported the ACCJ finding. USTR continues to raise this issue with Japan but has not determined that the sales represent a third sector violation. The U.S. government negotiated the 1994 and 1996 insurance agreements with the knowledge that a U.S. insurer, CIGNA Corporation, was considering selling a majority interest in its life insurance subsidiary in Japan (INA) to a large Japanese insurer, Yasuda Fire and Marine. Following completion of negotiation of the 1996 U.S.-Japan insurance agreement but before the agreement was signed, USTR and the Japanese government created a separate document, referred to as a “minute,” that was intended to provide a limited exception to the agreement. According to USTR negotiators, this exception, which proved difficult to negotiate, was meant to allow CIGNA, per a business agreement reached in 1993, to sell a majority interest in INA (which has third sector business) to Yasuda and then allow the Japanese-owned INA to continue to have a limited level of third sector life sales. Sales of third sector life “niche” products, such as cancer and medical insurance, by subsidiaries of large Japanese insurance companies, were expressly prohibited in the 1996 agreement. According to USTR officials, the “minute” would also prevent other large Japanese non-life insurers from similarly entering the third sector. During subsequent discussions, the two governments never reached agreement concerning the precise meaning of the “minute” and how it could be implemented. Further, views differ between USTR and two U.S.
insurance companies regarding the extent to which USTR provided details of this exception to industry at the time it was negotiated. The document's actual impact on the third sector sales of a Yasuda-owned INA has never been tested, since the majority sale has not taken place. Our observations on certain aspects of the “minute” are included at the end of this appendix. Concerns of large U.S. insurers regarding U.S. government actions related to this sale continued beyond creation of the “minute” and involved (1) U.S. government discussions with the Japanese government during the fall of 1997 that, while not opposing Japan's approval of the sale, expressed concern over whether the ongoing third sector activities of Yasuda and INA met the terms of the agreements and the “minute”; (2) USTR discussions with Japanese officials regarding Japan's December 1997 decision to include the 1996 U.S.-Japan insurance agreement in the WTO financial services agreement and what this development meant for the proposed majority sale of INA and its subsequent third sector sales; and (3) two 1998 U.S. government interagency reviews of the third sector activities of Yasuda and INA that determined that no agreement violations had occurred. The most active parties during these events have been the largest U.S. insurance companies operating in Japan (AIG, AFLAC, and CIGNA) and the Office of the U.S. Trade Representative. In response to your request for details regarding the extent and nature of U.S. government actions related to the proposed sale of INA to Yasuda and subsequent related events, we are providing the following information. In 1993, Yasuda Fire and Marine, a large Japanese non-life insurance company, purchased a 10-percent interest in INA Life Insurance Company, a subsidiary of CIGNA Corporation, a U.S. company. This deal also provided for the possibility of the future sale of an additional 50 percent of INA to Yasuda. 
In 1996, Yasuda announced its intention to acquire a majority interest in INA from CIGNA. (See fig. 7 for a time line of events from 1993 to 1996 regarding the Yasuda-INA deal.) USTR was aware of this possible majority sale of INA to Yasuda before the 1994 agreement was negotiated. The language of the 1994 agreement that committed Japan to avoiding “radical change” in the third sector by large Japanese insurers was negotiated by U.S. officials because INA had sales in the third sector. This language was agreed to by CIGNA and by AIG, the company expressing concern over the possible sale at the time. USTR officials believed that this language would provide flexibility for CIGNA to pursue a profitable business strategy while still protecting the U.S. presence in the third sector from increased competition from large Japanese insurers. The U.S. and Japanese negotiators never defined the term “radical change” in the 1994 agreement. By late 1995, the U.S. insurance industry was expressing strong concerns over implementation of Japan's Insurance Business Law. Revisions to this law, the first major changes in 50 years, would for the first time allow life insurance companies to enter the non-life insurance business through a non-life subsidiary and, similarly, non-life insurance companies to enter the life insurance business through a life subsidiary. Although the 1994 agreement restricted the entry of Japanese companies into the third sector, U.S. officials were concerned that Japan would allow these subsidiaries to move rapidly into the third sector. As a result of these concerns, bilateral negotiations on insurance began and would continue for a year, until December 1996. In August 1996, Yasuda formally announced its intention to purchase a majority interest in INA from CIGNA.
According to Yasuda, this strengthened relationship was intended to improve INA Life's distribution network and serve as Yasuda's means for achieving entry into the life insurance market (through the acquisition, rather than the establishment, of a life insurance subsidiary). Press reports noted that this sale could provide Yasuda entry into Japan's third sector life insurance market. CIGNA came to USTR in May 1996 to discuss its intention to sell a majority interest in INA to Yasuda and projected sales in the third sector for the resulting company. CIGNA requested that this transaction and the business of the Yasuda-owned company not be compromised during the ongoing negotiations or through any resulting new bilateral agreement. In an effort to maintain a united industry position, USTR asked CIGNA not to press the issue at that point and noted that the situation should be handled close to the completion of the negotiations. According to a former USTR official, CIGNA did not contact USTR again on this issue during the negotiations, even though USTR was in frequent contact with the company regarding the content of the agreement and had shared drafts of the agreement with CIGNA (as well as other U.S. companies). This negotiator noted that USTR assumed that CIGNA had worked out an arrangement with the Japanese Ministry of Finance (MOF) on its own. Therefore, the U.S. government did not include any text to address CIGNA's specific interests in the agreement. This former USTR official noted that as negotiations were concluding, USTR was focused on primary sector deregulation and other (third sector) commitments in the draft agreement. A USTR official stated that the Japanese government never raised the issue of the sale of INA to Yasuda during the negotiations. 
CIGNA's failure to pursue the issue with USTR as negotiations neared completion, as well as USTR's failure to address the Yasuda/INA situation during the negotiations, were oversights by both parties, in the view of former negotiators. Negotiations on a new insurance agreement were concluded on December 15, 1996, though the agreement was not signed until December 24, 1996. CIGNA argued that “[i]f the Japanese were to interpret INA Life as a ‘life subsidiary of a non-life insurance company' when Yasuda acquired a majority interest, then it would prohibit INA Life from selling medical or cancer insurance until the year 2001. This would have a severe adverse impact on INA Life given its current product and marketing mix and its long-term strategic direction.” At that point, CIGNA proposed that technical language be inserted into the 1996 agreement that would exclude INA, even if majority Japanese owned, from coming under the definition of a “life subsidiary of a non-life insurance company.” CIGNA compared this approach to the exemption requested and received by UNUM (another U.S. insurance company operating in Japan) with respect to group long-term disability insurance and income indemnity insurance in the 1996 agreement. USTR also received letters from Members of Congress expressing support for the exemption for INA from the life third sector restraints of the 1996 agreement. USTR took action to address CIGNA's concerns, given that the majority sale had been planned prior to the 1994 agreement and the agency needed to maintain unified U.S. industry support for the as yet unsigned 1996 agreement. USTR officials were reluctant to go back to the Japanese government, which was being criticized in the Japanese press as a victim of U.S. pressure in agreeing to the terms of the 1996 agreement, to ask for additional commitments.
Further, one of these officials stated that USTR did not want to reopen the agreement out of concern that the Japanese government would then also want to reopen other issues, thus possibly leading to the unraveling of the agreement. USTR negotiators believed that a separate document was necessary. USTR immediately contacted the Japanese Ministry of Foreign Affairs (MOFA) and initiated new discussions. USTR negotiated with the Japanese government from December 18 to 21, 1996. USTR requested a “grandfather” clause to allow the sale of INA to go through but also proposed restricting INA's activities in the third sector, once the company was owned by Yasuda, to avoid “radical change.” USTR was the only U.S. agency involved in these discussions. MOFA was the lead Japanese agency and consulted Ministry of Finance officials as necessary. Negotiations over the “minute” proved difficult. The Japanese government was reluctant to make any accommodation for the United States beyond those embodied in the then-pending 1996 agreement. Moreover, there was a concern that a specific commitment regarding the CIGNA-Yasuda transaction could be viewed as singling out one large Japanese insurer for special, favorable treatment in the third sector. Under these circumstances, the Japanese government sought to keep any understanding reached regarding the transaction and subsequent third sector activities by a Yasuda-controlled INA as informal as possible. For their part, USTR negotiators reported that they would have preferred, and attempted to obtain, a more formal document than the “minute,” but that their paramount concern was the substance, not the form, of the understanding. At the same time, USTR negotiators understood the sensitivity of the matter for the Japanese government. 
No explicit agreement was reached between the two sides during the negotiations regarding precisely how, or to what extent, the Japanese government would restrict INA's activities in the third sector following consummation of the sale. In particular, the two sides did not agree on the question of whether the Japanese government had legal authority through the use of its licensing powers to restrict INA's post-transaction activities in the third sector. However, USTR negotiators felt that the references in the “minute” recommitting Japan to avoiding “radical change” in the third sector and to making necessary modifications to INA's post-transaction licenses meant that Japan had committed to keeping INA's third sector business activities very limited. Further, based on past experience, USTR officials felt that Japan could use both formal and informal means to limit INA's third sector activities. While no other U.S. government agencies were involved in negotiating the document, a copy of the draft “minute” was faxed to the U.S. embassy in Tokyo, and the National Economic Council (NEC) was reportedly aware of its existence. Two Members of Congress who had requested that USTR facilitate the transaction also received copies of the documented exception, according to one of the negotiators. This former USTR official does not know if key congressional committees ever received the document, which, if they did not, he described as an oversight on the part of USTR. The exact text of the document is reproduced in figure 8.
Current and former USTR officials stated that the final document, the so-called “minute,” was intended to ensure that (1) the 1996 U.S.-Japan insurance agreement would not prevent CIGNA from carrying out its preexisting business plan to sell a majority interest in INA to Yasuda, (2) INA would continue to have only a very limited presence in the third sector if the transaction went forward, and (3) other large Japanese non-life insurers would be prevented from similarly entering the third sector. There was no attempt during the “minute” negotiations to specify what might constitute a limited presence or radical change. Further, USTR officials have noted that there have never been discussions between the U.S. and Japanese governments to define limited presence or radical change regarding INA's post-sale, third sector activities. U.S. and Japanese officials have disagreed over how the “minute” could be implemented. Based on experience with the Japanese government and the knowledge that Japan could use formal or informal means to affect company behavior, USTR officials felt confident that Japan could exert a level of control over INA's third sector activities by modifying INA's licenses or by other means, once it is majority owned by Yasuda. From the time of the “minute” negotiations in December 1996 until July 1998, Japanese officials emphasized that they had no legal authority to impose restrictions on acquired subsidiaries lawfully operating in the third sector. However, after July 1998, Japanese officials said that as a result of legislative changes that went into effect at that time (discussed later), a Yasuda-owned INA would not be allowed to operate in the third sector at all. USTR does not accept this position and has stated that it expects Japan to abide by the terms of the “minute.” In addition, the enforceability of the “minute” is perceived differently by the two governments.
USTR officials stated that the “minute” is a fully negotiated and enforceable document and characterized it as a mutual understanding between governments. They have also noted that implementation of the “minute” is integral to Japan's compliance with the insurance agreements. In contrast, a MOFA official told us that the “minute” is in “no way” part of the 1996 agreement. Instead, this official characterized the document as a “non-paper memorandum for negotiators.” One MOFA official told a U.S. embassy representative that the “minute” does not have the same status as the bilateral agreement and that Japan does not want to be held by it. According to a former negotiator, USTR was in frequent contact with a senior CIGNA official during negotiation of the “minute.” This official was shown drafts of the document in order to verify factual information included in the “minute.” (See fig. 9 for a time line of events from 1996 to 1997 regarding the Yasuda-INA deal.) A former USTR negotiator stated that CIGNA knew what USTR was trying to accomplish in negotiating the “minute” (including allowing the sale but restricting Yasuda's post-acquisition third sector activities in order to avoid “radical change”). According to CIGNA's outside counsel, on December 24, 1996, the day the insurance agreement was signed, CIGNA was informed by USTR that Japan had agreed to language that stated that INA would be permitted to maintain its licenses and product approvals after the purchase of majority ownership by Yasuda. Further, the deal was viewed as unique by both governments because it predated the 1994 insurance agreement. CIGNA outside counsel was shown a draft version of the “minute” in January 1997. This version of the “minute,” like the final version, mentioned “necessary license modifications” but did not specifically address the level of third sector activity permitted by the Yasuda-owned company.
Nevertheless, according to CIGNA's legal counsel, CIGNA was satisfied that this arrangement would meet its needs. Moreover, CIGNA was not concerned about the level of formality or the enforceability of the document. Around December 21, 1996, USTR officials contacted AFLAC and AIG, INA's primary U.S. competitors in the life third sector, regarding the situation with CIGNA, INA, and Yasuda. U.S. government and industry officials characterized these discussions very differently. According to USTR notes taken during the discussion with AFLAC, a USTR negotiator told a company official that USTR needed to ensure that “the deal can go forward, and it is not a precedent for other deals.” USTR informed AIG that a problem had arisen with Yasuda/INA and USTR had to find a way to deal with it. USTR needed to make an adjustment as this issue threatened the recently concluded agreement of supplementary measures. Former and current USTR officials stated that they informed the two companies that an “accommodation” was necessary for INA and Yasuda, though no mention of the existence of a document was made. USTR officials did not explicitly convey their intention to AIG or AFLAC that the “accommodation” would allow for limited third sector sales by Yasuda once it acquired a majority interest in INA. However, according to these officials, AFLAC and AIG understood that the accommodation would allow for the majority sale and limited third sector activities for the subsequent company. USTR officials stated that neither company raised objections during their communications with USTR (though AIG expressed some unhappiness). In contrast, AFLAC stated that “based on prior discussions with USTR and the Japanese government, and on restrictions in the 1996 agreement, AFLAC did not oppose CIGNA's sale of a controlling interest in INA to Yasuda. 
But USTR did not discuss, nor did AFLAC agree to, a special carve-out for INA's continued or expanded operations in the third sector after a takeover by Yasuda.” An AIG official also noted that AIG did not understand at that point that an accommodation had been reached with Japan that would allow for some level of third sector sales once INA was majority owned by Yasuda. The “minute” document itself was not shown to companies other than CIGNA until October 1997, when AIG and AFLAC were raising concerns with USTR over Yasuda's and INA's increasing third sector activities (discussed later). According to USTR officials, no company representatives ever asked USTR about the existence of this document until that time. These officials noted that, when AIG and AFLAC inquired at a meeting in late October as to whether there was an agreement with Japan concerning the sale of INA to Yasuda, they did not respond in the affirmative or negative, but instead, after the meeting, conferred with a senior USTR official. A few days later, USTR called both companies to the agency and, at separate meetings, presented them both with copies of the “minute” document. AFLAC's and USTR's portrayals of how the existence of the “minute” document was disclosed differ. According to an AFLAC official, USTR repeatedly denied the existence of this written agreement before October 1997 when questioned by the company. However, according to a USTR official, agency officials never denied the existence of the “minute.” In addition, a U.S. embassy official also reported that he was asked about the “minute” twice before it was publicly acknowledged by USTR. The embassy did not provide any information to the companies and later asked USTR for guidance on how to respond to such inquiries. According to this official, he was told to refer companies to USTR on this issue. 
Two former USTR officials who were involved in negotiating the “minute” have since stated that, in their judgment, the document should have immediately been fully disclosed to industry. While AIG and AFLAC did not raise objections in December 1996 when USTR informed them of the “accommodation” for the sale of INA to Yasuda, they reacted negatively upon learning of the existence of the “minute.” An AFLAC official has noted that the 1996 agreement states that no large Japanese insurer will sell stand-alone cancer or stand-alone medical insurance prior to 2.5 years after primary sector deregulation; in his view, this prohibition should include INA once it is majority owned by Yasuda, a large Japanese insurer. Further, an AIG official has written that “regrettably, USTR saw fit in late 1996 to allow an exception, which has the effect of allowing a U.S. company to divest itself in Japan, thus reducing the overall U.S. market penetration while jeopardizing the integrity of the entire agreement.” In late September 1997, as a result of urgent concerns on the part of AIG and AFLAC, an official from the U.S. embassy in Tokyo met with Japanese government officials from the Ministries of Finance and Foreign Affairs, at USTR's instruction, to discuss a recent expansion of third sector activities by INA and Yasuda. This U.S. embassy official emphasized that while the two governments had reached an understanding (the “minute”) to allow Yasuda to move forward with its plans to acquire a controlling interest in INA, the understanding also contained a commitment to constrain the growth of INA's third sector business so as to avoid “radical change.” The U.S. embassy representative informed Japanese officials that INA's third sector licenses must be modified in order to achieve this commitment. The U.S. 
government had concerns that Yasuda and INA were acting in a manner inconsistent with the agreements' restrictions on avoiding “radical change” by greatly expanding the marketing of INA products by Yasuda sales agents before the majority acquisition. This U.S. embassy official expressed concerns to Japanese officials that Yasuda had more than doubled the number of agents selling INA products in a 1-year period and, as a result, INA was rapidly increasing its third sector sales. This change was characterized to Japan as “historically unprecedented” and “resulting in a serious loss of business for U.S. firms in the third sector.” He emphasized the U.S. belief that the bilateral insurance agreement compelled MOF to limit the growth of INA's third sector business and agents to historical trends and roll back the past year's dramatic increase in INA's force of Yasuda agents. Japanese officials responded that INA's activities had nothing to do with the agreements. They stated that the agreements' provisions apply to Japanese, not U.S., subsidiaries and INA is majority-owned by a U.S. company. They noted that Yasuda owned only 10 percent of INA and any market developments reflected the independent operations of INA. These officials also emphasized that there was no basis under Japanese law to restrict the license of a company operating properly under law and regulation, and, further, an agent rollback would be impossible. After U.S. embassy meetings with the Japanese government, CIGNA's outside counsel expressed concern to USTR that the U.S. government's recent communication with MOF had threatened the majority sale. CIGNA believed, as a result of its discussions with MOF, that the U.S. government would only support MOF approval of the sale if Yasuda and INA were to be restricted from selling any third sector products and reduce the number of Yasuda agents at INA to the number at the end of the previous fiscal year. 
CIGNA requested that USTR rectify the situation by sending a letter to MOF supporting the sale without conditions or modification of licenses. USTR met with CIGNA's outside counsel and explained that USTR did not oppose the transaction but had concerns about Yasuda's third sector activities. Again, USTR noted that, while it was still looking into the facts, Yasuda's current activities might violate the terms of the insurance agreement. USTR pointed out to CIGNA that while what might constitute “radical change” was not precisely defined, the threshold was not very high, particularly when activities by a large Japanese insurance company might result in a direct loss of business for U.S. firms. USTR eventually concluded that sending a letter to MOF would be counterproductive based on concerns that the letter might be misinterpreted. During this period, while USTR was communicating frequently with CIGNA regarding the majority sale of INA to Yasuda and subsequent third sector activities, USTR received congressional letters of support for the transaction, as well as letters claiming that Yasuda was violating the agreements and should not be allowed to sell third sector products after the sale. In October of 1997, a senior USTR official traveled to Japan for 2 days of meetings with Japanese officials and certain U.S. companies to discuss the activities of INA and Yasuda. This official emphasized to Japanese officials that (1) the transaction should be allowed to go forward, (2) INA's licenses should be modified as necessary, (3) this is the only exception to the agreement, and (4) Yasuda's actions both before and after the acquisition should not be permitted to result in radical change in the third sector. He noted that there was evidence suggesting that Yasuda was controlling INA and might be causing radical change. Japanese officials again responded that Yasuda's and INA's activities before the acquisition cannot constitute an agreement violation since INA is a U.S. 
company. Furthermore, these officials said that Japan could not impose legally enforceable restrictions (such as license modifications) upon the activities of INA just because it is acquired by Yasuda. However, Japanese officials also suggested that, recognizing the agreement's spirit, Yasuda was likely to act on its own initiative to keep INA's activities in the third sector within a certain limit. CIGNA correspondence with USTR shows that the company was unhappy with USTR's visit to Japan, believing that a link had been made with Japanese officials that the transaction should not be approved unless Yasuda's current activities were restricted. CIGNA also expressed concern that USTR was discussing its private business decisions with its competitors. The U.S. and Japanese governments had additional discussions in late 1997 regarding the majority sale and subsequent third sector activities of INA once it was owned by Yasuda. No agreement was ever reached as to how or whether the third sector sales of INA could be restricted. During the WTO financial services negotiations, the U.S. government requested that Japan include the 1996 bilateral insurance agreement in its WTO commitments. The U.S. government held this position (1) in order to seek third country support for full implementation of the agreement, (2) to have access to WTO dispute settlement procedures, and (3) to respond to U.S. industry support for this initiative. In December 1997, the Japanese government agreed to include most of the provisions in the 1996 agreement in the WTO financial services agreement, including third sector provisions. (See fig. 10 for a time line of events from 1997 to the present regarding the Yasuda-INA deal.) Japan's legislation that implements its WTO financial services commitments authorizes MOF to prohibit entry into the third sector by acquired, as well as newly established, subsidiaries. 
A MOFA official confirmed to us that Japan's implementing legislation, with its reference to acquired as well as newly established subsidiaries, implied that if Yasuda were to acquire INA, INA would be considered a life insurance subsidiary of a non-life insurance company subject to the third sector sales prohibition in the 1996 agreement. Therefore, while Japanese officials had said, before implementation of Japan's WTO financial services commitments, that they were unable to use legal means to regulate third sector activities even following Yasuda's purchase of the company, Japan has now implemented its WTO insurance commitments and has expressed a view that INA's sales in the third sector, post transaction, would be completely prohibited. In February 1998, MOF issued a notification to insurers explaining that Japan's commitments under the WTO financial services agreement prohibit sales of third sector products by acquired life subsidiaries of non-life insurance providers. One day later, Yasuda announced that it would delay its majority purchase of INA until agreement restrictions are lifted on sales of third sector products by life subsidiaries of non-life insurance companies. In later communication with USTR, CIGNA did not preclude the possibility that the sale still might go through before third sector restrictions are lifted. Therefore, in June 1998, the month before Japan's WTO insurance commitments were implemented on July 1, USTR engaged in discussions with Japanese officials regarding the consistency of Japan's implementing legislation with the intent of the “minute.” Specifically, USTR sought reassurance that INA could continue third sector sales if acquired by Yasuda. USTR officials claim that Japanese officials responded in a noncommittal fashion and never provided an answer. 
USTR officials have emphasized to Japan that it has an obligation to uphold the “minute,” which allows for the majority sale of INA to Yasuda, and to limit third sector activity for the new entity, regardless of Japan's WTO insurance commitments. In early 1998, USTR began a review of the ongoing activities of Yasuda and INA to determine whether they were consistent with the third sector restrictions in the 1994 and 1996 agreements. Two companies, AFLAC and AIG, had contended that Yasuda, through its partnership with INA, had entered the third sector and caused radical change to that sector in contravention of the agreements. USTR provided CIGNA, AFLAC, and AIG with an opportunity to present their views in writing. AFLAC and AIG argued that Yasuda had effectively entered the third sector through receipt of financial benefits it had obtained in connection with its business relationship with INA. They also argued that because of its relationship with Yasuda, INA was a de facto Japanese company and that its third sector activities violated the agreements' restriction on these activities by Japanese companies. Finally, the U.S. companies argued that changes to INA's corporate structure and business operations constituted radical change and should therefore not have been permitted. During this review, the U.S. Trade Representative expressed a reluctance to choose sides among U.S. companies and a hope that the companies could cooperate to find a mutually agreeable business solution. However, such a solution never materialized. Therefore, USTR examined each of the allegations and, as summarized in a classified memorandum, concluded that the activities of Yasuda and INA did not constitute a violation of the agreements. After conducting an analysis of INA's operations, USTR's fundamental position was that INA is a U.S. company and, therefore, its activities do not fall within the terms of the 1994 and 1996 agreements. 
This decision was agreed upon during interagency meetings that reached the subcabinet (NEC Deputies) level and included officials from the Departments of State, Commerce, the Treasury, and Justice; as well as the NEC and USTR. On July 1, 1998, USTR communicated the consensus decision to CIGNA, AIG, and AFLAC. In response to a request by AFLAC and a few Members of Congress, an additional interagency review was subsequently conducted in late July and early August 1998. This final review reached the level of the Cabinet (NEC Principals), whose review had participation from the Council of Economic Advisers; the Office of Management and Budget; the National Security Council; NEC; the Departments of Commerce, Justice, Labor, State, and the Treasury; and USTR. During this second review, all three companies presented their arguments orally to the interagency group. The original conclusion (that information provided to date did not support a determination that the activities of INA and Yasuda in the third sector had violated the 1996 agreement) was reaffirmed. During the second interagency review, which reached a consensus decision that there was no violation of the “radical change” provisions of the 1996 agreement, the Department of Commerce recommended that additional measures be taken to monitor the situation. A Commerce official proposed that an interagency team conduct further work in Tokyo to verify the facts presented to the U.S. government. According to USTR, this suggestion was not adopted based on the general interagency view that no further information was necessary to resolve the issue. In discussing how USTR's views evolved from raising serious concerns regarding a possible violation of the agreements with Japan in late 1997 to a final determination that no violation of third sector provisions had occurred, USTR officials noted that AIG and AFLAC expressed concerns over Yasuda and INA activities in an extremely urgent manner in 1997. 
As the companies emphasized that they were losing business as a result of these activities, USTR felt compelled to address the issue with the Japanese government immediately. However, over the next several months, as USTR was able to conduct its own analysis of the situation, it ultimately determined that no violation had occurred. Both AFLAC and CIGNA raised concerns about the process used by USTR to conduct the formal review of Yasuda's and INA's activities in the third sector. AFLAC expressed frustration over USTR's requests for updated information on the situation after the agency did not act on information provided by AFLAC months earlier. CIGNA felt that it never received a complete explanation from USTR as to what accusations had been made against the company, but was compelled to respond to allegations made against it nonetheless in an attempt to defend itself. AIG and AFLAC disagreed with the interagency decision. However, officials from one company have also noted that Yasuda's activities in the third sector have slowed. Specifically, these officials have stated that the rapid growth in the Yasuda agent force selling INA products has ended and their company's existing client base is no longer being actively threatened. INA's principal U.S. third sector competitor believes that the government of Japan has been successful in restraining Yasuda's activities through the use of “soft controls,” such as requiring a slowdown in the projected registration of Yasuda agents with INA in the company's business plans. This company has also noted that the impact of Yasuda's and INA's activities on its business has been small to date. Neither AFLAC nor AIG is currently pressing this issue with the U.S. government. 
We have observations in the following three areas regarding the “minute”: (1) the difficulties USTR faced in creating the “minute,” (2) the consequences of USTR's lack of complete communication with industry regarding the limited exception, and (3) the problems USTR encountered due to the use of undefined terms in the text of the “minute.” Because the issue of the majority sale of INA to Yasuda and the resulting company's allowable third life sector activities were not addressed during the course of the 1996 insurance agreement negotiations, USTR was put in a difficult position. After the agreement negotiations were concluded, USTR felt compelled to preserve CIGNA's support for the agreement by accommodating the company's business plans that predated negotiation of both insurance agreements but that would clearly violate the 1996 agreement's terms if not addressed by the two governments. This situation was made more delicate due to the fact that competing U.S. companies had opposing and strong views as to whether or how Yasuda/INA should be allowed to sell third sector life products. In deciding to accommodate CIGNA's sale of INA to Yasuda and subsequent third sector life sales by the company, USTR took a position that appeared to benefit one U.S. firm at the expense of others. USTR faced the difficult challenge of determining the U.S. interest in a case where U.S. companies' interests were opposed. Moreover, given the sensitive issues the “minute” raised in Japan, USTR officials believed that broad dissemination of the document might lead to its disavowal and possibly to the unraveling of the 1996 agreement itself. USTR therefore sought to limit distribution of the “minute” and thus did not provide copies to the two other U.S. insurance companies that had an interest in developments related to Yasuda/INA. 
Further, in late December 1996, USTR did not explicitly describe to AIG and AFLAC the extent to which a Yasuda-owned INA would be allowed access to the third sector life insurance business. This “grandfather” document added to the 1996 agreement, combined with USTR's incomplete description of the exception and the failure of USTR to provide the actual document to industry, created frustration with USTR on the part of U.S. insurers that lasted for months. Further, the “minute” used undefined terms that made its meaning and implementation uncertain. While USTR officials maintained that a Yasuda-owned INA would only be allowed restricted access to the third sector, it is unclear what language or provision in the “minute” requires that the company maintain only a limited presence. As a result of this undefined language in the “minute,” the U.S. and Japanese governments had numerous consultations during 1997 regarding the meaning of the document's terms. U.S. and Japanese government officials have expressed very different understandings of the “minute,” with Japan's actions suggesting an unwillingness, even an inability under Japanese law, to implement the document as intended by USTR. After several months of discussions, the two governments were never able to reach an agreement as to how Yasuda might be restricted in the third life sector, demonstrating the questionable value of the “minute” in creating a limited exception to the 1996 agreement to accommodate CIGNA. The Chairman of the House Subcommittee on Trade, Committee on Ways and Means, asked us to examine (1) the views of U.S. insurance companies operating in Japan regarding the agreements' implementation and impact on their ability to compete in the Japanese market; (2) the roles and efforts of the Office of the U.S. Trade Representative and the Departments of Commerce, State, and the Treasury in monitoring and enforcing the agreements, and U.S. 
government views on whether Japan has met its commitments under the agreements; and (3) U.S. insurance industry views on U.S. government monitoring and enforcement efforts. We also collected information addressing U.S. government actions related to one U.S. insurer and its Japanese partner. To obtain the views of U.S. insurance companies regarding the agreements' implementation and impact on their ability to compete in Japan, we distributed a questionnaire to all 13 U.S. insurers and three brokers in Japan that are either wholly or majority U.S. owned. Surveys for life and non-life insurers differed somewhat depending on whether a particular commitment applied to them, and the survey included far fewer questions for brokers as several of the commitments in the agreements do not directly pertain to them. The survey was distributed in January 1999, and we obtained a 100-percent response rate to our questionnaire. We then traveled to Japan and met with representatives from all the insurers and brokers in March to obtain detailed explanations of and clarifications to their questionnaire responses. In some cases, responses were revised during discussions at our meetings. The questionnaire asked U.S. insurers and brokers for their views on the implementation and the impact of those provisions of the agreements for which the companies would have first-hand experience. All of the questions were referenced back to their related provisions in the agreements. For the questions related to the 1994 agreement, we developed, where possible, similar or identical questions to those we used in a 1996 survey on the implementation and impact of the 1994 agreement. This allowed us in some cases to compare how company responses had changed over time. Eleven of the 13 companies and two of the three brokers included in our current survey also responded to our 1996 survey. In analyzing questionnaire results, we examined response frequencies. We also computed the percentage of U.S. 
insurance sales in Japan represented by company responses. In requesting company participation in our survey, we pledged that company responses would be reported in aggregate form and that we would not identify specific responses with the individual companies. In certain cases, the reporting of responses in conjunction with the percentage of U.S. insurance premiums in Japan associated with that response limits this confidentiality. In those cases, the firms that could be identified, due to their large size, gave us permission to report the market premium data. We also interviewed and collected information from industry groups and insurance companies in the United States. To identify the roles and efforts of USTR and the Departments of Commerce, State, and the Treasury in monitoring and enforcing the insurance agreements, as well as U.S. government views on implementation, we conducted interviews with officials from each agency, including the U.S. embassy in Tokyo. We reviewed available information from USTR and the U.S. embassy in Tokyo to establish the nature and frequency of interagency interaction. We also assessed extensive documentation from USTR and the U.S. embassy in Tokyo to review USTR's determination regarding the status of agreement implementation and discussed USTR's determination with U.S. companies and Japanese government agencies and industry groups. Information on Japanese law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We also used the 1999 questionnaire to obtain the views of U.S. insurance companies regarding U.S. government monitoring and enforcement of the agreements. All 13 insurance companies and three brokers were asked questions regarding overall U.S. government monitoring and enforcement efforts, as well as questions related to their specific experiences with various government agencies. 
As with implementation and impact questionnaire responses, we conducted follow-up interviews in Japan with U.S. participants in the Japanese market. We also held interviews with industry groups and insurance companies in the United States. We examined extensive documentation regarding monitoring and enforcement actions by USTR and the U.S. embassy in Tokyo that have proven controversial with some U.S. insurers operating in Japan. We performed our review from July 1998 to June 1999 in accordance with generally accepted government auditing standards. In addition to those named above, Emil Friberg, José Peña, Kay Halpern, Kim Frankena, Richard Burkard, Kathleen Joyce, and Rona H. Mendelsohn made key contributions to this report. U.S.-Japan Trade: U.S. Company Views on the Implementation of the 1994 Insurance Agreement (GAO/NSIAD/GGD-97-64BR, Dec. 20, 1996). U.S.-Japan Trade: The Japanese Insurance Market (GAO/NSIAD-99-108BR, Mar. 15, 1999).
Pursuant to a congressional request, GAO provided information on the implementation and monitoring of the U.S.-Japan insurance agreements, focusing on: (1) the views of U.S. insurance companies operating in Japan regarding the agreements' implementation and impact on their ability to compete in the Japanese market; (2) the roles and efforts of the Office of the U.S. Trade Representative (USTR) and the Departments of Commerce, State, and the Treasury in monitoring and enforcing the agreements, and U.S. government views on whether Japan has met its commitments under the agreements; and (3) U.S. insurance industry views on U.S. government monitoring and enforcement efforts. GAO noted that: (1) GAO's 1999 survey of the 13 U.S. insurance companies and 3 brokers in Japan revealed that all but 2 think that Japan has made moderate or better progress overall in implementing the 1994 and 1996 insurance agreements; (2) GAO's analysis of survey results shows that Japan has met most of its transparency (openness), procedural protection, and deregulation commitments; (3) overall, most U.S. companies reported that the agreements have had a positive effect on their ability to compete in Japan; (4) nevertheless, almost half the companies expressed concerns over Japan's implementation of key commitments such as expediting approval of insurance products and rates and limiting the activities of large Japanese companies in the specialized third sector; (5) USTR is the principal agency responsible for monitoring and enforcing the insurance agreements, with assistance primarily from the U.S. embassy in Tokyo; (6) USTR also receives assistance from Commerce and State, with a lesser level of assistance by the Departments of the Treasury and Justice; (7) USTR and U.S. embassy monitoring efforts include obtaining information on the agreements' implementation from industry groups and individual U.S. 
insurance companies, as well as consulting with the Japanese government; (8) in conducting their monitoring and enforcement work, U.S. government officials have noted Japanese progress in implementing the agreements; (9) however, they have also identified a few issues, which are similar to those cited by some U.S. companies, where they believe Japan has not fully met its commitments; (10) Japan, on the other hand, believes that it has fully implemented both agreements; (11) more U.S. insurance companies expressed favorable views of U.S. government actions to monitor the insurance agreements than reported favorable views of enforcement efforts; (12) about half (7 of 13) of all U.S. insurers and 2 of the 3 brokers GAO surveyed reported that U.S. government efforts to monitor agreements have been effective; (13) with regard to enforcement, about one-third of the companies and no brokers reported that U.S. government efforts have been effective; (14) around one-third of the companies reported that U.S. government monitoring and enforcement efforts have been as effective as ineffective; and (15) three major U.S. insurers expressed concerns over U.S. government monitoring and enforcement efforts concerning the protection of various U.S. company interests in the third sector.
In 1978, the Congress deregulated the airline industry, phasing out the federal government’s control over domestic fares and routes served and allowing market forces to determine the price, quantity, and quality of service. Most legacy carriers, free to determine their own routes, developed “hub-and-spoke” networks. These carriers provide nonstop service to many spoke cities from their hubs. The airports in the small spoke communities include the smallest airports in the nation’s commercial air system. Depending on the size of those markets (i.e., the number of passengers flying nonstop between the hub and the spoke community), the legacy airlines may operate their own large jets or use regional affiliate carriers to provide service, usually with regional jet or turboprop aircraft. (See fig. 1 for an example of a turboprop aircraft.) However, low-cost carriers, such as Southwest Airlines and JetBlue Airways, use a different model, flying point-to-point generally to and from secondary airports in or near major metropolitan areas, such as Ontario International near Los Angeles and Chicago Midway. The nation’s commercial airports are categorized into four main groups based on the annual number of passenger enplanements—large hubs, medium hubs, small hubs, and nonhubs. The 30 large hubs and 37 medium hub airports together enplaned the vast majority—89 percent—of the almost 703 million U.S. passengers in 2004, the most recent data available. In contrast, the 69 small hubs enplaned about 8 percent, and the 374 nonhub airports enplaned only 3 percent of U.S. passengers. Air service to nonhub airports has generally declined in recent years, as measured by the number of departure flights. As shown in figure 2, nonhubs have had an overall decrease in departures since July 2000. While all airports showed a decrease in service from July 2001 to July 2003, scheduled departures at small, medium, and large hub airports have increased since 2003. 
By July 2005, scheduled departures at small, medium, and large hub airports largely rebounded, with departures from large and small hubs exceeding the July 2000 number. However, the decline of service at nonhub airports continued, with 17 percent fewer departure flights serving these airports in July 2005 compared with July 2000. While small hubs and nonhubs are eligible to apply for Small Community Air Service Development grants, the nonhub airports have been the main beneficiaries of the program. As of fiscal year 2005, only 6 percent of the airports receiving grants have been small hubs. This decline in air service to small communities is particularly prevalent at small community airports that are near larger airports. Passengers sometimes drive or take other modes of transportation to neighboring larger airports to take advantage of more frequent flights and lower fares, a phenomenon called leakage. Appendix II provides more information on the factors that have influenced the reduction of passenger traffic and air service at the nation’s small community airports. We have previously reported on the decline of air service to small communities noting the challenges these communities face in obtaining or retaining commercial passenger air service. These challenges include the lack of demand, inability to operate profitable air service, and competition from neighboring larger hub airports. Also, according to an aviation consultant, these factors, plus network carrier financial difficulties and changes in aircraft usage, have negatively affected nonhubs. Two programs have been established to help address air service to small communities—the Essential Air Service program and the Small Community Air Service Development Pilot Program. The Congress established the Essential Air Service program as part of the Airline Deregulation Act of 1978. 
In general, the program guarantees that communities that received air service prior to deregulation will continue to receive air service. If an air carrier could not continue service to a community without incurring a loss, DOT (and before its sunset, the Civil Aeronautics Board) could then use Essential Air Service program funds to award a subsidy to that carrier or another carrier willing to provide service. These subsidies are intended to cover the difference between a carrier's projected revenues and expenses, and include a 5 percent profit margin. Our prior work on the Essential Air Service program found, in part, that financial incentives may offer the best opportunity for communities to attract new or additional service but that it may be difficult to bring about service that can be sustained after the incentives end. More recently, the Congress authorized the Small Community Air Service Development Pilot Program as part of the Wendell H. Ford Aviation Investment and Reform Act for the 21st Century, P.L. 106-181 (AIR-21), to help small communities enhance their air service. AIR-21 authorized the program for fiscal years 2002 and 2003. The Vision 100-Century of Aviation Reauthorization Act, P.L. 108-176 (Vision 100), reauthorized the program for an additional 5 years, through fiscal year 2008, and eliminated the "pilot" status of the program. While Vision 100 increased the annual authorization amount to $35 million, the Congress has appropriated $20 million for the program each year from 2002 through 2005, for a total of $80 million. No funds were appropriated for the first year of the program, 2001. Under this program, DOT is authorized to award grants to up to 40 communities served by small hub or nonhub airports (as classified in 1997) that have demonstrated air service deficiencies or higher-than-average airfares. The Office of Aviation Analysis in DOT's Office of the Secretary is responsible for administering the program. 
The grants may be made to a single community or to a consortium of communities, although no more than four grants each year may be in the same state. Consortiums are considered one applicant for the purpose of this program. Some relatively large airports qualify for this program. For example, Buffalo Niagara International Airport in Buffalo, NY, and Norfolk International Airport in Norfolk, VA, are eligible for the program, enplaning over 2.2 million and over 1.8 million passengers in 2004, respectively. In contrast, small nonhub airports such as the airports in Kake, AK, with about 2,500 enplanements, or Owensboro, KY, with about 2,800 enplanements, are also eligible. The program is available in the 50 states, the District of Columbia, Puerto Rico, and U.S. territories and possessions. The statute also directs DOT to designate one of the grant recipients each year as an Air Service Development Zone and to work closely with the designated community on ways to attract business to the areas surrounding the airport and to develop land use options for the area. There are no additional funds associated with this designation, and no special benefit or preference is to be given to communities seeking this designation in receiving a grant under the program. Communities apply for this designation through the regular grant application process. DOT has not issued separate regulations for the Small Community Air Service Development Program. Instead, DOT issues an order every year that requests applications and provides guidance for the proper format and content of the applications. The authorizing legislation provides that if funds are used to subsidize air service, the subsidy cannot last more than 3 years. However, the time needed to obtain the service is not included in the subsidy time limit. 
While the legislation does not limit the period for expenditure of funds on non-subsidy projects, DOT's fiscal year 2005 order indicates that, in general, grant funds should be expended within 3 years of the award. As shown in figure 3, DOT's awards have been geographically spread, covering all states except Delaware, Hawaii, Maryland, New Jersey, and Rhode Island. To date, no communities in Delaware or Rhode Island have applied for a grant. Appendix IV contains information on all grants awarded as of September 30, 2005. In the first 4 years of the Small Community Air Service Development Program, DOT awarded a total of 157 grants. In 2002, the first year the program was funded, DOT received 179 grant applications, but this number has been declining and was at a low of 84 applications by 2005. DOT officials believe this decline is natural as the program matures; many airports are currently implementing grants, and others now understand DOT's expectation of local matching funds. DOT evaluates the applications according to legislatively established priority factors and other criteria. DOT first considers five priority factors specified in the laws and then considers numerous other factors in a second tier review of the projects. Certain legislative factors, such as whether a local community can demonstrate support by contributing some local matching funds, or DOT factors, such as whether an airport has received a grant in the past, were major considerations in award decisions. In our survey of airport directors, we found that airports that received grants generally were positive about DOT's process for awarding grants. However, only about one-third of the airports we surveyed that applied for but did not receive a grant expressed satisfaction with the clarity of the selection criteria. DOT's oversight of projects relies largely on reviews of reimbursement documents and required grantee quarterly reports; it does not perform on-site monitoring visits. 
DOT monitoring has been sufficient to identify cases where grant funds have not been utilized and to reallocate those funds to other applicants. As of September 30, 2005, 23 of the grants awarded were completed—20 for 2002, 2 for 2003, and 1 for 2004. About $12.5 million, or 62 percent of the $20 million in total funds for 2002, had been expended by grantees as of September 30, 2005. DOT officials said that the newness of the program in 2002, and the need to negotiate agreements with airlines, help explain why many early grants are still ongoing. To be considered for a Small Community Air Service Development Program grant, airport communities prepare a grant proposal in response to a notice in the Federal Register. The applications should discuss, among other things, the need for additional or improved air service, the available fares at the airport, and how the grant will help communities address these situations. From 2002 through 2005, DOT awarded 157 grants. In the first year of the program, demand was highest, with 179 applications requesting a total of about $142.5 million in federal funding. However, from 2002 through 2005 the program experienced about a 50 percent decline in the number of applications. (See fig. 4 for details on the number of applications, awards, and completed and terminated grants each year.) According to officials at DOT's Office of Aviation Analysis, the downward trend in the number of applications was a natural consequence of the implementation of the program. First, many eligible airport communities have already received a grant and are still implementing their projects—as of September 30, 2005, 127 of the 157 grants were ongoing. Current grantees are not likely to reapply soon because many of the projects that were funded take time to implement, with some taking over 3 years to complete. 
Second, Office of Aviation Analysis officials told us that the airport community has learned that DOT expects a local cash match to be part of the proposal and that communities must honor their committed local contribution for the proposed projects. The officials told us that some applicants did not fully appreciate this expectation during the pilot phase of the program. Finally, according to DOT officials, legislative changes in 2003 prohibited communities or consortiums from receiving more than one grant for the same project and established the timely use of funds as a priority factor for DOT to consider in awarding grants. Our survey found that, among airports that had applied for but never received a grant at the time of the survey, 58 of 81 airport directors, or about 72 percent, said that they would reapply. The remaining 23 airport directors indicated that they would not reapply or were unsure whether they would. These airport directors cited two primary reasons for not reapplying—the cost and effort of applying, or a belief that DOT would not fund their desired project. Finally, some eligible airports have never applied for a grant. To understand why, we contacted airport directors from a group of 20 randomly selected airports that had never applied under the program but were eligible to do so. Although this does not constitute a generalizable sample, it provides some useful information on the reasons why some communities did not apply. Among the more common reasons cited by the directors for not applying were that they did not know about the program or that they felt the cost and effort of applying were too burdensome. Other reasons given by more than one airport director were that the airport already had sufficient air service, that officials thought the airport was not eligible, that their grant application would not be competitive, or that DOT would not fund the kind of project the airport would like to do. 
In our survey of 2002 through 2004 grantees and discussions with officials of the 10 completed projects, we found that the grantees were generally satisfied with the application process and paperwork requirements. Of the 121 grantee airport directors responding, 103 were satisfied or very satisfied with the application process. In addition, in our discussions with the directors of the 10 community airports that had completed grant projects, most were satisfied with the application process, although three expressed concern about the limited amount of time they had to complete their applications after the 2002 announcement. In our survey of grantees, this issue did not appear to be significant, especially in years subsequent to 2002. DOT has made minor modifications in the application process as it has gained experience with the program, such as allowing 90 days instead of 60 days to complete the application, and has continued to allow for flexibility in application format, according to Office of Aviation Analysis officials. The Small Community Air Service Development Program is a discretionary program that allows DOT considerable flexibility in selecting projects for financial assistance, within the basic eligibility criteria. To be eligible, the airport cannot be larger than a small hub airport based on 1997 FAA boarding data and must have insufficient air service or unreasonably high air fares. In addition to the basic eligibility criteria, DOT must give priority to projects according to five factors established in the law. These factors constitute DOT's Office of Aviation Analysis' first tier of project evaluation. 
DOT must give priority consideration to communities that (1) have air fares higher than average for all communities, (2) provide a portion of the cost of the project from local sources other than airport revenues, (3) have or will establish a public-private partnership to facilitate air carrier service to the public, (4) will provide material benefits to a broad segment of the public that has limited access to the national air transportation system, and (5) will use the assistance in a timely manner. Although a local community match from nonairport revenues enhances a community’s chance of receiving a grant, it is not required under the act. However, DOT has funded only two projects that did not contain a local cash match. In addition to the priority factors, DOT has, as part of a second tier evaluation, other “service-related” and “project-related” factors that it takes into consideration in evaluating competing proposals. (See app. III for a list of the factors used in DOT selections.) DOT uses this second tier evaluation to ensure that a project has a strong justification, and the factors themselves have changed and evolved over time, according to DOT officials. For example, as part of this second tier evaluation, DOT looked at 15 air service factors to identify whether a carrier served the airport and reviewed the airport’s existing service frequencies, destinations, aircraft size, and passenger boardings. It also examined air service in the broader geographic area, including the applicant community’s proximity to larger airports and the quality of the roads providing access to those airports. DOT also considered 26 project-related factors, which include such items as whether the area’s demographics will support the project or whether the project actually addressed the community’s air service problem. 
Some project-related factors can make a proposal less likely to be selected, including whether (1) the proposal simply shifts costs from the local to the federal level, (2) the proposed air service is in proximity to other service that would detract from the proposal, and (3) the proposal potentially works at cross purposes with another grant if the airport is located close to a past grant recipient. DOT has developed review procedures that detail how it processes the applications it receives and how it applies this two-tier evaluation of projects. DOT moved to a more structured process when the Congress, in December 2003, changed the status of the program, dropping its pilot designation. For 2004, DOT developed more formal documentation of its assessment of how well projects met the statutory eligibility criteria and priority factors for each grant application. The DOT application evaluation reports we reviewed show how DOT incorporated the priority factors in its 2004 deliberations and how those results then translated into the projects it recommended to the Secretary of Transportation. Generally, applications that meet fewer of the priority considerations are less likely to be selected for grant assistance. However, priority factors are not the sole criteria in the final selection. As shown in table 1, applications that met four or five of the priority factors were not guaranteed selection. Twelve of the 35 applications that met four out of five of the priority considerations did not make the final award list, and one proposal that met all five was not selected. In contrast, 13 applications that met three priority considerations were funded. Projects that meet priority factors may not be funded for a number of reasons. According to a DOT official, a project may meet the priority factors yet not have any realistic possibility of implementation or success. 
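As a purely illustrative sketch (not a tool DOT uses), the first-tier review can be thought of as tallying yes/no determinations against the five statutory priority factors, with no weighting or scoring; even a high tally does not guarantee selection. The factor names and the sample application below are hypothetical.

```python
# Purely illustrative sketch; DOT reviews applications manually and uses
# no scoring system. Factor names and the sample application are hypothetical.
PRIORITY_FACTORS = [
    "above_average_fares",         # fares higher than the average for all communities
    "local_nonairport_match",      # local cost share from sources other than airport revenues
    "public_private_partnership",  # partnership to facilitate air carrier service
    "broad_material_benefit",      # benefits a broad public segment with limited air access
    "timely_use_of_funds",         # will use the assistance in a timely manner
]

def factors_met(application: dict) -> int:
    """Count how many of the five priority factors an application meets.

    Each factor is a yes/no determination with no weighting, so a weakly
    met factor counts the same as a strongly met one.
    """
    return sum(1 for factor in PRIORITY_FACTORS if application.get(factor, False))

# Hypothetical application meeting four of the five factors; per table 1,
# meeting four (or even all five) factors did not guarantee selection.
app = {
    "above_average_fares": True,
    "local_nonairport_match": True,
    "public_private_partnership": True,
    "broad_material_benefit": False,
    "timely_use_of_funds": True,
}
print(factors_met(app))  # prints 4
```

The tally is only the first tier; as described above, DOT then weighs numerous service-related and project-related factors before recommending awards.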
DOT may also choose to award a grant to a community that has never received one before awarding a second grant to another community. DOT's review of the priority factors involves determining a yes or no response for each factor. DOT does not use a weighting, point, or other scoring system to numerically rate the projects. However, DOT officials told us that they are aware that, although in some cases a proposal may technically meet a factor, it may do so very weakly. For example, a project satisfies a priority factor if it will use nonairport revenues as part of its local contribution, no matter how small that nonairport contribution may be. On the other hand, a large nonairport contribution can be viewed as a strong indicator of community support. The final decisions on which projects are selected are thus a result of the consideration of both the priority factors and other factors that affect the quality of the proposal and its perceived chances of success. Once Office of Aviation Analysis staff have reviewed and analyzed the individual projects, the Assistant Secretary for Aviation and International Affairs reviews the staff assessments and finalizes a list of recommended projects for the Secretary of Transportation. According to Office of Aviation Analysis staff, through fiscal year 2004, the Secretary had agreed with the recommended list. In fiscal year 2005, subsequent to the meeting with the Secretary to review recommended awards, DOT made changes in the recommended grants. According to Office of Aviation Analysis staff, this was done to achieve a better balance of participating communities and a better balance in the distribution of funds. Our survey of grantee airports showed that a large majority of the directors at these airports were satisfied with DOT's selection criteria and process for the program, while fewer nongrantee airport directors thought the selection criteria were clear. 
Eighty of 121 grantees responding—or 66 percent—were either satisfied or very satisfied with the clarity of the selection criteria, compared with only 26 of 82 nongrantee airport directors, or 32 percent. A possible explanation for this is that while DOT has flexibility in making awards and considers many criteria in addition to the five priority factors, the ultimate selection decision is discretionary. A few of the fiscal year 2002 airport grantees we visited observed that although they were pleased they were chosen, they were not sure how grantees are selected and what criteria were used. DOT's Office of Aviation Analysis staff are responsible for oversight of the grants and serve as contact points with grantees. For the 2005 program cycle, six staff were assigned part-time to the program, an increase from four part-time staff during the program's first 3 years. DOT uses a document review approach to oversight in which it requires grantees to submit quarterly reports that are used to assess a project's progress and timeliness. The agency also requires that grantees submit a final report on the project, which it uses as the basis for its overall evaluation of the project; DOT holds back 10 percent of the grant funds until it receives the final report. DOT operates the program on a reimbursable basis—grantees must first expend funds from their own resources for project activities and then request reimbursement from DOT for allowable expenses. To ensure that government reimbursements are proper and allowable, DOT reviews expense receipts, invoices, and other evidence of expenditures grantees submit for reimbursement and, if satisfactory, authorizes FAA to make payment. DOT and FAA maintain and monitor reimbursement information in their financial databases. 
Office of Aviation Analysis officials told us that they use this approach because performing on-site visits is impractical given the small number of DOT staff who administer the more than 100 grants currently active in the program. They also noted that there is no provision for administrative expenses in the appropriation; thus, DOT does not have funds available for site visits. DOT monitoring has been sufficient to identify cases where grant recipients have been both successful and unsuccessful in implementing their grants. In those cases where sponsors have difficulty implementing their projects and are unable to utilize their grant awards, the grants are terminated and the funds revert to DOT for reallocation to other applicants. From 2002 through 2004, DOT reallocated about $4.5 million to other projects. The manner in which DOT administers oversight of grantee reimbursements and provides assistance generated a favorable response from grantees. Our survey found that grantees had high levels of satisfaction with the way DOT monitored the grants and provided assistance to grantees. Specifically, 108 of 121, or 89 percent, of grantee airport directors who responded to our survey said that they were satisfied or very satisfied with DOT's assistance. Likewise, 96 of 121, or 79 percent, of responding airport directors were satisfied or very satisfied with DOT's monitoring or oversight activities. In general, grantees did not see the amount of paperwork required by DOT's quarterly reporting mandate as burdensome, with 86 of 121—71 percent—of survey respondents being satisfied or very satisfied with this quarterly reporting requirement. A lower number, 58 of 119—or about half of airport respondents—said they were satisfied or very satisfied with the paperwork DOT required for reimbursement, and only 5 respondents were dissatisfied or very dissatisfied. 
However, one airport consultant noted that for very small airports with very few full-time staff, the reimbursement requirements can be more difficult to complete. The Vision 100-Century of Aviation Reauthorization Act added the timely use of grant assistance as an additional priority consideration for selection to participate in the program as of 2004. The only limitation the authorizing legislation places on the timely expenditure of funds is that air service subsidies cannot last more than 3 years. DOT's 2004 and 2005 grant announcements set an expectation that the funds should be used within 3 years. Although this criterion was not part of the 2002 grant process, it does provide a benchmark for performance, and 2002 grants are at the 3-year point. As of September 30, 2005, 16 of 40 fiscal year 2002 grants were still active, 20 were completed, and 4 had been terminated by DOT. About 62 percent of the $20 million total 2002 program grant allocation had been reimbursed to 2002 grantees. In addition, 58 grants are scheduled to expire in fiscal year 2006. Table 2 shows the amounts DOT reimbursed each year through September 30, 2005. (See app. IV for more detailed information about the status of specific grants.) Office of Aviation Analysis officials told us that the 2002 grants are not an indication of what has happened with the grants awarded in following years. According to the officials, a number of factors contributed to the 2002 projects being delayed. First-year grants were not awarded until late fall of 2002. In addition, the airlines were at that time still recovering from the aftermath of September 11, which made it difficult for communities to attract new service. Many projects included revenue guarantees, which can take some time to finalize. Finally, communities may wait several months after incurring expenditures to request reimbursement, which slows the payout of federal funds. 
The reimbursement data indicate that the 2003 grants also experienced low reimbursements the first year. Only about 11 percent of the 2003 grant funds were reimbursed by the end of calendar year 2004. Finally, it should be noted that when a project includes a revenue guarantee, the slow expenditure of funds does not always indicate a problem. Revenue guarantees are only paid out if the airline fails to meet a revenue target. If it meets the target, no funds are drawn down, which may actually be an indication of project success. For example, the $500,000 grant award to Rhinelander, WI, included almost $492,000 for a revenue guarantee. However, upon project completion, Rhinelander had used about $254,000 for the revenue guarantee. According to the airport director, the new route initiated under the grant generated more revenue for the airline during the grant period than had been expected. Therefore, the airport did not have to reimburse the airline as much as it had anticipated. As part of our survey of grantees, we asked whether their projects were proceeding on schedule and, if not, why they were proceeding more slowly than expected. About 40 percent—42 of 106—of the grantee airport directors reported that their projects were behind schedule, including 11 of 26 airport directors surveyed who were involved in implementing grants awarded in 2002. (See table 3.) Most of these respondents, 23 of the 42, cited difficulties in entering and finalizing agreements with the airlines as the main reason for the delay. Grantees we surveyed also cited other reasons for delays, including issues with airport personnel and among the grant consortium, operational changes at Chicago O'Hare Airport, and the need to coordinate the grant with the Essential Air Service program. On a case-by-case basis, DOT has approved a number of grant amendments, including extending the grant expiration date, for projects that have been slow to be implemented. 
As of July 26, 2005, DOT had amended a total of 47 grants, including 27 of the 2002 grants. For example, Binghamton, NY, wanted to obtain enhanced service to Washington, D.C., via United Express and to Detroit, MI, via Northwest Airlink by providing the airlines with revenue guarantees. According to officials from the Office of Aviation Analysis, there was some delay because of difficulties in negotiating with the airlines. DOT agreed to extend the grant expiration date, allowing Binghamton extra time to work out agreements with United and Northwest. However, during these extended negotiations, the airlines told Binghamton that they would agree to provide the enhanced service only if the community offered subsidies rather than revenue guarantees. As a result, DOT also allowed Binghamton to amend its grant to provide the airlines with subsidies rather than revenue guarantees to better accommodate the airlines' requirements. Another example is the grant agreement amendment DOT provided Lamar, CO. Lamar did not have any commercial service prior to its grant award. The purpose of the grant was to obtain service from Rio Grande Airlines to access scheduled service to Denver International Airport. Lamar was not successful in obtaining service from Rio Grande Airlines and instead obtained service to Denver's Front Range Airport from Lamar Flying Service, a charter carrier. The Office of Aviation Analysis agreed to amend Lamar's grant to allow Lamar Flying Service time to expand its base of operations and establish dependable air transportation. Lamar Flying Service subsequently provided four scheduled trips a week to Denver International Airport and has since been able to upgrade its aircraft. 
The Small Community Air Service Development Program allows communities to set a variety of goals for projects, and individual projects have been directed at adding flights, airlines, and destinations; lowering fares; changing the aircraft serving the community; completing a study for planning and marketing air service; increasing enplanements; and curbing the leakage of passengers to other airports. To achieve these goals, grant sponsors have used a number of strategies, commonly including subsidies and revenue guarantees to the airlines, marketing to the public and to the airlines, hiring personnel and consultants, and establishing travel banks in which a community guarantees to buy a certain number of tickets. In addition, communities have employed a number of other strategies, including buying an aircraft, subsidizing the start-up of an airline, and taking over ground station operations to reduce the costs for an airline. The outcomes of the grants may be affected by broader industry factors that are independent of the grant itself, such as larger strategic decisions on the part of the airlines. Our evaluation of completed projects indicates mixed results, but only 23 of 157 projects were completed as of September 30, 2005. While officials at 19 of the 23 airports reported improvements to air service or fares during the life of the grant, only about half said that the improvements appeared to be self-sustaining. With 127 of the 157 grants still ongoing, it is too soon to determine which specific types of strategies work best or to assess the overall effectiveness of the grant program in improving air service to small communities. According to our survey of the 146 airport directors at airports that received funds from the 122 grants DOT awarded from 2002 through 2004, the most common goals associated with Small Community Air Service Development Program grants were generally related to increasing service and enplanements (see fig. 5). 
Recapturing passenger traffic—that is, stopping leakage to other airports—was also a frequent objective that increased in importance each year of the program. In contrast, conducting a study of the local market or changing the type of aircraft serving the community were relatively infrequent goals. By 2004, relatively few airports cited these goals for their grants. Finally, although addressing high fares is an explicit goal of the program, lowering fares was cited as an objective by 62 of the 146 airport directors over the 3-year span. Grantees engaged in a number of strategies to meet their goals, including various financial incentives, marketing, studies, and other approaches. For example, a number of different financial incentives have been funded under the program, including:

Start-up subsidies—these provide assistance for an airline to begin operations or pay for an aircraft.

Revenue guarantees—the community and air carrier agree on a revenue target, and the community pays the carrier only if revenues from the service do not meet the target.

Travel banks—businesses or individuals deposit or promise future travel funds to a carrier providing new or expanded service. A business entity may handle an account containing the travel funds, and contributing entities then draw down on this account.

Airport station operations—the airport may assume the ground station operations for one or a number of carriers serving the airport. Ground personnel such as baggage handlers and ticket agents become airport employees and may be shared among the airlines. Airlines pay for these services, but their cost can be lower than if provided by the airline itself.
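The revenue guarantee mechanism described above can be expressed as a simple payout rule: the community covers only the shortfall between the agreed revenue target and the carrier's actual revenues, capped at the guaranteed amount. This is a minimal sketch under that assumed structure; actual grant agreements vary, and all dollar figures below are hypothetical.

```python
# Minimal sketch of a revenue guarantee payout, assuming a simple
# shortfall-up-to-cap structure; actual grant agreements vary.
def guarantee_payout(revenue_target: float, actual_revenue: float,
                     guarantee_cap: float) -> float:
    """The community pays the carrier only the shortfall below the agreed
    revenue target, and never more than the guaranteed (capped) amount."""
    shortfall = max(0.0, revenue_target - actual_revenue)
    return min(shortfall, guarantee_cap)

# Hypothetical figures: a $1.0 million target, $900,000 in actual revenue,
# and a $492,000 guarantee cap.
print(guarantee_payout(1_000_000.0, 900_000.0, 492_000.0))   # prints 100000.0
# If the carrier meets or exceeds the target, no grant funds are drawn down.
print(guarantee_payout(1_000_000.0, 1_050_000.0, 492_000.0)) # prints 0.0
```

This structure is consistent with the Rhinelander, WI, experience described earlier, where a strong route drew down only part of the guaranteed amount.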
Marketing support generally took a variety of forms, including mass media such as television, radio, magazine, and newspaper advertising; outdoor advertising such as billboards and banners; direct mail; Internet advertising, including the airport Web site; airport special events such as open houses; frequent flyer promotions; travel agent incentives; and other approaches. Figure 6 shows an example of the use of outdoor advertising in one of the marketing projects funded by the grants. The Small Community Air Service Development Program also has funded studies and various other approaches. For example, in 2002, DOT awarded the Aleutians East Borough in Alaska a $240,000 grant to study the air service market for some rural airports in the lower Alaskan peninsula and the eastern Aleutian Islands. DOT subsequently awarded the Aleutians East Borough an additional $70,000 in 2003 to expand the study. Finally, other approaches have included developing alternative ground services, such as bus service to nearby hubs, and funding personnel, such as airport economic development staff positions or consultants. We reviewed the grant applications and agreements for all 157 grants awarded from 2002 through 2005. Projects commonly include more than one strategy, such as combining a revenue guarantee with marketing for the air service provided under the grant. Over time, a few trends can be seen in the strategies used by communities. First, while marketing activities have always been heavily used as a strategy, by 2004 marketing had become a virtually universal strategy. All 46 grants—the initial 40 DOT awarded plus the 6 additional grants awarded with reallocated prior-year grant funds—included marketing as a component. Second, the number of projects using direct subsidies and travel banks declined by 2004 and remained low in 2005, while the number of projects using revenue guarantees increased after 2002. 
Revenue guarantees have been the most common form of financial assistance in each year of the program. Figure 7 provides a summary of the types of strategies communities have used under the program. Because marketing was such a heavily used strategy, we contacted all 23 airports that had completed their grants by September 30, 2005, to determine what types of marketing they actually did. We found that 22 of the 23 completed grants had included some kind of marketing component to encourage greater use of the airport or the airlines that fly there; the lone exception was a grant that funded only a study. All 22 grantees used newspaper advertising, 21 used radio advertising, and 21 used the Internet—for example, the airport Web site. Television and outdoor advertising were also common strategies: 17 grantees used television and 18 used outdoor advertising. After these strategies, the most common forms of marketing were airport special events (14 projects), magazine ads (12 projects), and direct mail (11 projects). Other types of marketing, such as frequent flyer promotions, travel agent incentives, or trade show booths, were also used in a few cases. Officials from airlines participating in the Small Community Air Service Development Program said revenue guarantees or other forms of financial subsidies were generally their preferred type of strategy, but they also considered other types of strategies proposed by communities under the program. We contacted each of the airlines associated with the 10 projects completed by January 1, 2005, including Continental Airlines, Delta Air Lines, Horizon Airlines, Rio Grande Air, TransStates Airlines, US Airways, and Westward Airways. Although their comments do not constitute a comprehensive analysis of industry views of the grant program, they provide a useful perspective on how participating airlines view the program. 
Several airline officials noted that reducing financial risk has become a key factor for airlines; airport officials and consultants we interviewed also made this observation. Finally, airline officials said they perform their own due diligence, conducting market analyses of the airports, the competitive situation, and route finances, regardless of what a local study says. Airlines face challenges when initiating air service to a community. Start-up costs can be significant and include repositioning equipment, renting space, and hiring and training personnel. Also, even if a viable air travel market exists in a community, entering a new market involves changing passengers' existing travel patterns and loyalties, which may take time. Airline officials noted that given the current financial condition of the industry, airlines cannot afford to take a year of losses to build a customer base in a market, as they had in the past. For this reason, airline officials stated that they often could not enter smaller markets without some kind of revenue guarantee, such as that provided by a Small Community Air Service Development Program grant, or other financial support from the community. Airline officials emphasized that for a project to be of interest to them, the market must be potentially self-sustaining in the longer term, without subsidy or revenue guarantee. The grant will eventually end, and airlines do not wish to start over in another market, with the accompanying costs and risks. Airline officials also emphasized the importance of local funding to provide marketing for the new service; for some airlines, this was a crucial factor in selecting the community. A related observation by airline officials was that the level of local support and commitment to air service was a key factor in their decision to work with a local community. The Small Community Air Service Development Program has this component of local commitment, which some airline officials saw as important. 
In addition, some airline officials said that the overall project (grant and local match) must be sufficiently large to gain their interest. Finally, most airline officials were unfavorably disposed toward travel banks, citing the difficulty in administering them and their poor track record of success. However, one airline official said the airline had been involved with successful travel banks and was open to the prospect of trying that strategy again. All airline officials we talked to had positive views of the Small Community Air Service Development Program. Several officials stated that the program was superior to the Essential Air Service program because it addressed markets that were potentially self-sustaining but were underserved. However, in one case, airline officials said they were concerned about communities using the program to attract low-cost carriers to compete with service they were already providing to the community. Office of Aviation Analysis officials noted that higher-than-average fares are a statutory criterion for priority consideration in the selection of grantees, so introducing a low-cost carrier into a community is an acceptable strategy under the program. We contacted officials of the 23 Small Community Air Service Development Program grant projects that were complete by September 30, 2005, compared project outcomes against the program's goal of improved air service, and found mixed results. In general, airport officials reported that almost all the completed projects had some positive effect on air service during the life of the grant, but in some cases the improvements did not remain in place after the initial grant period or were not self-sustaining. For most completed grants (19 of the 23), airport officials reported some kind of improvement in service: an added carrier, a new destination, more flights, or a change in the type of aircraft. 
Of the 23, 8 reported adding a new carrier, 13 a new destination, and 13 an increase in the number of flights. In addition, 13 reported that some fares at the airport had decreased during the grant period. These service and fare improvements may explain the positive effect on enplanements that airport officials reported: 19 grantees reported that enplanements rose during the course of the grant. However, the improvements seen during the grant did not always continue afterwards. Fourteen of the 23 grantees reported that the improvements were still in place as of October 1, 2005. Further, there is the question of whether a service or fare improvement is self-sustaining and will continue without additional funding. About half the grantees with completed grants (11 of the 23) reported that the improvements they experienced as a result of the program were self-sustaining thus far. It should be noted that these outcomes are preliminary. Thirteen of these grants were completed in 2005, and determining whether a particular project is successful may depend on the timeframe used. For example, Westward Airways was initially able to provide service to Scottsbluff, NE, under the grant, but later went out of business. We also visited 10 airports that had completed grants by January 1, 2005, to gain a more detailed understanding of the outcomes of their projects (app. V contains a discussion of each of these). Of these, five projects (Charleston, WV; Daytona Beach, FL; Hailey, ID; Lynchburg, VA; and Mobile, AL) were generally successful in achieving their goals and had made self-sustaining improvements to air service at the time of our review. Charleston was able to add a new air carrier (Continental) and destination (Houston). However, Continental subsequently reduced the number of daily flights from two to one. Charleston officials said this was a result of a larger strategic allocation of equipment by Continental, and the airline later restored the second flight to Charleston. 
Daytona Beach’s objective was to add service to Newark, NJ, which has remained in place after the grant was completed. After the grant was completed, Continental extended its agreement with the airport. DOT officials said that Continental has also expanded its service at the community to additional destinations. Hailey successfully added air service to Los Angeles via Horizon Airlines (see fig. 8). Although the service continues, it does not operate all year long due to the seasonal nature of demand to this resort community. After the grant expired, a local resort funded the revenue guarantee to Horizon, indicating that the service was initially not self-sustaining. However, Horizon now offers the service without a grant guarantee. In addition, the grant helped convince Horizon to add another flight to a new destination, Oakland, CA. Lynchburg, VA, was able to upgrade service to Atlanta from 30-seat turboprops to 50-seat regional jets through a revenue guarantee. The new jet service resulted in higher load factors on the larger regional jets than on the smaller turboprops due to increased demand. This service also has continued after the completion of the grant. DOT officials said that the community has also succeeded in negotiating, with its carrier, relative fare parity with the carrier’s operations with a nearby airport. Mobile, AL, established an innovative program to assume the ground operations, including baggage handling and staffing ticket counters for US Airways, which was about to abandon service to the airport, according to an airline official. US Airways has maintained its operations in Mobile, and the airport has expanded this program, with American Airlines joining the ground operations service. The four projects that did not result in self-sustaining improvements in air service were Fort Smith, AR; Reading, PA; Scottsbluff, NE; and Taos, NM. Ft. 
Smith provides an example of how larger events in the aviation industry can affect the outcome of a grant. Ft. Smith obtained the air service it sought under the grant; however, American Airlines' strategic decision to reduce the number of flights at its St. Louis hub resulted in Ft. Smith losing the service. In the case of Reading, PA, the grant may have had a negative effect on air service. The grant established a bus service from Reading Airport to the Philadelphia airport, with the goal of demonstrating that air travel demand existed in Reading and service could be added to the airport. However, the bus service provided competition to the existing air carrier at Reading, which subsequently withdrew its service. The bus service ultimately failed (although a private operator has re-established bus service without subsidy), and Reading was left for a time without any scheduled air service. Scottsbluff, NE, was initially successful in resuming intrastate air service among Scottsbluff, North Platte, Lincoln, and Omaha via start-up air carrier Westward Airways. This service did not reach the expected level of enplanements, and Westward Airways, which was able to begin operations with the help of the grant, ceased operations in July 2005. Taos, NM, was not able to achieve sufficient enplanements to make its air service self-sufficient, and Rio Grande Air, the small carrier that provided the service to Taos, went bankrupt. Finally, it is too early to determine whether the $95,000 grant to Somerset, KY, can be considered a success. The purpose of the grant was to conduct a study, which has been successfully completed. However, the ultimate goal of the program and the grant is to improve or attract air service. Because the community received a second grant in 2005, it will be possible in the future to determine the ultimate outcome of the initial and subsequent grants. 
Until the results of Somerset’s efforts to attract service are known, it is too soon to evaluate this grant. Some of the 10 grantees we visited identified additional positive and negative indirect effects not anticipated at the time of the grant. For example, one airport cited increased community involvement as a positive outgrowth of the grant—it helped forge ties between the airport and business community that were not there before. In addition, the study performed with grant funding fostered better community understanding of the local airline market. In a few instances, services begun under the grant stimulated other air service not part of the grant such as attracting other new service or improved service by a competing carrier. Conversely, some airport officials were concerned that grants to nearby competing airports could dilute effects of the grant at their airports. An airport official and an industry consultant also expressed concern that the program was no longer producing innovative ideas. Instead, some airports were copying approaches that had been funded in the past as a way to improve their chances of receiving a grant. Because a large majority of Small Community Air Service Development Project grants are not complete (127 of the 157 grants were ongoing as of September 30, 2005), it is too soon to determine which strategies have performed the best or assess the overall effectiveness of this program to improve air service to small communities. However, in addition to the preliminary results from the projects we studied, comments from DOT officials, airport directors, and airline officials provide some indications of what strategies that had positive results. Airline officials saw projects that provide direct financial benefits to the airline, such as revenue guarantees, as having the greatest chance of success. 
These officials noted that these types of projects allow the airline to test the real market for air service in a community without enduring the typical financial losses that occur when new air service is introduced. Airline officials also said that marketing efforts were important for success. DOT and some airline officials doubted the effectiveness of travel banks, in part because of the difficulty of administering them. Finally, one strategy that airport and airline officials found innovative was for airports to take over the airlines' ground station operations, such as ticketing and baggage handling. Only two airports have used this strategy under the program, so it is too early to tell whether this model will be more widely adopted. Most grantee airport directors we surveyed indicated that their projects were at least partially successful or that it is too early to make an assessment. As shown in table 4, 60 of the 120 airport directors who responded said that their grant was effective or very effective in increasing passenger traffic. About 46 percent (54 of 118) of airport directors said that their grant was effective or very effective in improving service quality. However, in both instances, almost as many airport directors said that they had no basis to judge effectiveness or that the question was not applicable. In addition, 38 of 118 airport directors answered that their grant had been effective or very effective in reducing high fares; a majority, 63 airport directors, said that this issue was not applicable or that they had no basis to judge. Some of the airport directors responding to our survey also said that they thought the funds used for marketing had been effective. For example, one airport director said that the small airport he directs does not have a marketing budget and that the grant funds provided for marketing were more than the airport's total annual operating budget. 
The marketing funds therefore brought public awareness that the airport would not otherwise have been able to obtain. Another airport director said that he believed the marketing program conducted as part of the airport's grant resulted in an 11 percent annual increase in enplanements. AIR-21 requires that each year DOT designate an Air Service Development Zone as part of the Small Community Air Service Development Program. The act specifies that DOT shall work with the community or consortium on means to attract business to the area surrounding the airport, to develop land use options for the area, and to provide data, working with the Department of Commerce and other agencies. DOT sees this designation as providing an opportunity for the selected community to work with its grant award to stimulate economic development, increase use of the airport's facilities, and create a productive relationship between the community and the federal government to achieve these goals. DOT has designated one airport in each year of the program as an Air Service Development Zone: Augusta, GA (2002); Dothan, AL (2003); Waterloo, IA (2004); and Hibbing, MN (2005). Airports may apply for the designation by indicating their interest and providing supporting information in their grant applications. Airport officials said there are no special reporting requirements or additional funding for airports designated as Air Service Development Zones. Airport and local officials at the three locations designated in 2002 through 2004 said they did not know the criteria for being selected as an Air Service Development Zone or were unclear about why their airports were selected. Upon selection, all three airports met with DOT staff to further clarify what the program entails. Officials from one airport said that DOT suggested the airport come up with ideas for how to use the designation, which could serve as a model for other communities. 
Another airport official told us that DOT offered to introduce the airport to other federal agencies as part of the designation. However, another official said that other federal agencies, including FAA, do not "recognize" the designation as providing any special status for the airport. DOT officials said all of the requirements of other agencies, including DOT agencies, still apply to the airport and community. According to one local official, this makes the designation ineffective in fostering economic development. All three communities told us that the Air Service Development Zone designation has had neither positive nor negative effects on the airport, because it has done nothing either to help or to hurt them. The officials from all three airports noted that receiving the designation initially provided some positive local publicity for the airport, but that was the only effect they could name. Community and airport officials told us that any actual economic development that has occurred at or near the airport would have occurred without the Air Service Development Zone designation. Our review of completed Small Community Air Service Development Program grants to date found that they had a mixed record of meeting program goals. The projects we reviewed included both instances where grantees were able to develop self-sustaining air service and cases where this was not achieved. However, given that relatively few Small Community Air Service Development Program projects have been completed thus far (23 completed grants of the 157 awarded, or about 15 percent, as of September 30, 2005), it was too early for us to assess the overall effectiveness of the grants in improving air service to small communities. Examining the effectiveness of this program when more projects are complete would allow an evaluation of whether additional or improved air service was not only obtained but also continued after the grant support expired. 
This may be particularly important since our work on the limited number of completed projects found that only about half of the grantees reported that the improvements were self-sustaining after the grant was complete. In addition, our prior work on the Essential Air Service program found that once incentives are removed, additional air service may be difficult to maintain. Over the next year, an additional 58 projects are scheduled to expire, and examining the results from completed grants at that time may provide a clearer picture of the value of this program. Any improved service achieved from this program could then be weighed against the cost to achieve those gains. This information will be important as the Congress considers the reauthorization of this program in 2008. We also found that the Air Service Development Zone concept has had no identifiable effect at any of the three locations designated from 2002 through 2004. The officials at the three designated airports remained unclear about what they were supposed to do once designated a development zone. DOT sees this designation as providing an opportunity for the selected community to work with its grant award to stimulate economic development, increase use of the airport's facilities, and create a productive relationship between the community and the federal government to achieve these goals. DOT officials said they are available to help the designees, if asked. However, DOT has not developed guidance or a conceptual model for what an Air Service Development Zone should be or what it should accomplish. Without this guidance, DOT advice or direction is limited, and the designees may or may not pursue any Air Service Development Zone activities. 
To ensure the effectiveness of the Small Community Air Service Development Program, we are making the following two recommendations to the Secretary of Transportation: The Secretary should conduct an evaluation of the Small Community Air Service Development Program in advance of the program's reauthorization in 2008. Such an evaluation should occur after additional grant projects are complete and include a determination of the extent to which the program is meeting its intended purpose of improving air service to small communities. The Secretary should clarify what support and services the department will provide to communities that are designated as Air Service Development Zones. We provided copies of a draft of this report to the Department of Transportation for its review and comment. We received oral comments from DOT officials, including the Associate Director, Office of Aviation Analysis. The officials told us that, in general, they concurred with the report's findings and agreed to consider the recommendations as they go forward with the program. DOT also provided clarifying and technical comments, which we incorporated into this report as appropriate. We are sending copies of this report to appropriate congressional committees and the Secretary of Transportation. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding the contents of this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made major contributions to this report are listed in appendix VI. 
To determine how the Department of Transportation (DOT) has implemented the Small Community Air Service Development Grant Program, we obtained and reviewed legislation authorizing and funding the program as well as related orders and guidelines. We interviewed DOT officials regarding their grant review and selection process as well as the procedures they use to oversee and monitor grant implementation. We reviewed grant proposals and award information and information about how DOT used grant criteria to review grant applications and award grants. We reviewed program controls to understand DOT’s program oversight and monitoring. We also reviewed quarterly reports and final reports grantees submitted. We obtained and reviewed DOT financial data from the Office of the Secretary and from the Federal Aviation Administration. Based on our understanding of the data through discussions with knowledgeable agency officials, as well as checks for obvious errors in accuracy and completeness, we determined that the data were sufficiently reliable for our purposes. To determine what strategies have been used and what results have been obtained, we reviewed the grant applications and agreements for all 157 grants awarded from 2002 through 2005. We classified the types of strategies carried out within the program and summarized the types of activities funded. In addition, we conducted site visits at each of the 10 grantees that had completed their projects as of December 31, 2004. This included Charleston, WV; Daytona Beach, FL; Fort Smith, AR; Hailey, ID; Lynchburg, VA; Mobile, AL; Reading, PA; Scottsbluff, NE; Somerset, KY; and Taos, NM. We interviewed airlines associated with these completed grants to obtain information on air service trends at small community airports and the Small Community Air Service Development Program. 
Airlines interviewed included American Eagle Airlines, Continental Airlines, Delta Air Lines, TransStates Airlines, US Airways, Horizon Airlines, Rio Grande Air, and Westward Airways. We contacted 13 additional airports that completed their grants by September 30, 2005, to obtain basic information on the outcomes of their grants. We also interviewed selected aviation consultants that had prepared grant applications to obtain information on air service trends at small community airports and the Small Community Air Service Development Program. Aviation consultants interviewed included Wilbur Smith Associates, Vesta Rae and Associates, and Intervistas. In addition, we conducted two Web-based surveys. We sent surveys to the 146 airport directors involved in the 122 grants awarded by DOT from 2002 through 2004. We sent a different survey to the 116 airport directors who applied for but did not receive a grant. In both cases, we sent the survey to the airport directors or managers who were knowledgeable about the grant that was received or, in the case of the nongrantees, about the grant proposal. To determine the airports that were included in the grant awards, we reviewed the grant applications, information on the grants from DOT, and information from the grantees. To determine the airport directors who applied for but did not receive a grant, we reviewed the grant proposal documents from the DOT docket and information on the applications from DOT. We did not include airports smaller than a nonhub airport (as defined in 1997) in the nongrantee survey because they did not have scheduled commercial service. Each survey asked a combination of questions that allowed for open-ended and closed-ended responses. 
The survey to airports that received grants included questions about (1) the intended goals of the project, (2) project elements, (3) assessments of DOT's implementation of the grant program, (4) results obtained under the project, and (5) recent trends that have affected air service at the airport. The survey to airports that did not receive grants included questions about (1) the intended goals of the project, (2) project elements, (3) assessments of DOT's implementation of the grant program, and (4) recent trends that have affected air service at the airport. For both surveys, a GAO survey specialist designed the questionnaires in conjunction with other GAO staff knowledgeable about the grant program. In addition, we pretested the grantee questionnaire with three communities that had received fiscal year 2002 grants. We also had two aviation experts review the grantee questionnaire and provide comments. We pretested the nongrantee questionnaire with three other communities that had applied for, but did not receive, grants in each of the fiscal year 2002 through 2004 periods. During the pretests of each questionnaire, we asked whether the questions were understandable and whether the information would be feasible to collect. We refined each of the questionnaires as appropriate. Both surveys were conducted using self-administered electronic questionnaires posted to the World Wide Web. For the grantee survey, we sent email notifications to 146 airport managers and directors beginning on March 2, 2005. We then sent each potential respondent a unique password and username on March 8, 2005, by email to ensure that only members of the target population could participate in the survey. To encourage respondents to complete the questionnaire, we sent an email message to prompt each nonrespondent each week after the initial email message for approximately 3 weeks. We closed the survey on April 18, 2005. 
Because of the location and nature of the two grants awarded to the Aleutians East Borough in Alaska, we did not send surveys to each airport included in the grants. Instead, we asked that the legal sponsor of the grants complete a single survey for each of the two grants awarded. For those questions in the survey that specifically pertain to the airports involved in the grants, we asked that the sponsor respond for any of the airports in that grant for that specific grant year. We received 121 completed surveys, a response rate of 83 percent. To view our survey and airport directors' responses, go to www.gao.gov/cgi-bin/getrpt?GAO-06-101SP. The nongrantee surveys were also conducted using self-administered electronic questionnaires posted to the World Wide Web. For this survey, we sent email notifications to 116 airport managers and directors beginning on April 12, 2005. We then sent each potential respondent a unique password and username on April 14, 2005, by email to ensure that only members of the target population could participate in the survey. To encourage respondents to complete the questionnaire, we sent an email message to prompt each nonrespondent each week after the initial email message for approximately 3 weeks. We closed the survey on May 18, 2005. One application came from two airports in Hawaii; because both airports had the same airport director, we sent him only one survey. We received 83 completed surveys, a response rate of 72 percent. We removed two airport directors from the respondent list because their airports were included in a proposal submitted by a representative of the state DOT without the airports' knowledge. Therefore, these airport directors did not have sufficient information to complete the survey. To view our survey and airport directors' responses, go to www.gao.gov/cgi-bin/getrpt?GAO-06-101SP. Because these were not sample surveys, there are no sampling errors. 
However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, data collection, and data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaires in collaboration with GAO staff with subject matter expertise. Then, as mentioned earlier, the draft questionnaires were pretested with appropriate officials to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. Because these were Web-based surveys, respondents entered their answers directly into the electronic questionnaires. This eliminated the need to have the data keyed into a database, thus removing an additional source of error. We also called a random sample of 20 directors or managers of airports categorized as small hubs or nonhubs in 1997. We selected our sample from a total of 206 small hub and nonhub airports that we determined had never applied for a grant. We called the 20 airport directors to ask them why they had not applied. The sample was stratified by FAA region and airport size. While we did not attempt to project these results to all airports that did not apply for grants, the sample provided some useful observations on the types of reasons airports had for not applying. 
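A stratified selection of this kind can be sketched in a few lines of code. The following is an illustrative sketch only: the field names, the proportional-allocation rule, and the data structure are our assumptions, not GAO's actual sampling procedure, and the frame stands in for the 206 airports described above.

```python
import random
from collections import defaultdict

def stratified_sample(airports, k, seed=0):
    """Draw about k airports, stratified by (FAA region, airport size).

    `airports` is a list of dicts with "name", "region", and "size" keys --
    a hypothetical structure standing in for the actual sampling frame.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for a in airports:
        strata[(a["region"], a["size"])].append(a)
    sample = []
    for members in strata.values():
        # Proportional allocation, with at least one pick per stratum
        # (an assumed rule; the report does not describe the allocation).
        n = max(1, round(k * len(members) / len(airports)))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample[:k]
```

Because each stratum contributes at least one airport, a draw of this kind can slightly overshoot or undershoot the target before truncation, which is one reason stratified designs are typically documented stratum by stratum.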
To determine how passenger traffic and air service have changed at the nation's small community airports, we conducted a literature review of aviation trends, focusing on studies that describe overall trends at small community airports (small hubs and nonhubs) in terms of the number of scheduled flights and destinations, available seats on scheduled flights, and scheduled flights by aircraft type. We narrowed our criteria to analyses contained in studies and reports published in the past 5 years. We reviewed each of the studies meeting our criteria and determined that the studies were methodologically sound. As an additional assessment of the reliability of the studies' findings, we considered the reliability of the underlying data that were used in the studies and reports. Where noted in a study, we considered the steps that the study authors took to determine whether the data used in their analyses were sufficiently reliable for their purposes. For example, much of the published data come from DOT's Office of the Inspector General, which periodically reports to the Congress on small community air service. The Inspector General's reports on aviation trends relied on data from various sources. The data that we cited primarily came from the Federal Aviation Administration's Flight Schedule Data System, which derives from the Official Airline Guide Schedules Database. While the Inspector General's office did not systematically audit or validate the databases used in its reports, it conducted trend analyses and sporadic checks of the data to assess reasonableness and comprehensiveness. When its judgmental sampling identified anomalies or apparent limitations in the data, the office discussed these irregularities with the managers responsible for maintaining the data. 
Additionally, we made use of BACK Aviation Solutions, a private contractor that uses the Official Airline Guide Schedules Database and the Federal Aviation Administration Aerospace Forecasts, which are based on the Department of Transportation's Bureau of Transportation Statistics data on passenger traffic and fleet type. We recently issued a report in which we assessed the reliability of BACK's and DOT's data. Based on (1) reviews of documentation from BACK Aviation Solutions and DOT about their data and the systems that produced them and (2) interviews with knowledgeable agency and company officials, we found the information to be sufficiently reliable for these types of analyses. On the basis of our review of the methodologies cited in the studies, together with the authors' statements concerning the steps they took to assess the reliability of the underlying data, along with our previous data reliability assessments of the BACK Aviation Solutions and DOT databases, we concluded that the studies' analyses were sufficiently reliable for our purposes. We performed our work from September 2004 through October 2005 in accordance with generally accepted government auditing standards. Air service to nonhub airports has generally declined in recent years, as measured by the number of departure flights. Nonhubs have had an overall decrease in departures since July 2000. While all airports showed a decrease in service from July 2001 to July 2003, scheduled departures at small, medium, and large hub airports have increased since 2003. By July 2005, scheduled departures at small, medium, and large hub airports had largely rebounded, with departures from large and small hubs exceeding July 2000 levels. However, the decline of service at nonhub airports continued, with 17 percent fewer departure flights serving these airports in July 2005 than in July 2000. Many factors may help explain why some small communities face relatively limited air service. 
First, many network carriers have cut service to small communities as they face financial difficulties and restructure their operations. Regional carriers now operate at small communities from which network carriers have withdrawn. Second, regional carriers are phasing out turboprops in favor of regional jets, which has had a negative effect on small communities that have not generated the passenger levels needed to support regional jet service. Third, the "Commuter Rule" that FAA enacted in 1997 might also have had an effect. This rule was intended to bring small commuter aircraft under the same safety standards as larger aircraft. The change created challenges for small communities because it is more difficult to economically operate smaller aircraft, such as 19-seat turboprops, under the new safety requirements. In addition, the Aviation and Transportation Security Act instituted the same passenger screening requirements for smaller airports as for larger airports, creating a "hassle factor" for passengers. Fourth, low-cost carriers have emerged in the deregulated environment, but these airlines have generally avoided small communities, leading to the phenomenon of "leakage"—that is, passengers choosing to drive to a larger airport instead of using the small community airport. According to industry consultants, low-cost carriers are now looking to expand into medium-sized markets, which could further reduce air service at small community airports.

The financial condition of network carriers has negatively affected service to small communities, especially those served by nonhubs. We have reported that in response to the economic downturn that began in early 2001 and the events of September 11, 2001, many network carriers have been undertaking major restructuring and downsizing of their operations.
A regional airline association official noted that as part of restructuring, network carriers have transferred routes to regional carriers or reduced air service to certain communities. According to an industry association, network carriers have also discontinued some service at major hubs, which can, in turn, reduce service to small communities. Flights to small communities have been cut because they are often considered less profitable than other routes.

According to aviation consultants, turboprops have been the primary source of airline service to small communities, and in particular to nonhubs, because turboprops have been the most economically viable aircraft for small communities. However, turboprop use is declining. According to one aviation consultancy, from 1995 to 2005 the number of nonstop routes served by turboprops declined 54 percent. According to the FAA Aerospace Forecast Fiscal Years 2005-2016, the trend is toward further decline: by 2016, FAA expects that 10- to 40-seat turboprop aircraft will represent 13.3 percent of the fleet, down from 22.8 percent in 2004. According to FAA, the primary reason for the decline in turboprops has been the rising use of regional jets at small community airports. According to the DOT Office of the Inspector General, the number of regional jet flights at nonhubs increased 199 percent from July 2000 to July 2005. In comparison, flights by other types of aircraft declined—by 29 percent for large jets, 39 percent for turboprops, and 17 percent for piston aircraft. The increased use of regional jets at small communities is in line with national trends at larger airports. The FAA Aerospace Forecast Fiscal Years 2005-2016 states that jet departures by regional air carriers accounted for 65.8 percent of industry departures in 2004, compared with just 0.2 percent in 1991.
According to an aviation consultant, the increased use of regional jets, which tend to have 50 seats or more, makes it more difficult for small communities to fill the aircraft. Thus, according to the consultant, regional jets have not been a direct substitute for turboprops on routes; rather, regional jets may fly to denser passenger markets where they can operate profitably. Another trend that might negatively affect service to small communities is that some airlines have been procuring more 70- and 90-seat aircraft. According to the FAA Aerospace Forecast Fiscal Years 2005-2016, because these larger aircraft allow for longer flight lengths, new markets may be tapped for point-to-point service that bypasses congested hub airports. We have reported in the past that small communities may have particular difficulty attracting regional jet service because their passenger demand cannot support it. In addition, an aviation consultant and an airline industry association official both stated that scope clauses in labor agreements between regional and network carriers can constrain regional airlines in the aircraft size, routes, and airports served. For example, the aviation consultant said clause requirements that jets be used on certain routes have led to the retirement of turboprops even where turboprop service had been profitable.

In 1997, FAA enacted the "Commuter Rule," which called for "one level of safety" among all commercial aircraft and placed stringent safety standards on regional carriers. The intent was to bring aircraft that have 10 to 30 seats and operate scheduled service under the same safety standards as the larger aircraft operated by network carriers. The additional costs required to meet the increased safety standards made some smaller aircraft uneconomical to operate.
According to industry association officials and an aviation consultant, the safety upgrades have contributed to eliminating the 19-seat plane because of the increased operating costs. According to the FAA Aerospace Forecast Fiscal Years 2005-2016, in 1998, 1 year after implementation of the Commuter Rule, the number of city pairs served by regional or commuter carriers fell to its lowest level of the decade. Although the trend reversed in 1999 as more regional jets entered the fleet, the number of short-haul markets under 200 miles continued to decline. Furthermore, between 2001 and 2004, 456 city pairs in the 0-199 mile range and 248 in the 200-499 mile range lost nonstop regional or commuter service. Taking into account city pairs that gained service, the overall result was a net loss of 184 city pairs in the 0-199 mile range and 90 in the 200-499 mile range. FAA told us that part of this decline may be due to the Commuter Rule.

Small community airports are required to meet the same security standards as larger airports, which can be costly for them and can create a "hassle factor" for passengers. According to an aviation consultant, with increased security measures at airports, many in the traveling public have opted to drive or take trains or buses in the post-9/11 era. These consumers believe that, given the increased time it takes to pass through security, they are better off using another mode of transportation to reach their final destination.

Low-cost carriers such as Southwest and JetBlue provide point-to-point service in dense population markets with limited access to low fares, and in recent years this model has been relatively successful. According to the FAA Aerospace Forecast Fiscal Years 2005-2016, since 2000 network carriers have reduced their domestic capacity by 14.3 percent, while low-cost carriers have increased capacity by 40.5 percent.
Low-cost carriers generally avoid nonhub airports, where demand for their point-to-point service is insufficient to make serving them with their fleets of larger aircraft economically feasible. According to the Department of Transportation Office of the Inspector General, low-cost carriers scheduled service to only 5 of the more than 500 nonhub airports in July 2005, representing approximately 2 percent of the total available passenger seats at these airports. An aviation consultant stated that only the six large network carriers pay attention to small community air service. Low-cost carriers also pose a challenge to small communities: neighboring larger airports that have low-cost carrier service attract passengers from smaller airports, a phenomenon called leakage. We have reported that leakage is a critical factor in determining a community's demand for air service. In interviews with aviation consultants and at an industry conference, this issue was cited as one of the most significant challenges to attracting and maintaining air service at small community airports. According to aviation consultants, some low-cost carriers may begin flying from medium-density airports. Such a strategy might increase the impact of leakage, as more small community passengers would be closer to airports where low-cost service is provided. Some potential small community airport passengers may elect to drive to airports served by a low-cost carrier to access lower fares. Service at the smallest community airports might thus be further reduced.

How many carriers are serving the community?
How many destinations are served?
What is the frequency of flights?
What size aircraft serve the community?
Has the level of service been increasing or decreasing over the past 3 years?
Have enplanements been increasing or decreasing over the past 3 years?
Is the Metropolitan Statistical Area population increasing or decreasing?
Is the per-capita income increasing or decreasing?
Is the number of businesses in the area increasing or decreasing?
What is the proximity to larger air service centers?
What is the quality of road access to other air service centers?
Does the community lack service in identified top Origin & Destination markets?
Is the proposal designed to provide: first air service, second carrier service, new destinations, larger aircraft, or more frequencies?
If this is an air service project, has the community selected a carrier that is willing and committed to serve?
If this is an air service project, does the community have a targeted carrier that would serve?
Do demographic indicators and the business environment support the project?
Does the community have a demonstrated track record of implementing air service development projects?
Does the project address the stated problem?
Does the community have a firm plan for promoting the service?
Does the community have a definitive plan for monitoring, modifying, and terminating the project if necessary?
Does the community have a plan for continued support of the project if self-sufficiency or completion is not attained after the grant expires?
If mainly a marketing proposal, does the community have a firm implementation plan in place?
Is the applicant a participating consortium?
Is the project innovative?
Does the project have unique geographical traits or other considerations?
Is the amount of funding requested reasonable compared with the total amount of funding available?
Is the local contribution reasonable compared with the amount requested?
Can the project be completed during the funding period requested?
Is the applicant a small hub now?
Is the applicant a large nonhub now?
Is the applicant a small nonhub now?
Is the applicant currently subsidized through the Essential Air Service program?
Is the project for marketing only?
Is the project a study only?
Does the project involve intermodal services?
Is the project primarily a carrier incentive?
Is the project primarily air fare focused?
Does the project involve a low-fare service provider?
Does the proposal shift costs from the local or state level to the federal level?
Does the proposal show that proximity to other service would detract from it?
Is the applicant close to a past grant recipient?

DOT has recovered all or portions of the grant awarded to the grantee. Clarksburg/Morgantown, WV (reallocation)

At the time of the grant application, Charleston was served by five major airlines that had scheduled flights to 10 destinations. The application noted that despite this level of service, there was poor service to communities in the southwestern United States, Mexico, and Central and South America. The application also noted that large numbers of local public and private firms, as well as academic entities, needed service to the Houston metropolitan area. In 2002, Charleston proposed that the grant be used to obtain new regional jet service between Charleston's Yeager Airport and Houston, Texas' Intercontinental Airport. The application stated that this new service from Continental Airlines would have benefits for Charleston and West Virginia, including: serving a major origin and destination market for Charleston; enhancing connectivity for the region, saving consumers considerable time when connecting from points throughout the southwestern United States; opening same-carrier service to important industrial centers; giving West Virginia consumers an additional carrier choice; and enabling businesses to save employee time by eliminating connecting time for travel to and from Houston. Charleston desired two weekday nonstop round trips to Houston, plus two round trips on the weekend, using 37-seat or larger regional jets.
Charleston would require Continental to offer fares reasonably consistent with those charged, on a per-mile basis, on other routes of similar length flown with the same aircraft. On June 26, 2002, Charleston was awarded a $500,000 Small Community Air Service Development Pilot Program grant to facilitate acquiring service to Houston. The community provided an additional $100,000 local match. Charleston allocated $500,000 as a revenue guarantee to reduce the risk of losses for Continental in the early months of the new service. The community also allocated $20,000 for expenses necessary to meet with the new carrier and to provide basic advertising and marketing support for the new service.

On October 1, 2002, Continental started new nonstop service from Charleston to Houston. Initially, the service provided two flights daily, with the exception of Saturday, when one daily departure was provided. In January 2004, the service was reduced to one flight daily. Airport officials told us the reduction in the number of flights was a result of aircraft fleet utilization issues at Continental. However, according to an airport official, Continental subsequently resumed the second daily flight. The community stated in its final project report to DOT that the airport experienced an increase in enplanements and a reduction in passenger leakage as a result of the Charleston-to-Houston service. Additionally, as shown in table 5, the airport experienced a 31.8 percent overall increase in enplanements in October 2004 versus October 2002, when the service first started. In 2004, the airport set a record for enplanements with 291,300 and experienced a 15.6 percent increase in overall enplanements versus 2002. An airport official told us enplanement levels continue to rise as the airport continues to expand its catchment area, and that service levels at the airport are comparable to those at communities double its size.
In 2002, there were 11 carriers representing five major airlines serving the airport, and in 2004 there were 12 carriers serving the market. One local official told us that the success of the new Charleston-to-Houston service had a secondary effect in attracting an additional airline as well. In July 2004, Independence Air started serving the Charleston market. The official told us that the success of the Houston service, and the fact that Charleston had not experienced a drop in enplanements, showed Independence Air that Charleston could handle additional service from another airline.

At the time of the grant application, Delta Air Lines and the Continental Connection carrier Gulfstream were the only two carriers serving Daytona Beach. Delta provided daily service to Atlanta, Georgia, and Saturday service to Cincinnati, Ohio. Continental Connection provided 14 weekly nonstop flights to Tampa, Florida. However, the community in its grant application told DOT that the airport could handle an increase in scheduled commercial airline service, particularly to New York. The airport stated that it had a market area of 1,383,000 people, that the community had 8.5 million visitors in 2000, and that more than 325,000 of these visitors were from New York. Additionally, the grant application told DOT that the New York area provided the strongest pattern of in-migration to Volusia County/Daytona Beach among all states, excluding Florida. Thus, the community stated in its grant application that it needed direct service to the New York area. According to the grant application, Daytona Beach used to have service to New York, but as of September 11, 2001, the service was discontinued despite having an 81 percent average load factor (percentage of occupied seats on flights) over the last 12 months of service. According to Daytona Beach's grant application, air service had suffered in the community due to the large amount of traffic leakage to nearby airports.
Daytona estimated that 50 percent to 60 percent of its leakage was to either Orlando (65 miles) or Jacksonville (90 miles). Community officials said in the grant application that this high traffic leakage was a direct result of a lack of competitive air service, inadequate seat inventory, and the resulting fare differentials at Daytona Beach International Airport. At the time of the grant application, Orlando had approximately 354 daily departures and Jacksonville had 220 daily departures, while Daytona had an average of 7 daily departures. According to the grant application, higher fares for flying out of Daytona versus the nearby airports of Orlando and Jacksonville contributed to this leakage. Daytona Beach officials told DOT that, on average, their airport's fares to the same cities were 13 percent higher than Orlando's and 15 percent higher than Jacksonville's when tickets were purchased 21 days in advance. The community noted in the grant application that weaker load factors and additional seats at Orlando and Jacksonville had led to higher fares at Daytona.

To increase its air service, Daytona Beach stated in its grant application that it desired twice-daily regional jet service to the New York area's Newark Airport. The new service, provided by Continental Airlines, was scheduled to begin on December 14, 2002. On June 26, 2002, Daytona/Volusia County was awarded a $743,333 Small Community Air Service Development Program grant. The local community provided an additional $165,000, for a total project cost of $908,333. The community allocated $743,333 to Continental Airlines as a revenue guarantee for the initial 12-month ramp-up period. The community's goal was to make the service self-sufficient in the second year. Additionally, the community provided $165,000 for advertising and marketing for Continental's new service.
Components of the marketing program included newsprint advertising, newsletter advertising, Web site promotions, media press releases, radio advertising, ribbon-cutting ceremonies, and magazine advertising in both the Daytona Beach and New York areas. On December 12, 2002, Continental Airlines began service between Daytona Beach and Newark Airport. Continental operated two daily trips utilizing 50-seat regional jets. The revenue guarantee between Daytona Beach and Continental for service to Newark lasted 1 year, until December 11, 2003. Table 6 shows the quarterly passenger totals for this service.

A local official told us that the project has been a success. The Daytona Beach-to-Newark service continued to operate as of September 30, 2005. In addition, following the completion of the revenue guarantee, Continental extended its agreement with the airport for 2 years to provide service between Daytona Beach and Newark. The agreement expires in December 2005, but a local official expects it to be renewed so that Continental continues providing this service. In addition, according to a community official, passenger traffic has risen 30 percent at the airport in the 2 years since the grant. The airport now has service to Cleveland, Ohio, and seasonal commuter service to Tampa, Florida. Also, Delta has increased its service to 12 flights per day and has brought in larger aircraft to serve those flights. In the community's final report to DOT, one official said that the service expansion would not have been possible without the DOT grant. An airline official told us that the grant was successful because, even with the grant completed, Daytona Beach still has service to the New York area.

At the time of the grant application, Fort Smith was served by eight daily round trips via American Eagle Airlines to Dallas/Fort Worth, Texas, and three daily round trips by Northwest Airlink to Memphis, Tennessee.
Airport officials noted in their grant application that, at the time, they had inadequate service to the north and east. Furthermore, the grant application told DOT that business travelers in the region cited excessive backtracking as a reason they did not use the Fort Smith airport for travel to markets in the north and east. According to an airport official, the airport suffers traffic leakage to other airports: Fort Smith loses passengers primarily to Tulsa, Oklahoma (118 miles); Oklahoma City, Oklahoma (183 miles); and, to a lesser extent, Little Rock, Arkansas (159 miles). A local official estimated this leakage to be approximately 46,000 enplanements per year. To overcome this lack of service to the north and east, Fort Smith proposed to obtain service to St. Louis, Missouri, or Chicago, Illinois. The community previously had service to St. Louis, but problems with the service resulted in its cancellation in 1999. The community believed that this lost service led business travelers in the area to use alternate airports for service to markets in the north and east, and thus that initiating service to St. Louis or Chicago would help answer this untapped demand. Additionally, the grant application stated that officials at Fort Smith needed to overcome other challenges to improve the airport, including: a general lack of understanding of the airline industry within the business community, which had created unrealistic expectations; business travelers not fully considering the productivity losses sustained from using other airports; uncertainty among potential travelers created by the terrorist attacks of September 11, 2001, and the weak economy; and a general community perception that local air service was limited and available fares were high. On October 7, 2002, American Connection began providing three daily round trips from Fort Smith to St. Louis.
The service was initially provided with Jetstream 41 turboprop aircraft. Table 7 provides quarterly enplanements for this service. American Connection posted its strongest monthly performance with 1,144 enplanements in June 2003. At the end of the third quarter, in July 2003, American Airlines announced its plans to downsize its St. Louis hub. Daily departures out of St. Louis were reduced from 417 to 207 on November 1, 2003. Additionally, 26 feeder cities, including Fort Smith, lost service to St. Louis as of November 1. An airline official stated that had American not downsized St. Louis, the service from Fort Smith to St. Louis would have continued if passenger levels had remained the same.

According to Fort Smith's quarterly reports to DOT, an indirect benefit that Fort Smith has seen since the grant application is that American Airlines and Northwest Airlink have transitioned from turboprop service to regional jet service. According to airport officials, passenger loads are high and the airport continues to regain the seats it lost from the termination of the St. Louis service. Additionally, as shown in table 8, the community has seen an overall increase in passenger numbers from 2002 to 2004. Fort Smith officials stated that the money spent on marketing and studies helped their cause despite the loss of service to St. Louis. An official told us that the studies were helpful because they showed prospective airlines that they could fly profitably from Fort Smith. The official told us that, given the flight reductions at Chicago and St. Louis, the studies are important because local officials are now looking to acquire service to Detroit, Michigan, via Northwest Airlines. Airport officials told us that Detroit can serve as an alternative to Chicago and St. Louis, and a local official told us that Detroit would provide Fort Smith travelers access to the northeastern part of the country as well as Europe and Japan.
Local officials told us that the studies performed under the grant put the airport in a position to talk with airlines about potential service to Detroit.

At the time of the grant application, Hailey's Friedman Memorial Airport had scheduled commercial air service to Seattle/Tacoma, Washington, and Salt Lake City, Utah. The Seattle service was provided with 37-seat de Havilland Dash 8 aircraft, and the Salt Lake City service with 30-seat Embraer 120 aircraft. Hailey's application stated that it was a resort destination community with an economy dependent on tourism and that Los Angeles, California, was the area's number one market. The purpose of the grant request was to: provide air service improvements to stimulate air travel and reduce travel expense between Sun Valley and Los Angeles; stimulate local economic activity by improving air service between Sun Valley and Los Angeles; improve air access from the Sun Valley region to key destinations in the western United States; and improve air service to a rural region whose airport, Friedman Memorial Field, is significantly restricted by high altitude and mountainous terrain. The grant application told DOT that the airport's location does not allow certain aircraft to land at Friedman Memorial Airport: the airport's elevation (5,300 feet) and runway length (6,600 feet) restrict the types of aircraft that can use it. During winter months, flights are sometimes diverted due to low visibility. During the summer, flights are weight-restricted because of the higher density altitude caused by warmer temperatures. A community official told us that this difficult operating environment is a factor hampering air service. Additionally, the grant application told DOT that the airport experiences leakage.
Other airports used by potential Friedman passengers include Boise (154 miles), Magic Valley/Twin Falls (64 miles), Pocatello (150 miles), and Idaho Falls Regional Airport, Idaho (140 miles). Additionally, a local official told us that the expense of flying into Hailey is also a challenge. To increase its air service, the community proposed new service to Los Angeles: Horizon Airlines would provide daily round-trip service from Friedman Memorial Airport in Hailey to Los Angeles on 70-seat turboprops. The June 26, 2002, grant agreement provided the City of Hailey $600,000, and the community provided a local match of $271,743. The community allocated $644,344 of these funds to cover a revenue shortfall for Horizon Airlines during a 12-month ramp-up period; the community estimated that it would take up to 12 months for passenger levels to reach full maturity. An airport official told us that the grant allowed the airline to overcome the initial risk of operating a new route by providing a subsidy for the first year. Additionally, Hailey allocated $175,000 for marketing, including direct sales, direct mail, print advertising, Internet marketing, and radio advertising. Marketing would be targeted to people living in the Los Angeles area who might be interested in visiting nearby Sun Valley and to residents of the Sun Valley area who might be interested in traveling to Los Angeles for business or personal reasons.

On December 15, 2002, Horizon Air commenced scheduled service from Hailey to Los Angeles, with one daily round trip, which operated until December 17, 2003. In its final project report, the community told DOT that the recreational nature of Hailey and the nearby Sun Valley market generated more traffic in the first and third quarters than in the second and fourth quarters; the two higher-traffic seasons were the winter and summer months, which are the peak tourist seasons in the area.
In its final report, the community told DOT that Hailey's projection for the first year had been 27,366 origin and destination passengers, which would have produced a 53.6 percent load factor. As shown in table 9, the actual totals were 19,335 passengers and a 41.5 percent load factor. A local official stated in the final project report that the 70-seat de Havilland Dash 8-400Q is a large aircraft for the market, resulting in lower load factors, and that the route Horizon Airlines flies would be best served by a 50-seat aircraft. According to Hailey officials, however, no 50-seat regional jets have the capability to serve the market, given the airport's current limitations.

In 2004, upon completion of the Small Community Air Service Development Program grant, Horizon Airlines stopped providing year-round service to Hailey. Instead, the community contracted with Horizon to provide seasonal service between Hailey and Los Angeles. Additionally, with the grant completed, a local Hailey company provided Horizon Airlines a revenue guarantee to continue flying the service into Hailey. A company official told us that the grant gave the company justification to promote air service in the community; the official's goal is to make the service between Los Angeles and Hailey self-sufficient within 5 years so that a revenue guarantee is no longer needed. In addition, a local official told us that the grant helped start new air service, provided by Horizon Airlines, between Oakland, California, and Hailey. A local official told us that the grant has reduced passenger leakage to Boise and Twin Falls, Idaho. However, a local official told us that one problem the community still encounters is that flights are diverted to Twin Falls due to weather. An airport official told us that if a new instrument landing system were installed, up to 30 percent of the flights that are now diverted could land in Hailey.
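The load factors cited in Hailey's final report are simple ratios of passengers carried to seats offered. A minimal sketch of the arithmetic, assuming one daily 70-seat round trip flown every day of the year (the seat count below is our reconstruction from those assumptions, not a figure from the report):

```python
def load_factor(passengers: int, seats: int) -> float:
    """Load factor: the share of offered seats actually occupied."""
    return passengers / seats

# One daily 70-seat round trip flown every day of a year offers roughly:
planned_seats = 365 * 2 * 70  # 51,100 seats (assumed, not from the report)

# Hailey's first-year projection of 27,366 passengers against that
# capacity works out to roughly the 53.6 percent load factor projected
# in the final report.
projected = load_factor(27366, planned_seats)

# Conversely, the actual result of 19,335 passengers at a 41.5 percent
# load factor implies that roughly 46,600 seats were actually flown,
# fewer than planned (consistent with weather diversions and a partial
# first year of service).
implied_seats = round(19335 / 0.415)
```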
Currently, under Hailey's agreement with Horizon Airlines, the community pays the cost of busing passengers from Twin Falls to Hailey when planes are diverted due to weather. An airline official told us that the grant definitely succeeded and met the airline's expectations for providing service between Hailey and Los Angeles for part of the year.

At the time of the grant application, Lynchburg had service to Atlanta, Georgia; Charlotte, North Carolina; and Philadelphia and Pittsburgh, Pennsylvania. The Atlanta service was provided by Atlantic Southeast Airlines/Delta Connection, and the Charlotte, Philadelphia, and Pittsburgh service was provided by US Airways/Air Midwest/Shuttle America. According to the April 19, 2002, grant application, Lynchburg had recently lost United Express/Atlantic Coast service to Washington's Dulles Airport. Furthermore, the community had experienced a recent overall decline in service: from April 1999 to April 2002, the community lost 580 weekly departing seats and 23 weekly departing flights. According to the grant application, to fill its air service deficiency and recapture lost traffic, Lynchburg proposed an upgrade from turboprop to small jet service to Atlanta and Pittsburgh. Additionally, the community wanted an upgrade to a larger turboprop for its service to Charlotte. According to the grant application, the objectives were to: establish additional service that would meet the needs of the region; capture passengers from the service area who use other airports because of insufficient services; build additional ridership at the airport by offering service options competitive with those found at communities of comparable size; strengthen the economic base of the region; and enhance levels of air service in Lynchburg.
Lynchburg noted in its grant application that it had higher airfares relative to other nearby airports in the region, such as Newport News, Roanoke, and Charlottesville, Virginia. For example, in a study the community found that, based on 3-day advance purchase business fares, fares between Lynchburg and Los Angeles were 19.7 percent greater than from Roanoke (55 miles), 227.8 percent greater than from Newport News (213 miles), and 23.9 percent greater than from Charlottesville (66 miles). Overall, of the five sample markets provided in the community's grant application, only one (Chicago O'Hare) had fares that exceeded those offered at Lynchburg. In addition, the grant application stated that the airport suffered a great deal of passenger leakage to nearby airports. In the application, the community noted that a recent study concluded that 38.4 percent of the traffic generated by the population residing within Lynchburg's catchment area traveled to other airports, due to lower fares and wider availability of air service. It was estimated that 9 percent of the traffic was leaking to Roanoke (55 miles) and 13 percent to Raleigh/Durham, North Carolina (180 miles), to utilize low fare air service. Six other nearby airports accounted for approximately another 17 percent of leakage out of the community, according to the application. The community told DOT in its application that some of this leakage could not be recaptured due to low fare service at Raleigh/Durham. However, the community also told DOT that much of the lost traffic was due to consumer preference for larger and more comfortable aircraft. The June 26, 2002, grant agreement provided Lynchburg $500,000, while the local community provided $100,000 in matching funds, for a total of $600,000. Lynchburg allocated $475,000 of the program for a 12-month revenue guarantee for Delta to upgrade to small jet aircraft (32 seats or greater). 
The remaining $125,000 was used for advertising and marketing of the airport's newly upgraded service. This sum included payments for consulting services to negotiate with the target carrier, as well as marketing efforts after the recruitment that benefited both the new carrier and incumbents. Lynchburg Airport and Delta negotiated a revenue guarantee to upgrade the Lynchburg to Atlanta service from 30-seat turboprops to 40-seat regional jets beginning on May 4, 2003. The service provides three roundtrips a day between Lynchburg and Atlanta, and it helped increase Delta's passenger capacity in this market by 25 percent. Additionally, on May 2, 2004, US Airways upgraded its Lynchburg to Charlotte service from 19-seat turboprops to 37-seat Dash-8 turboprops. This upgrade in service was provided without a revenue guarantee from Lynchburg. In a quarterly progress report to DOT, an airport official said that US Airways had upgraded its service partly due to the success of the new Delta jet service. The Charlotte service provides the airport six daily departures. In total, the upgraded US Airways and Delta flights provided Lynchburg with nine daily departures and 342 passenger seats. Lynchburg has, however, lost air service from US Airways to Pittsburgh and Philadelphia since the 2002 grant application. An airport official told us that the service was lost due to the economic problems facing major airlines, a general unwillingness among people to fly after September 11, and US Airways' reduction of its operations in Philadelphia. Despite this loss in service, Lynchburg's enplanements have risen since 2002. (See table 10.) Additionally, total passenger traffic has increased from 100,274 in 2002 to 120,174 in 2004. In its final project report to DOT, the airport credited this increase in traffic to the upgrade in jet service, the lowering of fares at the airport, and increased service at the airport. 
An airport official told us that the program was a success because it resulted in an additional three sustainable jet flights daily. Additionally, on April 5, 2004, Delta Air Lines deemed the upgraded jet service a success and agreed to continue providing the service without a revenue guarantee after the Small Community Air Service Development Program revenue guarantee ended in May 2004. Furthermore, the community's final report to DOT noted that the airport had seen an increase in enplanements and a decrease in leakage. The community told DOT that this occurred due to the upgrade in jet service and a lowering of fares at the airport. The community still has the same number of weekly departures as before the grant, but the upgrade in jet service has given the community more available passenger seats than in January 2002. Even with this increase in passenger seats, the airlines' load factors at Lynchburg have risen since the 2002 grant application. At the time of the grant application, Mobile was served by Delta Air Lines, US Airways Express, Continental Express, and Northwest Airlink. These four airlines provided Mobile service to Atlanta, Georgia; Dallas/Fort Worth, Texas; Charlotte, North Carolina; Houston, Texas; and Memphis, Tennessee. In previous years, however, Mobile had experienced a decline in air service. Between 1996 and 2002, six airlines cancelled service on seven routes. According to the grant application, since September 2001 the community had lost service to Chicago, Illinois; Cincinnati, Ohio; Birmingham, Alabama; and Washington, D.C. Furthermore, since July 2001, the community had gone from 28 daily departures to 20 and had declined from 10 nonstop cities to 5. According to the grant application, fares had been a long-standing problem for Mobile. Mobile stated that it had paid average fares up to 40 percent higher than those of its counterparts since 1995. 
These higher fares had led Mobile passengers to drive to nearby airports such as Pensacola, Florida (60 miles); Gulfport, Mississippi (70 miles); and New Orleans, Louisiana (150 miles) to access lower fares or direct service. To obtain additional service, Mobile proposed to develop an airport-airline business model to enable more profitable air service at the airport. Under the model, the Mobile Airport Authority would own and operate the airline ground stations, charging the airline on a per turn (one arrival and subsequent departure) basis for its use of equipment and staff. The airline station staff would be airport employees, and the airport would provide all the equipment required to handle ground operations. An airport official told us that the community believed this initiative would help airlines with their high start-up costs in a market. If several airlines serve the airport, the program can reduce cost and inefficiency by eliminating the need to duplicate staff, equipment, and operations. In addition to developing the airport-airline business model, the goals of the grant, according to the grant application, were to: recruit new service from US Airways Express, including additional frequencies to Charlotte and new service to selected US Airways cities; and recruit nonstop service to the target cities of New York, Orlando, Chicago, and Birmingham. At the time of the application, the Mobile Airport Authority had already established the new airport-airline program for US Airways. Responding to an announcement that US Airways would completely withdraw from Mobile after September 11, 2001, the Airport Authority hired 10 former station employees and took over handling ground operations for US Airways. In turn, US Airways maintained one local employee and kept some service open. The goal of the program was to use the business model to prevent other airlines from pulling out of the market or to recruit carriers into the market. 
The June 26, 2002, grant agreement provided Mobile $456,137 for the airport-airline business model, and the city of Mobile contributed $20,000 toward the project, for a total of $476,137. The grant allowed Mobile to allocate $144,645 to purchase appropriate ground handling and office equipment to continue operating the existing station; the equipment being used at the time for US Airways was on loan from a previous tenant. In addition, $311,492 of the program was allocated as funding for direct operating expenses for personnel, supplies, and maintenance for the existing station for 1 year of operation. The remaining $20,000 was allocated toward marketing support for any new service that participated in the new airport-airline program. Mobile successfully retained US Airways service to Charlotte. An airline official told us that, before the grant, the Mobile to Charlotte service was not performing as well as expected and the airline was planning to leave the market. The airline official told us that much of the problem was due to US Airways staff not being used efficiently: because US Airways had a limited number of flights, ground station costs per flight were high. The airline official told us that the Small Community Air Service Development Program grant provided enough of a cost savings to keep US Airways in the market. Currently, there are eight Airport Authority staff allocated to the program. The staff are put through a training program sponsored and paid for by US Airways, with the exception of lodging and food, which are paid for by the airport. One airport official told us that they were not sure how much they were saving US Airways, but US Airways continues to provide Mobile air service. After the training takes place, the staffing initiative is administered and funded by the airport. There are no local taxes or funding supporting the program. 
Additionally, Mobile officials told us that the station cost program was successful in securing new service from American Airlines. On April 11, 2005, Mobile announced that American Airlines would operate two daily round-trip flights between Mobile and Dallas/Ft. Worth, Texas, beginning June 9, 2005, using 44-seat Embraer ERJ-140 jets. An airport official told us that Mobile's station cost program was the reason for American's decision: the official convinced American that Mobile was prepared to take over ground station costs until the airline made a profit with its new service. US Airways and American Airlines are the only airlines in Mobile to utilize the airport's station cost offer so far. Airport officials told us that they have offered the ground station program to other air carriers serving Mobile, but none of the carriers expressed interest in the program. An airport official told us that the program would not work as well for incumbent airlines because their ground staff would likely lose their jobs; if other carriers chose to participate, the Authority would probably not need to hire all of an airline's staff. Instead, the Authority would economize operations with the staff it already employs and increase staff as needed. However, an airport official told us that the program works well for airlines like US Airways that are planning to pull out of a market, and for smaller carriers coming into a market where the start-up costs are prohibitive. At the time of the grant application, Reading was served by US Airways with two daily flights to Philadelphia, Pennsylvania, and four daily flights to Pittsburgh, Pennsylvania. The community noted that the lack of other air service and the fares at the airport caused 91 percent of Reading's ticketed passengers to leak to nearby airports. Additionally, at the time of the grant application, the airport's enplanement numbers were half of the volume generated in 1989. 
In its grant application, Reading indicated that it had attempted to hold talks with US Airways regarding service improvements and with additional carriers about providing service to Reading. The community told DOT that it had held discussions with US Airways about returning service to pre-September 11 levels. Additionally, it had held discussions with Delta Air Lines for new service to Atlanta, Georgia, or Cincinnati, Ohio; Air Tran for service to Atlanta, Georgia, and Florida; and Northwest Airlink for service to Detroit, Michigan. In its 2002 application, Reading sought to (1) implement a marketing campaign to raise awareness of flying from Reading, (2) retain a marketing and air service consultant to develop and manage the airport's local advertising campaign, and (3) develop the Reading Connection to provide regularly scheduled bus service to Philadelphia to demonstrate the demand for the air service that had been reduced. The June 26, 2002, grant agreement provided Reading $470,000 for the total project, and Reading added a local match of $30,000. Reading allocated $300,000 to subsidize the Reading Connection bus service, $50,000 toward general airport advertising, $50,000 for consultant services, and $70,000 toward advertising and promotion of new carrier services at the airport. The Reading Connection was a bus service between Reading and Philadelphia that was intended to demonstrate to airlines that there was demand for increased air service at Reading. General airport advertising included radio promotions, print advertising, press releases, direct mail pieces, email newsletters, and website development. The consultant services were used to retain a marketing and air service development consultant to manage the airport's local advertising, public relations, and community outreach programs. The advertising and promotion component would be used to aggressively market a new carrier's entrance into the Reading market. 
Elements of the program included billboards, radio, print, direct mail, and community receptions. Reading Airport lost all commercial air service as of September 2004. The community lost service to Philadelphia and Pittsburgh via US Airways and was unable to recruit new service. A local official told us that US Airways stopped serving Reading because it felt the bus service would be in direct competition with the airline. Additionally, the official told us that Reading's proximity to nearby airports in Philadelphia, Allentown, and Harrisburg, Pennsylvania, made Reading a low priority for air service in Pennsylvania. According to a local official, the Reading Connection bus service operated until the subsidy provided by the Small Community Air Service Development Program ran out. After the grant, the service could not sustain itself, and it ended. However, a local entrepreneur has since restarted the service without subsidy and provides five round trips daily between Reading and Philadelphia. According to a local official, although the grant did not work the first time, the name recognition that the original grant provided has created the demand for the bus service now. At the time of the grant application, Scottsbluff was served by Great Lakes Airlines with three daily round trip flights to Denver, Colorado. The community told DOT that for travelers to Lincoln or Omaha, Nebraska, the connections and fares through Denver were poor. A local official told us that the 450 miles from Scottsbluff to Omaha could be driven faster than flying to Denver and waiting several hours for a connecting flight to Omaha. Additionally, a local official told us that people in western Nebraska feel separated from the rest of the state. In the grant application, the community noted that the lack of intrastate service hinders government entities, businesses, educational institutions, and individuals traveling for personal reasons. 
Thus, Scottsbluff in its 2002 grant application proposed to support the development of an intrastate air service, provided by Westward Airways, linking eastern and western Nebraska. Scottsbluff previously had similar intrastate service, but operations ceased in November 1995 when the carrier declared bankruptcy. This previous service had been provided under the Essential Air Service program. An airport official told us that there is no direct competition for the Westward Airways intrastate service. The June 26, 2002, grant agreement provided Scottsbluff $950,000 for the project, and the local community provided $750,000 in funding, for a total of $1,700,000. Westward Airways, in conjunction with Scottsbluff, provided the intrastate service. The grant allocated $867,893 to fund pre-operating expenses. These included all the costs the company anticipated during the 6-month pre-operating period, such as administrative and flight operations personnel wages and benefits, personnel training, professional fees, facility rent and insurance, and aircraft acquisition. The remaining $832,107 was allocated to fulfill the company's working capital requirement, including funds for cash flow operations and forecasted growth phases. Westward Airways commenced its Nebraska intrastate service in June 2004 and ceased operations in July 2005. The service consisted of two daily weekday roundtrips that stopped in Scottsbluff, North Platte, Lincoln, and Omaha, Nebraska. All Westward Airways flights in Nebraska were conducted on the Pilatus PC-12, a pressurized aircraft capable of 300 miles per hour cruising speeds at altitudes up to 30,000 feet. As shown in table 11, the Scottsbluff service carried 234 passengers in April 2005. The Westward Airways intrastate service added 10 weekly flights from Scottsbluff, increasing the airport's weekly departures from 18 to 28. 
The community in its final report to DOT stated that the program increased enplanements and reduced passenger leakage at the airport. However, the final project report said that initial passenger enplanements were not as robust as expected. It noted that the market had taken longer to develop because travelers are extremely price sensitive. In July 2005, Westward Airways encountered financial difficulties and ceased operations. At the time of the grant application, Somerset did not have commercial air service. Passengers in the region travel to Lexington, Kentucky (80 miles); Louisville, Kentucky (130 miles); and Cincinnati, Ohio (150 miles) to utilize commercial air service. According to the grant application, because Somerset is not located on the interstate highway system, access to these nearby commercial airports is more difficult. The community told DOT in the grant application that the lack of commercial air service in the region limits the community's ability to attract additional industry and recreational travelers. In the grant application, Somerset noted that the nearest airport, at Lexington, offered only a modest amount of nonstop service at a relatively high average fare. Thus, the community noted that an air traveler wanting to go to or from the Somerset region was faced with driving a considerable distance or paying high prices for air travel. The community noted that these factors tended to constrain air travel demand and the economic development of the Somerset region. As a result, Somerset, in association with the counties of Casey, McCreary, Pulaski, Russell, and Wayne, proposed to conduct a feasibility study to determine the potential for commercial air service at the Somerset-Pulaski County Airport. If service proved feasible, the study would also identify a mechanism to implement an appropriate level of service. 
The objectives of the application included: identifying the level of demand under different operating scenarios (operators, equipment, frequencies, destinations, and fares); preparing materials for presentation to potential carriers; and contacting potential carriers to determine implementation needs. The grant provided Somerset with $95,000, and the community provided a local contribution of $18,000. The grant was used to complete a feasibility study for commercial air service in the region and also provided the community with funds to solicit potential airlines. Specifically, the study's goals were to examine (1) potential travel demand for the airport, (2) development of proposed operating scenarios, (3) economics of operating scenarios, (4) identification of potential operators, and (5) development of a Somerset-Pulaski County air service marketing plan. According to the grant application, the potential demand projections would allow Somerset to estimate demand if air service were available to the region. The development of proposed operating scenarios would help determine possible service options, scheduling, and selection of appropriate aircraft. The economics of operating scenarios would evaluate potential combinations of location and aircraft and rank them based on their economic potential. Identification of potential operators would place emphasis on air carriers with the appropriate equipment to serve the Somerset market. Lastly, a marketing plan would be developed that included identifying future budgetary needs. Somerset developed an air service development plan study to document the air service needs of the community. A local official told us that the community learned from the development plan that it can support new air service. The community is currently attempting to attract commuter air service to help with tourism, to attract more industry, and to create better jobs. 
According to the local official, the air service development plan has led to initial talks with airlines about providing service to Somerset. Community officials told us that they predict people using the airport would be interested in saving time and money by flying out of Somerset. The community's feasibility study found that 30 percent of businesses in the Somerset area stated that good air transportation access is important or very important for business expansion. As for recreation, one local official told us that the community attracts six to seven million tourists per year, and that the number could increase if commercial air service were provided. Community officials told us they believe that, given the drive time and costs such as gas and parking fees at other airports, passengers will utilize Somerset's airport. However, one local official told us that for the new service to succeed, the community must support it and market it extensively. For example, this official suggested that local businesses could encourage their employees to fly the routes served by Somerset to keep the load factors high. Furthermore, community leaders told us that the study has had indirect benefits as well. The study has spurred spin-off improvements at the airport and in the community, including new lights at the airport, a new instrument landing system, and a new intermodal transportation park. Additionally, the community is in the process of building a new $2 million terminal at the airport and is adding $1.5 million in airport infrastructure. Taos had scheduled commercial air service at the time of the grant application via Rio Grande Air to Albuquerque, New Mexico. The service, provided on 9-seat Cessna Caravans, began in August 1999 with scheduled service between Taos, Los Alamos, and Albuquerque, New Mexico. 
In January 2000, the state helped supplement this service when it awarded a grant of $100,000, which was matched by the Town of Taos, the Village of Taos Ski Valley, and the county of Los Alamos. In October 2001, the state awarded a grant of $190,000 to help fund service between Taos, Ruidoso, and Albuquerque, New Mexico. Taos provided $25,000 in matching funds, the Village of Taos Ski Valley provided $25,000, and Ruidoso provided $150,000. In 2002, Taos and Ruidoso jointly applied for a Small Community Air Service Development Program grant. The primary objective of the grant was to fund Rio Grande's service to Albuquerque. Ruidoso eventually decided to withdraw from the grant because it wished to obtain service to El Paso, Texas. According to an airport official, the elevation of the Taos airport (7,091 ft.) and the length of the runway (5,800 ft.) pose landing problems for aircraft: the runway is too short and narrow to land many types of airplanes. He told us that if the runway situation improved, they would try to get larger aircraft to serve Taos. At the time of the grant application, the community noted a reluctance of some travelers to fly on the small aircraft that serve Taos. Along with this reluctance, the application noted that capturing local passengers who drive to Albuquerque is a problem. The community noted in its grant application that many travelers and travel agents in other markets were not aware of Rio Grande Air. Additionally, the community described the air service provided by Rio Grande at the time of the grant application as fragile due to its relative newness. The goals of the grant application were to: fortify Taos' air service; expand advertising and promotion to solidify support for the service; create a self-sustaining air service for Taos' mountain resort; and provide a link to the new air service, through ground transportation connections, for other communities of the Taos/Enchanted Circle region. 
The application sought funds to continue the service Rio Grande Air was providing to Albuquerque at the time of the grant. The service was then only 2 years old, and the community considered it fragile. The June 26, 2002, grant agreement provided Taos with $500,000. The Town of Taos, Taos Ski Valley Incorporated, and Taos Aviation Services provided $200,000, and the State of New Mexico provided another $200,000 in state funding, bringing the overall project total to $900,000. The application allocated $634,423 of the program's cost to cover a revenue guarantee for Rio Grande Air during the initial phases of service. In addition, the application allocated the Town of Taos $265,577 for advertising and promotion of the continuing service. The advertising and promotion component included billboards, newspapers, magazines, television, and radio advertisements, and was used to target drive market visitors, business travelers, and in-state tourists. Rio Grande continued to provide service to Albuquerque until June 2004, when the service was discontinued because the airline went bankrupt. An official from Rio Grande Air told us that the support from the community was not sustained after the Small Community Air Service Development Program funding ended. He also told us that there were many setbacks the grant could not control, such as a tremendous drought in the region leading to a weak ski season, a major forest fire that caused a drop in enplanements, and a drop in the overall economy after September 11. Additionally, the Rio Grande official told us that the airline needed more planes to improve its economies of scale: an airline cannot succeed if all of its overhead costs have to be spread over just two aircraft, since the aircraft become too expensive to operate. 
However, the Rio Grande official told us that the service, when operating, helped build enplanements and steady growth in passengers for Taos. An airport official told us that the project was a success because the community had a taste of air service and there is now demand for service from Taos to Albuquerque. Table 12 shows the passenger traffic for Rio Grande Air from the 2002 grant application year through May 2004. In 2003, Taos and a consortium of New Mexico communities received another Small Community Air Service Development grant. The grant provided intrastate service for Gallup, Taos, and Las Cruces, New Mexico. The new service began in December 2004 and was provided by Westward Airways. However, the service was discontinued in July 2005 when Westward Airways filed for bankruptcy. In addition to the individual named above, other key contributors to this report were Glen Trochelman, Assistant Director, and Robert Ciszewski, Catherine Hurley, Stuart Kaufman, Alexander Lawrence, Bonnie Pignatiello Leer, Maureen Luna-Long, and Nitin Rao.
Over the last decade, significant changes have occurred in the airline industry. Many legacy carriers are facing challenging financial conditions, and low cost carriers are attracting passengers away from some small community airports. These changes, and others, have challenged small communities to attract adequate commercial air service. To help small communities improve air service, Congress established the Small Community Air Service Development Program in 2000. This study reports on (1) how the Department of Transportation (DOT) has implemented the program and (2) what goals and strategies have been used and what results have been obtained by the grants provided under the program. The Small Community Air Service Development Program grants are awarded at the discretion of the Secretary of Transportation. GAO found that DOT considered the statutory eligibility criteria and priority factors, as well as other factors, in evaluating proposals and in making awards. The number of grant applications has declined since 2002; DOT officials see this as a consequence of the large number of ongoing grants and the impact of 2003 legislative changes. In surveying airport directors, we found that grantee airports generally responded positively to DOT's process for awarding grants: about two-thirds were satisfied with the clarity of the selection criteria, while about one-third of directors at airports not receiving grants were satisfied with that clarity. DOT oversight is based on reviews of grantee reports and reimbursement requests, and DOT has terminated some projects and reallocated the unexpended funds to others. Individual grant projects had goals including adding flights, airlines, and destinations; lowering fares; obtaining better planning data; increasing enplanements; and curbing the loss of passengers to other airports. 
Grantees used a number of strategies to achieve their goals, including subsidies and revenue guarantees to the airlines, marketing to the public and to the airlines, hiring personnel and consultants, and establishing travel banks. Results for the 23 projects completed by September 30, 2005, were mixed: about half of the airports reported air service improvements that were self-sustaining after the grant was over. Some projects were not successful due to factors beyond the project, such as an airline decision to reduce flights at a hub. However, it is too soon to assess the overall effectiveness of the program, because most funded projects are not complete: 127 of the 157 awarded grants are ongoing. DOT designates one airport each year as an Air Service Development Zone. The communities selected in 2002, 2003, and 2004 expressed similar concerns about the usefulness of this designation. None of the communities could cite any effect the Air Service Development Zone designation had for them; instead, communities expressed confusion as to what DOT's designation was supposed to provide.
The U.S. export control system for items with military applications is divided into two regimes. State licenses munitions items, which are designed, developed, configured, adapted, or modified for military applications, and Commerce licenses most dual-use items, which are items that have both commercial and military applications. Although the Commerce licensing system is the primary vehicle to control dual-use items, some dual-use items—those of such military sensitivity that stronger control is merited—are controlled under the State system. Commercial communications satellites are intended to facilitate civil communication functions through various media, such as voice, data, and video, but they often carry military data as well. In contrast, military communications satellites are used exclusively to transfer information related to national security and have one or more of nine characteristics that allow the satellites to be used for such purposes as providing real-time battlefield data and relaying intelligence data for specific military needs. In addition, the technologies used to integrate a satellite to its launch vehicle are similar to those used to integrate ballistic missiles to their launch vehicles. In March 1996, the executive branch announced a change in licensing jurisdiction transferring two items—commercial jet engine hot section technologies and commercial communications satellites—from State to Commerce. In October and November 1996, Commerce and State published regulations implementing this change, with Commerce defining enhanced export controls to apply when licensing these two items. State and Commerce’s export control systems are based on fundamentally different premises. The Arms Export Control Act gives the State Department the authority to use export controls to further national security and foreign policy interests, without regard to economic or commercial interests. 
In contrast, the Commerce Department, as the overseer of the system created by the Export Administration Act, is charged with weighing U.S. economic and trade interests along with national security and foreign policy interests. Differences in the underlying purposes of the two control systems are manifested in the systems' structures. Key differences include who participates in licensing decisions, the scope of controls, the time frames for decisions, coverage by sanctions, and requirements for congressional notification. Participants. Commerce's process involves five agencies—the Departments of Commerce, State, Defense, and Energy, and the Arms Control and Disarmament Agency. Other agencies can be asked to review specific license applications. For most items, Commerce approves the license if there is no disagreement from reviewing agencies. When there is a disagreement, the chair of an interagency group known as the Operating Committee, a Commerce official, makes the initial decision after receiving input from the reviewing agencies. This decision can be appealed to the Advisory Committee on Export Policy, a sub-cabinet-level group composed of officials from the same five agencies; from there to the cabinet-level Export Administration Review Board; and then to the President. In contrast, the State system commonly involves only Defense and State. While no formal multilevel review process exists, Defense officials stated that license applications for commercial communications satellites are frequently referred to other agencies, such as the Arms Control and Disarmament Agency, the National Security Agency, and the Defense Intelligence Agency. Day-to-day licensing decisions are made by the Office of Defense Trade Controls, but disagreements can be elevated through organizational levels up to the Secretary of State.
This difference in who makes licensing decisions underscores the weight the two systems assign to economic and commercial interests relative to national security concerns. Commerce, as the advocate for commercial interests, is the focal point for the process and makes the initial determination. Under State's system, Commerce is not involved, underscoring the primacy of national security and foreign policy concerns. Scope of controls. The two systems also differ in the scope of controls. Commerce controls items to specific destinations for specific reasons. Some items are subject to controls targeted at former communist countries, while others are controlled to prevent them from reaching countries for reasons that include antiterrorism, regional stability, and nonproliferation. In contrast, munitions items are controlled to all destinations, and State has broad authority to deny a license; it can deny a request simply with the explanation that the export is against U.S. national security or foreign policy interests. Time frames. Commerce's system is more transparent to the license applicant than State's system. Time frames are clearly established, the review process is more predictable, and more information is shared with the exporter on the reasons for denials or conditions on the license. Sanctions. The applicability of sanctions may also differ under the two export control systems. Commercial communications satellites are subject to two important types of sanctions: (1) Missile Technology Control Regime sanctions and (2) Tiananmen Square sanctions. Under Missile Technology sanctions, both State and Commerce are required to deny the export of identified, missile-related goods and technologies. Communications satellites are not so identified but contain components that are identified as missile-related.
When the United States imposed Missile Technology sanctions on China in 1993, exports of communications satellites controlled by State were not approved, while exports of satellites controlled by Commerce were permitted. Under Tiananmen Square sanctions, satellites licensed by State and Commerce receive identical treatment. These sanctions prohibit the export of satellites for launch from launch vehicles owned by China. However, the President can waive this prohibition if such a waiver is in the national interest. Congressional notification. Exports under State's system that exceed certain dollar thresholds (including all satellites) require notification to the Congress. Licenses for Commerce-controlled items are not subject to congressional notification, with the exception of items controlled for antiterrorism reasons. However, the Congress is notified of presidential waivers of the Tiananmen Square sanctions under both the State and Commerce systems. Export control of commercial communications satellites has been a matter of contention over the years among U.S. satellite manufacturers and the agencies involved in their export licensing jurisdiction—the Departments of Commerce, Defense, and State and the intelligence community. To put their views in context, I would now like to provide a brief chronology of key events in the transfer of commercial communications satellites to the Commerce Control List. As the demand for satellite launch capabilities grew, U.S. satellite manufacturers looked abroad to supplement domestic facilities. In 1988, President Reagan decided to allow China to launch U.S.-origin commercial satellites. The United States and China signed an agreement in January 1989 under which China agreed to charge prices for commercial launch services similar to those charged by other competitors and to launch nine U.S.-built satellites through 1994.
Following the June 1989 crackdown by the Chinese government on peaceful political demonstrations in Tiananmen Square in Beijing, President Bush imposed export sanctions on China. President Bush subsequently waived these sanctions for the export of three U.S.-origin satellites for launch from China. In February 1990, the Congress passed the Tiananmen Square sanctions law (P.L. 101-246) to suspend certain programs and activities relating to the People's Republic of China. This law also suspends the export of U.S. satellites for launch from Chinese-owned vehicles. In November 1990, the President ordered the removal of dual-use items from State's munitions list unless significant U.S. national security interests would be jeopardized. This action was designed to bring U.S. controls in line with the industrial (dual-use) list maintained by the Coordinating Committee for Multilateral Export Controls, a multilateral export control arrangement. Commercial communications satellites were contained on the industrial list. Pursuant to this order, State led an interagency review, including officials from Defense, Commerce, and other agencies, to determine which dual-use items should be removed from State's munitions list and transferred to Commerce's jurisdiction. The review was conducted between December 1990 and April 1992. As part of this review, a working group identified and established performance parameters for the militarily sensitive characteristics of communications satellites. During the review period, industry groups supported moving commercial communications satellites, ground stations, and associated technical data to the Commerce Control List. In October 1992, State issued regulations transferring jurisdiction of some commercial communications satellites to Commerce.
These regulations also defined which satellites remained under State's control by listing nine militarily sensitive characteristics that, if included in a commercial communications satellite, warranted control on State's munitions list. (These characteristics are discussed in app. 1.) The regulations noted that parts, components, accessories, attachments, and associated equipment (including ground support equipment) remained on the munitions list but could be included on a Commerce license application if the equipment was needed for a specific launch of a commercial communications satellite controlled by Commerce. After the transfer, Commerce noted that this limited transfer only partially fulfilled the President's 1990 directive. Export controls over commercial communications satellites were again taken up in September 1993. The Trade Promotion Coordinating Committee, an interagency body composed of representatives from most government agencies, issued a report in which it committed the administration to review dual-use items on the munitions list, such as commercial communications satellites, to expedite moving them to the Commerce Control List. Industry continued to support the move of commercial communications satellites, ground stations, and associated technical data from State to Commerce control. In April 1995, the Chairman of the President's Export Council met with the Secretary of State to discuss issues related to the jurisdiction of commercial communications satellites and the impact of sanctions that affected the export to and launch of satellites from China. Also in April 1995, State formed the Comsat Technical Working Group to examine export controls over commercial communications satellites and to recommend whether the militarily sensitive characteristics of satellites could be more narrowly defined consistent with national security and intelligence interests.
This interagency group included representatives from State, Defense, the National Security Agency, Commerce, the National Aeronautics and Space Administration, and the intelligence community. The interagency group reported its findings in October 1995. Consistent with the findings of the Comsat Technical Working Group and with input from industry through the Defense Trade Advisory Group, the Secretary of State denied the transfer of commercial communications satellites to Commerce in October 1995 and approved a plan to narrow, but not eliminate, State's jurisdiction over these satellites. Unhappy with State's decision to retain jurisdiction over commercial communications satellites, Commerce appealed the decision to the National Security Council and the President. In March 1996, the President, after additional interagency meetings on this issue, announced the transfer of export control authority for all commercial communications satellites from State to Commerce. A key part of these discussions was the issuance of an executive order in December 1995 that modified Commerce's procedures for processing licenses. This executive order required Commerce to refer all licenses to State, Defense, Energy, and the Arms Control and Disarmament Agency. This change addressed a key shortcoming that we had reported on in several prior reviews. In response to the concerns of Defense and State officials about the transfer, Commerce agreed to apply additional controls to exports of these satellites, designed to mirror the stronger controls already applied to items on State's munitions list. Changes included the establishment of a new control, the significant item control, for the export of sensitive satellites to all destinations. The policy objective of this control—consistency with U.S. national security and foreign policy interests—is broadly stated. The functioning of the Operating Committee, the interagency group that makes the initial licensing determination, was also modified.
This change required that the licensing decision for these satellites be made by majority vote of the five agencies, rather than by the chair of the Committee. Satellites were also exempted from other provisions governing the licensing of most items on the Commerce Control List. In October and November 1996, Commerce and State published changes to their respective regulations, formally transferring licensing jurisdiction for commercial communications satellites with militarily sensitive characteristics from State to Commerce. Additional procedural changes were implemented through an executive order and a presidential decision directive issued in October 1996. According to Commerce officials, the President’s March 1996 decision reflected Commerce’s long-held position that all commercial communications satellites should be under its jurisdiction. Commerce argued that these satellites are intended for commercial end use and are therefore not munitions. Commerce maintained that transferring jurisdiction to the dual-use list would also make U.S. controls consistent with treatment of these items under multilateral export control regimes. Manufacturers of satellites supported the transfer of commercial communications satellites to the Commerce Control List. They expressed concern that, under State’s jurisdiction, the satellites were subject to Missile Technology sanctions requiring denial of exports and to congressional notifications. Satellite manufacturers also stated that such satellites are intended for commercial end use and are therefore not munitions subject to State’s licensing process. They also believed that the Commerce process was more responsive to business due to its clearly established time frames and predictability of the licensing process. Satellite manufacturers also expressed the view that some of the militarily sensitive characteristics of communications satellites are no longer unique to military satellites. 
Defense and State point out that the basis for including items on the munitions list is the sensitivity of the item and whether it has been specifically designed for military applications, not how the item will be used. These officials have expressed particular concern about disclosure of the technical data used to integrate a satellite with its launch vehicle, because satellite integration technologies can also be applied to launch vehicles that carry ballistic missiles, improving the missiles' performance and reliability. The process of planning a satellite launch takes several years, and there is concern that technical discussions between U.S. and foreign representatives may lead to the transfer of information on militarily sensitive components. Defense and State officials also expressed concern about the operational capability that specific characteristics, in particular antijam capability, crosslinks, and baseband processing, could give a potential adversary. Accelerometers, kick motors, separation mechanisms, and attitude control systems are examples of equipment used in both satellites and ballistic missiles. According to State, such equipment and technology merit control for national security reasons. No export license application for a satellite launch has been denied under either the State or Commerce system. Therefore, the conditions attached to licenses are particularly significant. Exports of U.S. satellites for launch in China are governed by a government-to-government agreement addressing technology safeguards. This agreement establishes the basic authorities for the U.S. government to institute controls intended to ensure that sensitive technology is not inadvertently transferred to China.
This agreement is one of three government-to-government agreements with China on satellites; the others address pricing and liability issues. During our 1997 review and in recent discussions, officials pointed to two principal mechanisms for safeguarding technologies: technology transfer control plans and the presence of Defense Department monitors during the launch of the satellites. Technology transfer control plans are prepared by the exporter and approved by Defense. The plans outline the internal control procedures the company will follow to prevent the disclosure of technology except as authorized for the integration and launch of the satellite. These plans typically include requirements for the presence of Defense monitors at technical meetings with Chinese officials as well as procedures to ensure that Defense reviews and clears the release of any technical data provided by the company. Defense monitors at the launch site help ensure that physical security over the satellite is maintained and monitor any on-site technical meetings between the company and Chinese officials. Authority for these monitors to perform this work in China is granted under the terms of the government-to-government safeguards agreement. Additional government control may be exercised over technology transfers through State's licensing of technical assistance and technical data. State technical assistance agreements detail the types of information that can be provided and give Defense an opportunity to scrutinize the type of information being considered for export. Technical assistance agreements, however, are not always required for satellite exports to China. While such licenses were required for satellites licensed for export by State, Commerce-licensed satellites have no separate technical assistance licensing requirement.
The addition of new controls over satellites transferred to Commerce's jurisdiction in 1996 addressed some of the key areas where Commerce procedures were less stringent than State's. There remain, however, differences in how satellite exports are controlled under these new procedures. Congressional notification requirements no longer apply, although the Congress is currently notified because of the Tiananmen waiver process. Sanctions do not always apply to items under Commerce's jurisdiction. For example, under the 1993 Missile Technology sanctions, sanctions were not imposed on Commerce-controlled satellites that included missile-related components. Defense's power to influence the decision-making process has diminished since the transfer. When satellites were under State jurisdiction, according to State and Defense officials, State would routinely defer to the recommendations of Defense if national security concerns were raised. Under Commerce jurisdiction, Defense must now either persuade a majority of the other agencies to agree with its position to stop an export or escalate its objection to the cabinet-level Export Administration Review Board, an event that has not occurred in recent years. Technical information may not be as clearly controlled under the Commerce system. Unlike State, Commerce does not require a company to obtain an export license to market a satellite. Commerce regulations also do not have a separate export commodity control category for technical data, leaving it unclear how this information is licensed. Commerce has informed one large satellite maker that some of this technical data does not require an individual license. The additional controls applied to the militarily sensitive commercial communications satellites transferred to Commerce's control in 1996 were not applied to the satellites transferred in 1992. These satellites are therefore reviewed under the normal interagency process and are subject to more limited controls. Mr.
Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have.

The nine militarily sensitive characteristics and their military utility are as follows:

- Antijam capability: antennas and/or antenna systems with the ability to respond to incoming interference by adaptively reducing antenna gain in the direction of the interference. Ensures that communications remain open during crises.
- Spot-beam antennas: allow a satellite to receive incoming signals. An antenna aimed at a spot roughly 200 nautical miles in diameter or less can become a sensitive radio listening device and is very effective against ground-based interception efforts.
- Crosslinks: provide the capability to transmit data from one satellite to another without going through a ground station. Permit the expansion of regional satellite communication coverage to global coverage and provide source-to-destination connectivity that can span the globe. They are very difficult to intercept and permit very secure communications.
- Baseband processing: allows a satellite to switch from one frequency to another with an on-board processor. On-board switching can provide resistance to jamming of signals.
- Encryption devices: scramble signals and data transmitted to and from a satellite. Allow telemetry and control of a satellite, which provide positive control and deny unauthorized access. Certain encryption capabilities have significant intelligence features important to the National Security Agency.
- Radiation hardening: provides protection from the natural and man-made radiation environment in space, which can be harmful to electronic circuits. Permits a satellite to operate in nuclear war environments and may enable its electronic components to survive a nuclear explosion.
- Rapid maneuvering capability: allows rapid changes when the satellite is on orbit. Military maneuvers require that a satellite have the capability to accelerate faster than a certain speed to cover new areas of interest.
- Low probability of intercept: provides a low probability that a signal will be intercepted. High-performance pointing capabilities provide superior intelligence-gathering capabilities.
- Kick motors: used to deliver satellites to their proper orbital slots. If the motors can be restarted, the satellite can execute military maneuvers because it can move to cover new areas.
GAO discussed the military sensitivity of commercial communications satellites and the implications of the 1996 change in export licensing jurisdiction, focusing on: (1) key elements in the export control systems of the Departments of Commerce and State; (2) how export controls for commercial satellites have evolved over the years; (3) the concerns and issues debated over the transfer of commercial communications satellites to the export licensing jurisdiction of Commerce; (4) the safeguards that may be applied to commercial satellite exports; and (5) observations on the current export control system. GAO noted that: (1) the U.S. export control system--comprised of both the Commerce and State systems--is about managing risk; (2) exports to some countries involve less risk than to other countries and exports of some items involve less risk than others; (3) the planning of a satellite launch with technical discussions and exchanges of information taking place over several years involves risk no matter which agency is the licensing authority; (4) recently, events have focused concern on the appropriateness of Commerce jurisdiction over communication satellites; (5) this is a difficult judgment; (6) by design, Commerce's system gives greater weight to economic and commercial concerns, implicitly accepting greater security risks; and (7) by design, State's system gives primacy to national security and foreign policy concerns, lessening--but not eliminating--the risk of damage to U.S. national security interests.
Authority for controlling the export of defense items is provided through the Arms Export Control Act, and these exports are regulated through the ITAR by the State Department's Directorate of Defense Trade Controls. While most defense items require a license for export, the ITAR contains numerous exemptions from licensing requirements that have defined conditions and limitations. For exports that directly support DOD activities, such as exports related to defense cooperative programs, exporters may claim an exemption from licensing requirements pursuant to the written request, directive, or approval of DOD. In doing so, DOD certifies that the export appropriately qualifies for one or more of a limited number of applicable ITAR license exemptions. As with all exemptions, the exporter decides whether to export using an exemption and bears ultimate responsibility for complying with requirements in the ITAR. In May 2000, the Administration announced 17 proposals as part of its Defense Trade Security Initiative—an effort to facilitate cross-border defense cooperation and streamline U.S. export controls. One proposal was that DOD make more effective use of ITAR exemptions to facilitate exports that further U.S. government interests in defense cooperation with allies and friendly nations. To clarify exemption use, DOD's Defense Technology Security Administration (DTSA)—which is responsible for developing and implementing DOD security policies on international transfers of defense-related goods, services, and technologies—issued guidelines in March 2004 for certifying U.S. exporters' use of certain ITAR exemptions. These guidelines were provided to the military services, given that the services are primarily responsible for managing and implementing international defense cooperative programs. In support of defense activities, DOD components prepare letters certifying the use of certain exemptions by exporters under State's export control regulations.
The approach used by the military services for certifying the use of ITAR exemptions is set forth in DOD guidelines. Nonservice DOD components also certify the use of exemptions but are not subject to the guidelines, which were not issued departmentwide. Some of these nonservice components had created or were creating their own guidelines, which could lead to confusion regarding certain certification practices. State, as the regulator of defense exports, has raised concerns about the guidelines not clearly explaining the purpose and scope of the exemptions available to DOD, and State and DOD disagree on contractors' use of one exemption that has been certified by DOD components. Some ITAR exemptions apply to exports that directly benefit DOD activities, ranging from support of defense cooperative programs, such as the Joint Strike Fighter, to providing equipment and technical services necessary to support U.S. forces in foreign locations. For such exemptions, DOD confirms whether the export activity appropriately qualifies for the use of an exemption and typically documents this confirmation in a written letter directly to the exporter or sometimes to the cognizant DOD program office that the exemption will benefit. Typically, the letters identify the ITAR sections that pertain to the exemption, the type and purpose of the export, the destination country, and a time frame for the export to occur (see fig. 1). In March 2004, DOD issued guidelines to the military services that were intended to provide a level of oversight for the exemption certification process, such as establishing elements of authority and record-keeping requirements. The guidelines included the following procedures for certifying exporters' use of ITAR exemptions in support of DOD's activities:

- Established authorized exemption officials within each service to certify the use of ITAR exemptions. These designated general officers or senior executive service personnel in the military services are responsible for overall management and oversight of the exemption certification process.
- Provided elements for the certification, to include (1) a tracking number for the certification; (2) the ITAR exemption citation number; (3) the name of the exporter for whom use of the exemption is certified; (4) the reason/purpose for certifying use of the exemption and the benefit to the U.S. government; (5) a description of the specific defense article, service, or technical data exempted; (6) conditions and limitations as necessary to establish a clearly defined scope for defense articles, services, and technical data authorized for export and any handling, control, or accountability measures deemed necessary; (7) the foreign end users; and (8) the expiration date—not more than 1 year from the date of issue.
- Required the military services to enter data on exemptions into a centralized DOD database.
- Stated that DTSA would annually report the services' exemption certification data to State.
- Restated requirements in the ITAR that exemptions may only be certified for use by eligible U.S. persons registered with the Department of State, Director of Defense Trade Controls, and that U.S. persons must comply with ITAR requirements for use of exemptions, including applicable criteria and limitations. DOD certifications do not supersede other ITAR requirements for use of exemptions.
- Listed five exemptions that relate to exports of defense items—such as technical data pursuant to a written DOD request, shipments of defense items by or for U.S. government agencies, or plant visits (classified or unclassified) (see table 1).

DTSA officials stated that DOD has not determined the need for a departmentwide directive or instruction on certifying the use of ITAR exemptions.
Because nonservice DOD components are not currently subject to the existing DOD guidelines, officials at some nonservice components that we spoke with had created or were in the process of creating their own exemption guidelines. A lack of common guidelines could lead to inconsistent certification practices. In addition, some confusion exists regarding certain certification practices. For example, an official from one of the four nonservice components questioned whether the component could provide certifications for exporters with which it had contracts or whether the cognizant military service that maintained the overall contract would need to provide the certification. This official's component continues to certify exemptions for the exporters with which it contracts. State officials, who regulate and control the export of defense items, have raised concerns about DOD's exemption certification guidelines. Specifically, DTSA provided State with proposed revisions to its guidelines in April 2006, and in response, State provided DTSA with written comments raising concerns about the guidelines. According to senior-level export control officials at both State and DOD, they met to discuss areas of disagreement but were unable to reach resolution. To date, State and DOD have not resolved fundamental areas of concern. First, State disagreed with DOD's certification of exporters' use of the exemption under ITAR section 126.4(a). According to State officials, language in this section indicates that the exemption is designed only for use by U.S. government personnel for U.S. government end use and is not designed to be used by contractors. DTSA disagreed on this point and stated that the section's phrase "by or for any agency of the U.S. government" indicates that the exemption can be used by contractors when their work is directed by DOD for its own benefit.
In the most recent draft iteration of the guidelines, DTSA now plans to further define responsibility for certifying this ITAR exemption, removing some certification responsibility from the military services in an attempt to provide greater control over its use. However, DTSA officials stated that DOD plans to continue to certify the use of this exemption. Second, State indicated that the guidelines to the military services are not clear on the purpose and scope of the exemptions available to DOD. State suggested that DOD revise its guidelines to include (1) ITAR sections 126.6(a) and 126.6(c) on foreign military sales to provide further context, citing that their inclusion would inform the military services that other ITAR exemptions are provided for the exclusive use of DOD in the conduct of its official business, and (2) ITAR section 125.4(b)(3)—the provision of technical data in furtherance of a contract between the exporter and the U.S. government if the contract provides for the export of data—which State identified as one that may be certified by DOD for use by exporters when conducting DOD's mission. State also noted that the use of each exemption is pursuant to the conditions and terms specified in the ITAR and that the exporter should be directed to the relevant ITAR sections. DTSA officials stated that the guidelines include only those ITAR sections that specifically provide for exemption use for exports at the direction or approval of DOD. DTSA officials further stated that the foreign military sales process is defined separately in the ITAR and that DOD has its own system and process for reporting to State on foreign military sales. (The complete text of the cited ITAR sections under discussion can be found in app. II.)
Finally, State suggested that DTSA be the certifying entity for all other DOD components outside of the military services and that all certifying organizations be trained in the evaluation of certification requests and the application of DOD guidelines. DTSA officials plan to include in the revised guidelines a provision that nonservice DOD components seek guidance from their respective general counsel, as is the current practice. DOD is in the process of revising its guidelines, which are set to expire in December 2007. These revisions are partially in response to State’s concerns, and DOD is planning to submit them to State for its review. However, to date, State and DOD officials have not reached agreement on these issues, and the lack of common understanding of regulatory exemption use could result in inconsistent application of the regulations. Based on our review of the more than 1,100 certification letters that DOD components provided to us, we found that DOD components certified the use of over 1,900 exemptions for multiple companies and various programs from 2004 through 2006. Most of the exemptions were for exports of technical data or services for Air Force or missile defense programs and for exports to long-standing allies. We identified a number of DOD components that certified the use of ITAR exemptions by exporters. These components varied widely in the number of certifications they issued—ranging from 24 to more than 1,040. Table 2 summarizes highlights of our analysis of export exemptions certified by various DOD components. Of the components we identified, the Missile Defense Agency and the Air Force provided us about 80 percent of the exemption certification letters that we reviewed. Almost all of the certifications were for the export of technical data or for the temporary export of defense items “by or for any agency of the U.S. 
government for official use by such an agency.” About half of the certifications were for the use of ITAR section 125.4(b)(1) for the export of technical data, including classified information, typically related to a particular program, such as Joint Strike Fighter or Upgraded Early Warning Radar, with allies during discussions at scheduled meetings or participation in technical conferences in foreign locations. In addition, almost 30 percent of the certifications were for ITAR section 126.4(a), such as exports of technical data, defense services, or hardware in support of joint military exercises. ITAR section 126.4(a) is the exemption whose use by contractors is in dispute between State and DOD. An additional 19 percent of the certifications cited both of these ITAR sections. Less than 3 percent of the ITAR exemptions identified were for transfers of software and hardware—primarily for use by U.S. forces outside of the United States, sometimes in support of operations in Iraq. Twenty-one of the certifications issued by two nonservice components cited ITAR section 125.4(b)(3)—technical data in furtherance of a contract between the exporter and the U.S. government if the contract provides for the export of data. More than 270 exporters, including prime contractors and subcontractors, were identified in exemption certification letters we reviewed. The most frequently identified exporters were defense contractors, but university laboratories and federally funded research and development centers were also identified. Four major defense contractors represented one-fourth of the exemption certifications, with one receiving over 200 certifications. However, more than 80 percent of the exporters were identified five or fewer times in the certifications we reviewed from DOD components. A total of 266 different programs and activities were identified in the certifications, with the Missile Defense Agency having the largest number. 
Over 90 foreign destinations, including NATO, were identified on DOD certifications. The most frequently cited destination country was the United Kingdom—identified 900 times. Thirteen countries were identified only once as exempted export destinations—in some of these cases, U.S. entities located in those countries, not the foreign government or industry, were the recipients. Some certifications were for exports to multiple countries within one geographic region, such as Latin America. State and DOD lack comprehensive data to oversee the use of DOD-certified exemptions, limiting their knowledge of defense activities under this process. DOD’s annual report to State on the use of exemptions captures data from the military services but not from other DOD components. In addition, the data may not capture the magnitude of transfers certified for exemption use. Specifically, we found that one DOD component used one letter to certify multiple companies’ use of an ITAR exemption during a 1-year period. This information was not included in the DOD component’s reporting on exemption certification use. In addition, we found that some of the certification letters that we reviewed lacked key information that could be helpful in overseeing exemptions certified by DOD components. The DOD exemption guidelines state that the military services must record the exemption in a centralized DOD database. However, nonservice components, such as the Missile Defense Agency—which had the largest number of certifications from 2004 through 2006—do not record their exemption certification data in the centralized DOD database, known as USXPORTS. Instead, the nonservice components retain their own records on certified exemptions. In addition, DOD guidelines provide that DOD submit an annual report to State on exemptions certified for use. In July 2007, DOD submitted its 2006 report to State based on the data contained in USXPORTS. 
However, the utility of DOD’s report to State on exemption use is limited in several areas. First, since DOD collects data for only the military services, its exemption report to State does not provide total exemption data for all DOD components. Specifically, DOD’s report to State contained data on 161 certification letters issued by the military services in 2006; for the same year, we collected an additional 271 letters from nonservice components. Second, for each certification letter, the report contains (1) a certification tracking number, (2) certification date, (3) certifying organization, (4) destination country or countries, (5) description of export, and (6) exporter name. However, it does not contain other information that the DOD guidelines specify for inclusion in the certification letters and for maintenance in the military services’ records on exemption certifications, such as which ITAR exemption is being certified and the expiration date for each exemption. Therefore, State does not have a complete report of the exemptions certified for use by all of DOD’s components. While the certification letters we reviewed frequently contained the information called for in the DOD exemption guidelines, some differences existed that resulted in DOD not having insight into the magnitude of transfers certified for exemption use. For example, for its Joint Strike Fighter program, the Air Force issues an annual certification letter—to more than 50 companies—that certifies their use of one ITAR exemption for the purposes of responding to written requests from DOD for a quote or bid proposal. These letters are broad in scope and do not specify what technical data would be released for the program. When a company listed on the certification letter has a specific need to use this exemption for an export during the year, it submits its request directly to the program office. 
However, because the Air Force does not require the program office to report specific data on these program office approvals for transfers, the Air Force likely lacks comprehensive knowledge of exports transferred under this ITAR exemption. From 2004 through 2006, we found that the Joint Strike Fighter program had authorized the release of more than 600 transfers of technical data under the quote or bid proposal exemption; the program office’s records on these transfers contain specific information—such as the types of technical data and related drawings exported and the frequency of these exports—that the Air Force lacks in its central record keeping on exemptions. Further, DOD’s 2006 report to State includes only the certification letter, which was broad in scope, and does not convey the magnitude of transfers under this certification. We found some variations in the type of information contained in the certification letters provided by the military services and nonservice components, which can lessen DOD’s insight into the specific export activities that it is certifying. For example, 163 certifications did not specify whether the foreign export destination entity was a foreign government, foreign industry, or U.S. entity. Over 70 certifications—about 4 percent of the total certifications we reviewed—did not contain an expiration date for the exemption; for the remainder, the length of coverage from certification date to expiration date ranged from less than 1 day to more than 4.5 years. The scope of the letters ranged from covering one exporter to covering multiple exporters, and about 30 percent of the certifications that we reviewed included subcontractors. While covering more than one exporter under one certification letter may create some workforce efficiencies, it could limit DOD’s insight into exporters receiving exemption certifications. 
In addition, this practice, while not specifically addressed in the DOD guidelines, has raised concern among some contractors that it could blur transparency and create liability issues. Export control officials from each of the four companies we spoke with said they would prefer that each subcontractor have a separate certification letter from DOD to provide a clearer record of, and decrease their liability for, subcontractors’ exports. While the exemption certification process is one way to facilitate defense cooperation with friendly nations and allies, the U.S. government needs a consistent approach to and knowledge of defense export activities certified through this process. However, State and DOD have disparate understandings of regulatory exemption use and guidance, and efforts to resolve these differences have proven unsuccessful. Further, neither State nor DOD has complete and accurate data to obtain sufficient knowledge of the extent to which all DOD components are certifying the use of exemptions for the export of defense items. Therefore, State and DOD cannot readily identify in total and on a program-by-program basis the defense items that DOD has certified for exemption use in support of DOD’s activities. To ensure a common understanding of the use of ITAR exemptions available for DOD’s activities, we recommend that the Secretary of State direct the Deputy Assistant Secretary for the Directorate of Defense Trade Controls and the Secretary of Defense direct the Director of the Defense Technology Security Administration to establish a work group to define and resolve disagreements on exemption use and guidelines and to document decisions reached. If the work group cannot reach agreement before the existing DOD exemption guidelines expire, it should elevate the matter for resolution through the appropriate chain of command. 
If needed, the Secretary of State should direct the Deputy Assistant Secretary for the Directorate of Defense Trade Controls to revise the ITAR to incorporate any necessary changes. Once agreement is reached, the Secretary of Defense, with concurrence from the Secretary of State, should direct that the guidelines be revised and made applicable to all DOD components. We also recommend that the Secretary of Defense direct the Director of the Defense Technology Security Administration to ensure that the revised exemption certification guidelines provide the appropriate mechanisms for overseeing the exemption certification process, such as the collection of data from all DOD components on exemptions they certified. The Departments of Defense and State provided comments on a draft of this report. DOD also provided technical comments, which we incorporated as appropriate. In commenting on our first recommendation, DOD and State both concurred with the need to establish a work group to define and resolve disagreements on exemption use and guidelines, and to document decisions reached. State indicated that initial discussions with DOD have begun, and DOD stated that it plans to codify the understandings in a clear set of guidelines to be issued to DOD components. In its comments on our second recommendation, DOD did not agree, stating that there is no existing mechanism whereby the U.S. government can collect data from exporters to monitor exports of defense items made under exemptions. DOD further stated that such a mechanism would exceed DOD’s existing statutory and regulatory authority because the ultimate responsibility for obtaining appropriate authorization to export defense items rests with the exporter. Our intent was not for DOD to collect information directly from exporters. 
Instead, our recommendation is intended for DOD to expand its existing data collection of exemptions certified by the military services to include those from all DOD components. To clarify this intent, we modified the language in the recommendation. Formal written comments provided by DOD and State are reprinted in appendixes III and IV, respectively. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then provide copies of this report to interested congressional committees, as well as the Secretaries of Defense and State; the Attorney General; the Director, Office of Management and Budget; and the Assistant to the President for National Security Affairs. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Others making key contributions to this report are listed in appendix V. To describe the approach used by the Department of Defense (DOD) for certifying the use of export exemptions for exporters, we reviewed export control regulations and discussed with the Department of State and DOD their interpretation of when DOD can certify that a specific export activity qualifies for the use of certain export exemptions. We reviewed DOD’s exemption certification guidelines, State’s response to the guidelines, DOD components’ internal guidance on exemption certifications, and DOD components’ practices. 
We also interviewed officials from State’s Directorate of Defense Trade Controls—from the Licensing, Compliance, Management, and Policy offices—and from DOD’s Defense Technology Security Administration about their views on the International Traffic in Arms Regulations (ITAR) allowances for exemption use by exporters, requirements for DOD to direct the use of certain ITAR exemptions, and practices by DOD components that certify the use of exemptions by exporters in support of DOD activities. We collected and analyzed DOD export exemption certification letters for calendar years 2004 through 2006 to summarize the use of DOD-certified exemptions. We selected 2004 because the Defense Technology Security Administration (DTSA) issued guidelines on the certification process to the military services in that year. Prior to 2004, no formal procedures existed for designating senior-level personnel in the military departments for the authorization of ITAR exemption certifications. Through interviews with knowledgeable State and DOD officials, we created a list of DOD components potentially certifying the use of exemptions. While this coordination helped identify the DOD components certifying the use of exemptions, there may be other components that were not included in this list, and additional exemption letters might exist. We then contacted the DOD components on our list to ask if they certified exemptions between 2004 and 2006. While some components on our list stated that they did not certify export exemptions, we collected certification letters from those DOD components that did certify the use of exemptions—the Air Force, Army, Navy, Missile Defense Agency, National Geospatial-Intelligence Agency, National Security Agency, and the Office of Acquisition, Technology, and Logistics. We created a database to summarize information provided in these certification letters, such as the exemptions certified, types of exports, and foreign recipients. 
In some cases, DOD components provided separate letters for each exporter receiving an exemption certification for an activity, while other components combined all exporters onto one exemption certification letter for an activity. Therefore, to obtain a consistent count, we separated individual companies from exemption certifications granted for multiple companies on one letter. The total number of exemption certification letters provided to us by the DOD components was 1,142. After separating out the individual companies from the certification letters, the ITAR exemptions certified by the DOD components for calendar years 2004 through 2006 totaled 1,960. To examine the extent to which State and DOD oversee the use of export exemptions certified by DOD, we reviewed DOD’s 2006 report on certified exemptions provided to State in July 2007. We also interviewed DTSA officials about the USXPORTS automation system and what it contains. To gain a DOD acquisition program office perspective, we interviewed the Joint Strike Fighter Program Office about its exemption certification processes and practices. We compared the data of the program office with the data from the cognizant military service. We also interviewed officials from four of the companies that most often received certification letters—Boeing, Lockheed Martin, Northrop Grumman, and Raytheon—to gain their perspective on DOD components’ processes for and guidance to exporters on DOD-certified exemptions. We examined the certification letters we obtained from DOD components and identified differences among the information contained in the letters. We discussed in this report a number of ITAR subparts and sections that are cited in DOD certification letters, identified in DOD’s exemption certification guidelines, or under discussion between State and DOD. These ITAR sections are cited below in their entirety. 
In addition to the contact name above, Anne-Marie Lasowski, Assistant Director; Lisa Gardner; Sharron Candon; Peter Grana; Arthur James, Jr.; Karen Sloan; and Marie Ahearn made key contributions to this report. Export Controls: Vulnerabilities and Inefficiencies Undermine System’s Ability to Protect U.S. Interests. GAO-07-1135T. Washington, D.C.: July 26, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 31, 2007. Export Controls: Challenges Exist in Enforcement of an Inherently Complex System. GAO-07-265. Washington, D.C.: December 20, 2006. Defense Technologies: DOD’s Critical Technologies List Rarely Informs Export Control and Other Policy Decisions. GAO-06-793. Washington, D.C.: July 28, 2006. Export Controls: Improvements to Commerce’s Dual-Use System Needed to Ensure Protection of U.S. Interests in the Post-9/11 Environment. GAO-06-638. Washington, D.C.: June 26, 2006. Joint Strike Fighter: Management of the Technology Transfer Process. GAO-06-364. Washington, D.C.: March 14, 2006. Defense Trade: Arms Export Control Vulnerabilities and Inefficiencies in the Post-9/11 Security Environment. GAO-05-468R. Washington, D.C.: April 7, 2005. Defense Trade: Arms Export Control System in the Post-9/11 Environment. GAO-05-234. Washington, D.C.: February 16, 2005. Foreign Military Sales: DOD Needs to Take Additional Actions to Prevent Unauthorized Shipments of Spare Parts. GAO-05-17. Washington, D.C.: November 9, 2004. Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004. Export Controls: Post-Shipment Verification Provides Limited Assurance That Dual-Use Items Are Being Properly Used. GAO-04-357. Washington, D.C.: January 12, 2004. Nonproliferation: Strategy Needed to Strengthen Multilateral Export Control Regimes. GAO-03-43. Washington, D.C.: October 25, 2002. 
Export Controls: Processes for Determining Proper Control of Defense- Related Items Need Improvement. GAO-02-996. Washington, D.C.: September 20, 2002. Export Controls: Department of Commerce Controls over Transfers of Technology to Foreign Nationals Need Improvement. GAO-02-972. Washington, D.C.: September 6, 2002. Export Controls: More Thorough Analysis Needed to Justify Changes in High-Performance Computer Controls. GAO-02-892. Washington, D.C.: August 2, 2002. Export Controls: Rapid Advances in China’s Semiconductor Industry Underscore Need for Fundamental U.S. Policy Review. GAO-02-620. Washington, D.C.: April 19, 2002. Defense Trade: Lessons to Be Learned from the Country Export Exemption. GAO-02-63. Washington, D.C.: March 29, 2002. Export Controls: Issues to Consider in Authorizing a New Export Administration Act. GAO-02-468T. Washington, D.C.: February 28, 2002. Export Controls: System for Controlling Exports of High Performance Computing Is Ineffective. GAO-01-10. Washington, D.C.: December 18, 2000. Defense Trade: Analysis of Support for Recent Initiatives. GAO/NSIAD-00-191. Washington, D.C.: August 31, 2000. Defense Trade: Status of the Department of Defense’s Initiatives on Defense Cooperation. GAO/NSIAD-00-190R. Washington, D.C.: July 19, 2000. Export Controls: Better Interagency Coordination Needed on Satellite Exports. GAO/NSIAD-99-182. Washington, D.C.: September 17, 1999. Export Controls: Some Controls over Missile-Related Technology Exports to China Are Weak. GAO/NSIAD-95-82. Washington, D.C.: April 17, 1995. Export Controls: Actions Needed to Improve Enforcement. GAO/NSIAD-94-28. Washington, D.C.: December 30, 1993.
In support of Department of Defense (DOD) activities, U.S. defense companies may export defense items. The Department of State (State) controls such exports through its International Traffic in Arms Regulations (ITAR), which provides for some exemptions from export licensing requirements. For a limited number of these exemptions, DOD may confirm--or certify--that the export activity qualifies for the use of an ITAR exemption. As part of an initiative, DOD is to make more effective use of ITAR exemptions, but little is known about the extent to which this is done. This report (1) describes DOD's approach for certifying exporters' exemption use in support of defense activities, (2) summarizes the use of selected DOD-certified exemptions, and (3) examines State and DOD's oversight of exemption use. GAO's findings are based on its review of export control law, regulation, and DOD guidelines; interviews with State, DOD, and defense industry officials; and a GAO-developed database of DOD certification letters. In support of defense activities, DOD prepares letters certifying that a proposed export qualifies for the use of certain ITAR exemptions by exporters. To guide this approach, DOD issued exemption certification guidelines in March 2004 to the military services because they are the DOD components primarily responsible for managing and implementing defense international cooperative programs. However, GAO found other DOD components that also certify the use of exemptions in support of international activities but are not subject to the DOD guidelines. Officials from State, which regulates and controls defense exports, have raised several concerns to DOD about its guidelines, including the use of one ITAR exemption by contractors and the comprehensiveness of the guidelines. While State and DOD officials have met and exchanged correspondence on these issues, to date, they have not resolved fundamental disagreements. 
A lack of common understanding of regulatory exemption use could result in inconsistent application of the regulations. The exemption certification letters from DOD components that we reviewed showed that over 1,900 exemptions were certified for about 270 exporters in calendar years 2004 through 2006. The majority of the certifications related to missile defense and Air Force programs and included the export of technical data. While most of the exporters identified in the DOD-certified exemption letters were defense contractors, other exporters included university laboratories and federally funded research and development centers. The United Kingdom, Australia, Canada, and the North Atlantic Treaty Organization were the most frequently cited recipients for exports under exemptions certified by DOD components. State and DOD lack comprehensive data to oversee the use of DOD-certified exemptions, limiting their knowledge of defense activities under this process. While DOD's guidelines provide for annual reporting to State on certified exemptions, this report captures data from the military services, but not from other DOD components. GAO identified 271 letters from nonservice components that were not included in DOD's 2006 report to State. In addition, DOD's report to State may not capture the magnitude of transfers certified for exemption use. For example, one letter that GAO reviewed certified the use of an exemption for more than 50 companies, but only the certification letter--not the actual transfers, which totaled 600 over a 3-year period--was captured in the cognizant military service's record keeping on exemption certifications. Furthermore, the details on these transfers were not included in DOD's report to State, limiting insight into the number of transfers under this certification.
CBP’s ability to inspect travelers at our nation’s ports of entry has been hampered by weaknesses in travel inspection procedures, inadequate physical infrastructure, and lack of staff at the air, land, and sea ports of entry. The use of fraudulent identity and citizenship documents by some travelers to the United States as well as limited availability or use of technology and lack of timely and recurring training have also hampered CBP’s efforts in carrying out thorough inspections. DHS has taken several actions to implement WHTI at air, land, and sea ports of entry nationwide so that it can better secure the border by requiring citizens of the United States, Bermuda, Canada, and Mexico to present documents to show identity and citizenship when entering the United States from certain countries in North, Central, or South America. DHS plans to move forward to deploy technology to implement WHTI at land ports of entry, and staff and train officers to use it. Finally, DHS has enhanced border security by deploying US-VISIT biometric entry capability at over 300 air, sea, and land ports of entry nationwide, but the prospects for successfully delivering an operational exit solution remain uncertain because DHS has not detailed how it plans to develop and deploy an exit capability at the ports. Each year individuals make hundreds of millions of border crossings into the United States through the 326 land, air, and sea ports of entry. About three-fourths of these crossings occur at land ports of entry. In November 2007, we reported that while CBP has had some success in interdicting inadmissible aliens and other violators, weaknesses in its traveler inspection procedures and related physical infrastructure increase the potential that dangerous people and illegal goods could enter the country. For example, CBP’s analyses indicated that several thousand inadmissible aliens and other violators entered the country at land and air ports of entry in fiscal year 2006. 
One factor that contributed to failed inspections was weakness in traveler inspection procedures. In mid-2006, CBP reviewed videotapes from about 150 large and small ports of entry and, according to CBP officials, determined that while CBP officers carried out thorough traveler inspections in many instances, there were also numerous instances in which traveler inspections at land ports of entry were weak: officers did not determine the citizenship and admissibility of travelers entering the country as required by law; for example, officers did not stop vehicles for inspection, and pedestrians crossed the border without any visual or verbal contact from a CBP officer, despite operating procedures requiring such contact. In the summer of 2006, CBP management took actions to place greater management emphasis on traveler inspections by holding meetings with senior management to reinforce the importance of carrying out effective inspections and by providing training to all supervisors and officers on the importance of interviewing travelers, checking travel documents, and having adequate supervisory presence. However, tests our investigators conducted in October 2006 and January 2007—as many as 5 months after CBP issued guidance and conducted the training—showed that weaknesses similar to those on the videotapes were still occurring in traveler inspections at ports of entry. At two ports, our investigators were not asked to provide a travel document to verify their identity—a procedure that management had called on officers to carry out—as part of the inspection. The extent of continued noncompliance is unknown, but these results point to the challenge CBP management faces in ensuring its directives are carried out. In July 2007, CBP issued new internal policies and procedures for agency officials responsible for its traveler inspection program at land ports of entry. 
The new policies and procedures require field office managers to conduct periodic audits and assessments to ensure compliance with the new inspection procedures. However, they do not call on managers to share the results of their assessments with headquarters management. Without this communication, CBP management may be hindering its ability to efficiently use the information to overcome weaknesses in traveler inspections. Another weakness involved inadequate physical infrastructure. While we could not generalize our findings, at several ports of entry that we examined, barriers designed to ensure that vehicles pass through a CBP inspection booth were not in place, increasing the risk that vehicles could enter the country without inspection. CBP recognizes that it has infrastructure weaknesses and has estimated it needs about $4 billion to make the capital improvements needed at all 163 land crossings. CBP has prioritized the ports with the greatest need. Each year, depending upon funding availability, CBP submits its proposed capital improvement projects based upon the prioritized list it has developed. Several factors affect CBP’s ability to make improvements, including the fact that some ports of entry are owned by other governmental or private entities, potentially adding to the time needed to agree on infrastructure changes and put them in place. As of September 2007, CBP had infrastructure projects related to 20 different ports of entry in various stages of development. Lack of inspection staff was also a problem. Based upon a staffing model it developed, CBP estimated it may need several thousand more CBP officers at its ports of entry. According to CBP field officials, lack of staff affected their ability to carry out border security responsibilities. 
For example, we examined requests for resources from CBP’s 20 field offices and its pre-clearance headquarters office for January 2007 and reported that managers at 19 of the 21 offices cited examples of anti-terrorism activities not being carried out, new or expanded facilities that were not fully operational, and radiation monitors and other inspection technologies not being fully used because of staff shortages. At seven of the eight major ports we visited, officers and managers told us that not having sufficient staff contributes to morale problems, fatigue, lack of backup support, and safety issues when officers inspect travelers—increasing the potential that terrorists, inadmissible travelers, and illicit goods could enter the country. CBP also had difficulty in providing required training to its officers. CBP developed 37 courses on such topics as how to carry out inspections and detect fraudulent documents and has instituted national guidelines for a 12-week on-the-job training program that new officers should receive at land ports of entry. However, managers at seven of the eight ports of entry we visited said that they were challenged in putting staff through training because staffing shortfalls force the ports to choose between performing port operations and providing training. Lastly, although CBP has developed strategic goals that call for, among other things, establishing ports of entry where threats are deterred and inadmissible people and goods are intercepted—a key goal related to traveler inspections—it faces challenges in developing a performance measure that tracks progress in achieving this goal. We made a number of recommendations to the Secretary of Homeland Security to help address weaknesses in traveler inspections, challenges in training, and problems with using performance data. DHS said it is taking steps to address our recommendations. 
We also reported that CBP's ability to do thorough inspections is made more difficult by a lack of technology and training to help CBP officers identify foreign nationals who attempt to enter the United States using fraudulent travel documents. In July 2007, we reported that although the State Department had improved the security features in the passports and visas it issues, CBP officers in primary inspection—the first and most critical opportunity at U.S. ports of entry to identify individuals seeking to enter the United States with fraudulent travel documents—were unable to take full advantage of the security features in passports and visas. This was due to (1) limited availability or use of technology at primary inspection and (2) lack of timely and recurring training on the security features and fraudulent trends for passports and visas. For example, at the time of our review, DHS had provided the technology tools to make use of the electronic chips in electronic passports, also known as e-passports, to the 33 air ports of entry with the highest volume of travelers from Visa Waiver Program countries. However, not all inspection lanes at these air ports of entry had the technology, nor did the remaining ports of entry. Further, CBP did not have a process in place for primary inspection officers to utilize the fingerprint features of visas, including Border Crossing Cards (BCC), which permit limited travel by Mexican citizens—without additional documentation—25 miles inside the border of the United States (75 miles if entering through certain ports of entry in Arizona) for fewer than 30 days. For example, although BCC imposter fraud is fairly pervasive, primary officers at southern land ports of entry were not able to use the available fingerprint records of BCC holders to confirm the identity of travelers and did not routinely refer BCC holders to secondary inspection, where officers had the capability to utilize fingerprint records. 
Moreover, training materials provided to officers were not updated to include exemplars—genuine documents used for training purposes—of the e-passport and the emergency passport in advance of the issuance of these documents. As a consequence, CBP officers were not familiar with the look and feel of security features in these new documents before inspecting them. Without updated and ongoing training on fraudulent document detection, officers told us they felt less prepared to understand the security features and fraud trends associated with all valid generations of passports and visas. Although CBP faces an extensive workload at many ports of entry and has resource constraints, there are opportunities to do more to utilize the security features in passports and visas during the inspection process to detect their fraudulent use. We recommended that the Secretary of Homeland Security make better use of the security features in passports and visas in the inspection process and improve training for inspection officers on the features and fraud trends for these travel documents, including by developing a schedule for deploying technology to other ports of entry and updating training. DHS generally concurred with our recommendations and outlined actions it had taken or planned to take to implement them. We currently have work ongoing to examine DHS efforts to identify and mitigate fraud associated with DHS documents used for travel and employment verification purposes, such as the Permanent Resident Card and the Employment Authorization Document. We expect to issue a report on efforts to address fraud with these DHS documents later this year. One of the major challenges for CBP officers at our nation's ports of entry is the ability to determine the identity and citizenship of those who present themselves for inspection. 
For years, millions of citizens of the United States, Canada, and Bermuda could enter the United States from certain parts of the Western Hemisphere using a wide variety of documents, including a driver's license issued by a state motor vehicle administration or a birth certificate, or in some cases for U.S. and Canadian citizens, without showing any documents. To help provide better assurance that border officials have the tools and resources to establish that people are who they say they are, section 7209 of the Intelligence Reform and Terrorism Prevention Act of 2004, as amended, requires the Secretary of Homeland Security, in consultation with the Secretary of State, to develop and implement a plan that requires a passport or other document or combination of documents that the Secretary of Homeland Security deems sufficient to show identity and citizenship for U.S. citizens and citizens of Bermuda, Canada, and Mexico when entering the United States from certain countries in North, Central, or South America. DHS' and the State Department's effort to specify acceptable documents and implement these document requirements is called the Western Hemisphere Travel Initiative (WHTI). In May 2006, we reported that DHS and State had not yet decided what documents would be acceptable and were in the early stages of studying the costs and benefits of WHTI. In addition, DHS and State needed to choose a technology to use with the new passport card—which State is developing specifically for WHTI. DHS also faced an array of implementation challenges, including training staff and informing the public. In December 2007, we reported that DHS and State had taken important actions toward implementing WHTI document requirements. DHS and State had taken actions in the five areas we identified in our 2006 report: DHS and State published a final rule for document requirements at air ports of entry. 
The agencies also published a notice of proposed rule making for document requirements at land and sea ports of entry. By publishing a final rule for document requirements at air ports of entry, DHS and State have established acceptable documents for air travel. DHS has also published a notice of proposed rule making which includes proposed documents for land and sea travel. Under current law, DHS cannot implement WHTI land and sea document requirements until June 1, 2009, or 3 months after the Secretary of Homeland Security and the Secretary of State have certified compliance with specified requirements, whichever is later. In the meantime, in January 2008, CBP ended the practice of oral declaration. According to CBP, until the WHTI document requirements are fully implemented, all U.S. and Canadian citizens are required to show one of the documents described in the proposed rule or a government issued photo identification, such as a driver’s license, and proof of citizenship, such as a birth certificate. DHS has performed a cost-benefit study, but data limitations prevented DHS from quantifying the precise effect that WHTI will have on wait times at land ports of entry—a substantial source of uncertainty in its analysis. DHS plans to do baseline studies at selected ports before WHTI implementation so that it can compare the effects of WHTI document requirements on wait times after the requirements are implemented. DHS and State have selected technology to be used with the passport card. To support the card and other documents that use the same technology, DHS is planning technological upgrades at land ports of entry. These upgrades are intended to help reduce traveler wait times and more effectively verify identity and citizenship. DHS has outlined a general strategy for the upgrades at the 39 highest volume land ports, beginning in January 2008 and continuing over roughly the next 2 years. 
DHS has developed general strategies for implementing WHTI—including staffing and training. According to DHS officials, they also planned to work with a contractor on a public relations campaign to communicate clear and timely information about document requirements. In addition, State has approved contracting with a public relations firm to assist with educating the public, particularly border resident communities, about the new passport card and the requirements of WHTI in general. Earlier this year, DHS selected a contractor for the public relations campaign and began devising specific milestones and deadlines for testing and deploying new hardware and training officers on the new technology. Another major initiative underway at the ports of entry is a program designed to collect, maintain, and share data on selected foreign nationals entering and exiting the United States at air, sea, and land ports of entry, called the US-VISIT Program. These data, including biometric identifiers like digital fingerprints, are to be used to screen persons against watch lists, verify identities, and record arrival and departure. The purpose of US-VISIT is to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect visitors' privacy. As of October 2007, after investing about $1.5 billion since 2003, DHS has delivered essentially one-half of US-VISIT, meaning that biometrically enabled entry capabilities are operating at more than 300 air, sea, and land ports of entry, but comparable operational exit capabilities are not. That is, DHS still does not have the other half of US-VISIT (an operational exit capability) despite the fact that its funding plans have allocated about one-quarter of a billion dollars since 2003 to exit-related efforts. 
To the department's credit, operational entry capabilities have produced results, including, as of June 2007, adverse actions, such as denial of entry, taken against more than 1,500 people. Another likely consequence is the deterrent effect of having an operational entry capability, which officials have cited as a byproduct of having a publicized capability at the border to screen entry on the basis of identity verification and matching against watch lists of known and suspected terrorists. Related to identity verification, DHS has also taken steps to implement US-VISIT's Unique Identity program to enable CBP and other agencies to be better equipped to identify persons of interest and generally enhance law enforcement. Integral to Unique Identity is the capability to capture 10 fingerprints and match them with data in DHS and FBI databases. The capability to capture and match 10 fingerprints at ports of entry is not only intended to enhance CBP's ability to verify identity, but, according to DHS, is intended to quicken processing times and eliminate the possibility of misidentifying a traveler as being on a US-VISIT watchlist. Nonetheless, the prospects for successfully delivering an operational exit solution remain uncertain. In June 2007, we reported that DHS's documentation showed that, since 2003, little has changed in how DHS is approaching its definition and justification of future US-VISIT exit efforts. As of that time, DHS indicated that it intended to spend about $27.3 million on air and sea exit capabilities. However, it had not produced either plans or analyses that adequately defined and justified how it intended to invest these funds. Rather, it had only described in general terms near-term deployment plans for biometric exit capabilities at air and sea ports of entry. Beyond this high-level schedule, no other exit program plans were available that defined what would be done by what entities and at what cost. 
In the absence of more detailed plans and justification governing its exit intentions, it is unclear whether the department's efforts to deliver near-term air and sea exit capabilities will produce results different from the past. The prospect for an exit capability at land ports of entry is also unclear. DHS has acknowledged that a near-term biometric solution for land ports of entry is currently not feasible. According to DHS, at this time, the only proven technology available for biometric land exit verification would necessitate mirroring the processes currently in use for entry at these ports of entry, which would create costly staffing demands and infrastructure requirements, and introduce potential trade, commerce, and environmental impacts. A pilot project to examine an alternative technology at land ports of entry did not produce a viable solution. US-VISIT officials stated that they believe that technological advances over the next 5 to 10 years will make it possible to utilize alternative technologies that provide biometric verification of persons exiting the country without major changes to facility infrastructure and without requiring those exiting to stop and/or exit their vehicles, thereby precluding traffic backup, congestion, and resulting delays. US-VISIT also faces technological and management challenges. In March 2007, we reported that while US-VISIT has improved DHS's ability to process visitors and verify identities upon entry, we found that management controls in place to identify and evaluate computer and other operational problems at land ports of entry were insufficient and inconsistently administered. In addition, DHS had not articulated how US-VISIT is to strategically fit with other land border security initiatives and mandates and could not ensure that these programs work in harmony to meet mission goals and operate cost effectively. 
DHS had drafted a strategic plan defining an overall immigration and border management strategy, and the plan was under review by OMB. Further, critical acquisition management processes had not been established to ensure that program capabilities and expected mission outcomes are delivered on time and within budget. These processes include effective project planning, requirements management, contract tracking and oversight, test management, and financial management. We currently have work underway examining DHS' strategic solution, including a comprehensive exit capability, and plan to issue a report on the results of our work in Spring 2008. As part of its Secure Border Initiative (SBI), DHS recently announced final acceptance of Project 28, a $20.6 million project designed to secure 28 miles of southwestern border. However, DHS officials said that the project did not fully meet agency expectations and will not be replicated. Border Patrol agents in the Project 28 location have been using the system since December 2007, and 312 agents had received updated training. Still, some had not been trained to use the system at all. Deployment of fencing along the southwest border is on schedule, but meeting CBP's December 2008 goal to deploy 370 miles of pedestrian and 300 miles of vehicle fencing will be challenging because of factors that include difficulties acquiring rights to border land and an inability to estimate costs for installation. Besides undergoing technological and infrastructure improvements along the border, the Border Patrol has experienced unprecedented growth and plans to increase its number of agents by 6,000 by December 2008. Border Patrol officials are confident that the academy can accommodate this influx but are also concerned about the sectors' ability to provide sufficient field training. In November 2005, DHS announced the launch of SBI aimed at securing U.S. borders and reducing illegal immigration. 
Elements of SBI are to be carried out by several organizations within DHS. One component is CBP's SBI program office, which is responsible for developing a comprehensive border protection system using people; technology, known as SBInet; and tactical infrastructure—fencing, roads, and lighting. In February 2008, we testified that DHS had announced its final acceptance of Project 28, a $20.6 million project to secure 28 miles along the southwest border, and was gathering lessons learned to inform future border security technology development. The scope of the project, as described in the task order between DHS and Boeing—the prime contractor DHS selected to acquire, deploy, and sustain the SBInet system across the U.S. borders—was to provide a system with the detection, identification, and classification capabilities required to control the border, at a minimum, along 28 miles in the Border Patrol's Tucson sector. After working with Boeing to resolve problems identified with Project 28, DHS formally accepted the system, noting that it met contract requirements. Officials from the SBInet program office said that although Project 28 did not fully meet their expectations, they are continuing to develop SBInet with a revised approach and have identified areas for improvement based on their experience with Project 28. For example, both SBInet and Border Patrol officials reported that Project 28 was initially designed and developed by Boeing with limited input from the Border Patrol, whose agents are now operating Project 28 in the Tucson sector; however, they said that future SBInet development will include increased input from the intended operators. The schedule for future deployments of technology to the southwest border that are planned to replace most Project 28 capabilities has been extended, and officials estimated that the first planned deployment of technology will occur in other areas of the Tucson sector by the end of calendar year 2008. 
In February 2008, the SBI program office estimated that the remaining deployments of the first phase of technology development planned for the Border Patrol’s Tucson, Yuma, and El Paso sectors are expected to be completed by the end of calendar year 2011. Border Patrol agents in the Project 28 location have been using the system as they conduct their border security activities since December 2007, and as of January 2008, 312 agents in the Project 28 location had received updated training. According to Border Patrol agents, while Project 28 is not an optimal system to support their operations, it has provided them with greater technological capabilities—such as improved cameras and radars—than the legacy equipment that preceded Project 28. Not all of the Border Patrol agents in the Project 28 location have been trained to use the system’s equipment and capabilities, as it is expected to be replaced with updated technologies developed for SBInet. Deployment of tactical infrastructure projects along the southwest border is on schedule, but meeting the SBI program office’s goal to have 370 miles of pedestrian fence and 300 miles of vehicle fence in place by December 31, 2008, will be challenging and the total cost is not yet known. As of February 21, 2008, the SBI program office reported that it had constructed 168 miles of pedestrian fence and 135 miles of vehicle fence. Although the deployment is on schedule, SBI program office officials reported that keeping on schedule will be challenging because of various factors, including difficulties in acquiring rights to border lands. In addition, SBI program office officials are unable to estimate the total cost of pedestrian and vehicle fencing because of various factors that are not yet known, such as the type of terrain where the fencing is to be constructed, the materials to be used, and the cost to acquire the land. 
Furthermore, as the SBI program office moves forward with tactical infrastructure construction, it is making modifications based on lessons learned from previous fencing efforts. For example, for future fencing projects, the SBI program office plans to buy construction items, such as steel, in bulk; use approved fence designs; and contract out the maintenance and repair of the tactical infrastructure. The SBI program office established a staffing goal of 470 employees for fiscal year 2008, made progress toward meeting this goal, and published its human capital plan in December 2007; however, the SBI program office is in the early stages of implementing this plan. As of February 1, 2008, the SBI program office reported having 142 government staff and 163 contractor support staff, for a total of 305 employees. SBI program office officials told us that they believe they will be able to meet their staffing goal of 470 staff by the end of September 2008. In December 2007, the SBI program office published the first version of its Strategic Human Capital Management Plan and is now in its early implementation phase. The plan outlines seven main goals for the office and activities to accomplish those goals, which align with federal government best practices. In addition to technological and infrastructure improvements along the border, the Border Patrol has experienced an unprecedented growth in the number of its agents. As we reported last year, in a little over 2 years, between fiscal year 2006 and December 2008, the Border Patrol plans to increase its number of agents by 6,000. This is nearly equivalent to the increase in the number of agents over the previous 10 years, from 1996 through 2006. As of September 30, 2007, CBP had 14,567 Border Patrol agents onboard. It plans to have 18,319 Border Patrol agents on board by the end of calendar year 2008. 
While Border Patrol officials are confident that the academy can accommodate the large influx of new trainees anticipated, they have expressed concerns over the sectors’ ability to provide sufficient field training. For example, officials are concerned with having a sufficient number of experienced agents available in the sectors to serve as field training officers and first-line supervisors. The large influx of new agents and the planned transfer of more experienced agents from the southwest border to the northern border could further exacerbate the already higher than desired agent-to-supervisor ratio in some southwest border sectors. Because citizens of other countries seeking to enter the United States on a temporary basis generally must apply for and obtain a nonimmigrant visa, the visa process is important to homeland security. While it is generally acknowledged that the visa process can never be entirely failsafe, the government has done a creditable job since September 11 in strengthening the visa process as a first line of defense to prevent entry into the country by terrorists. Before September 11, U.S. visa operations focused primarily on illegal immigration concerns—whether applicants sought to reside and work illegally in the country. Since the attacks, Congress, the State Department, and DHS have implemented several measures to strengthen the entire visa process as a tool to combat terrorism. New policies and programs have since been implemented to enhance visa security, improve applicant screening, provide counterterrorism training to consular officials who administer the visa process overseas, and help prevent the fraudulent use of visas for those seeking to gain entry to the country. The State Department also has taken steps to mitigate the potential for visa fraud at consular posts by deploying visa fraud investigators to U.S. 
embassies and consulates and conducting more in-depth analysis of the visa information collected by consulates to identify patterns that may indicate fraud, among other things. (Notably, 2 of the 19 terrorist hijackers on September 11th used passports that were manipulated in a fraudulent manner to obtain visas.) The Visa Waiver Program allows nationals from 27 countries to travel to the United States for 90 days or less for business and tourism purposes without first having to obtain a visa. The program’s purpose is to facilitate international travel for millions of people each year and promote the effective use of government resources. While valuable, the program can pose risks to U.S. security, law enforcement, and immigration interests because some foreign citizens may try to exploit the program to enter the United States. Effective oversight of the program entails balancing the benefits against the program’s potential risks. To find this balance, we reported in July 2006 that the U.S. government needs to fully identify the vulnerabilities posed by visa waiver travelers, and be in a position to mitigate them. In particular, we recommended that DHS provide the program’s oversight unit with additional resources to strengthen monitoring activities and improve DHS’s communication with U.S. officials overseas regarding security concerns of visa waiver countries. We also recommended that DHS communicate to visa waiver countries clear reporting requirements for lost and stolen passports and that the department implement a plan to make Interpol’s lost and stolen passport database automatically available during the primary inspection process at U.S. ports of entry. DHS is in the process of implementing these recommendations and we plan to report later this year on the department’s progress. Until recently, U.S. 
law required that a country could be considered for admission into the Visa Waiver Program if its nationals' refusal rate for short-term business and tourism visas was less than 3 percent in the prior fiscal year. According to DHS, some of the countries seeking admission to the program are U.S. partners in the war in Iraq and have high expectations that they will join the program due to their close economic, political, and military ties to the United States. The executive branch has supported more flexible criteria for admission, and, in August 2007, Congress passed legislation that provides DHS with the authority to admit countries with refusal rates between 3 percent and 10 percent, if the countries meet certain conditions. For example, countries must meet all mandated Visa Waiver Program security requirements and cooperate with the United States on counterterrorism initiatives. Before DHS can exercise this new authority, the legislation also requires that the department complete certain actions aimed at enhancing the security of the Visa Waiver Program. These actions include the following:

Electronic Travel Authorization System: The August 2007 law requires that DHS certify that a "fully operational" electronic travel authorization (ETA) system is in place before expanding the Visa Waiver Program to countries with refusal rates between 3 and 10 percent. This system would require nationals from visa waiver countries to provide the United States with biographical information before boarding a U.S.-bound flight to determine the eligibility of, and whether there exists a law enforcement or security risk in permitting, the foreign national to travel to the United States under the program. In calling for an ETA, members of Congress and the administration stated that this system was an important tool to help mitigate security risks in the Visa Waiver Program and its expansion. DHS has not yet announced when or how it will roll out the ETA system. 
The August 2007 law also required that, before DHS can admit countries with refusal rates between 3 percent and 10 percent to the Visa Waiver Program, DHS must certify that an air exit system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports. Last month, we testified that DHS's plan to implement this provision had several weaknesses. On December 12, 2007, DHS reported to us that it will match records, reported by airlines, of visitors departing the country to the department's existing records of any prior arrivals, immigration status changes, or prior departures from the United States. Using this methodology, DHS stated that it can attain a match rate above 97 percent, based on August 2007 data, to certify compliance with the air exit system requirement in the legislation. On February 21, 2008, DHS indicated that it had not finalized its decision on the methodology the department would use to certify compliance. Nevertheless, the department confirmed that the basic structure of its methodology would not change, and that it would use departure records as the starting point. Because DHS's approach does not begin with arrival records to determine if those foreign nationals stayed in the United States beyond their authorized periods of admission, information from this system will not inform overall and country-specific overstay rates—key factors in determining illegal immigration risks in the Visa Waiver Program. The inability of the U.S. government to track the status of visitors in the country, to identify those who overstay their authorized period of visit, and to use these data to compute overstay rates has been a longstanding weakness in the oversight of the Visa Waiver Program. We reported that DHS's plan to meet the "97 percent" requirement in the visa waiver expansion legislation will not address these weaknesses. 
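The arithmetic behind this limitation can be sketched in a few lines. The data and record format below are invented for illustration and are not DHS's actual systems: the point is simply that a match process starting from departure records can report a near-perfect match rate while never counting arrivals that have no departure record at all (the overstays).

```python
# Hypothetical illustration of why a departure-record-based match rate
# says nothing about overstay rates. All data here is invented.

arrivals = [{"id": "A"}, {"id": "B"}, {"id": "C"}, {"id": "D"}]  # 4 arrivals
departures = [{"id": "A"}, {"id": "B"}, {"id": "C"}]             # "D" never departs

# Departure-first approach (the starting point DHS described): for each
# departure record, look for a prior arrival record.
arrival_ids = {r["id"] for r in arrivals}
matched = [d for d in departures if d["id"] in arrival_ids]
departure_match_rate = len(matched) / len(departures)  # 3 of 3 departures match

# Arrival-first approach: for each arrival record, look for a departure.
# Only this direction surfaces travelers who never left.
departure_ids = {d["id"] for d in departures}
overstays = [a for a in arrivals if a["id"] not in departure_ids]
overstay_rate = len(overstays) / len(arrivals)  # 1 of 4 arrivals overstayed

print(departure_match_rate, overstay_rate)
```

In this toy example the departure-based match rate is 100 percent even though one in four arrivals (traveler "D") never departed; only the arrival-first calculation exposes that fact, which is the gap we identified in DHS's planned certification methodology.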
DHS has also begun to pilot the Immigration Advisory Program (IAP), which is designed to provide additional scrutiny to passengers and their travel documents at foreign airports prior to their departure for the United States. This pilot program began in 2004 and was designed to identify and target potential high-risk passengers. Under the IAP pilot, CBP has assigned trained officers to foreign airports where they personally interview pre-identified high-risk passengers, conduct behavioral assessments, and evaluate the authenticity of travel documents prior to the passenger's departure to the United States. The pilot program has been tested in several foreign airports, and CBP is negotiating with other countries to expand it elsewhere and to make certain IAP sites permanent. CBP has reported several successes through the IAP pilot. According to CBP documents, from the start of the IAP pilot in June 2004 through February 2006, IAP teams made more than 700 no-board recommendations for inadmissible passengers and intercepted approximately 70 fraudulent travel documents. CBP estimated that these accomplishments equate to about $1.1 million in cost avoidance for the U.S. government associated with detaining and removing passengers who would have been turned away after their flights landed, and $1.5 million in air carrier savings in avoided fines and passenger return costs. According to CBP, these monetary savings have defrayed the costs of implementing the program. In May 2007, we reported that CBP has not taken all of the steps necessary to fully learn from its pilot sites in order to determine whether the program should be made permanent and the number of sites that should exist. These steps are part of a risk management approach to developing and evaluating homeland security programs. A risk management framework includes such elements as formally outlining the goals of the program, setting measurable performance targets, and evaluating program effectiveness. 
Although CBP is currently taking steps to make its IAP sites permanent and to expand the program to other foreign locations, CBP has not finalized a strategic plan for the program that delineates program goals, objectives, constraints, and evaluative criteria. CBP officials told us that they have drafted a strategic plan for the IAP, which contains program goals and performance measures. CBP stated that the plan has not yet been finalized. CBP has made progress in taking actions to secure our nation's borders. It has enhanced its ability to screen travelers before they arrive in the United States as well as once they arrive at a port of entry. Nevertheless, vulnerabilities still exist and additional actions are required to address them. How long it will take and how much it will cost are two questions that plague two of DHS's major border security initiatives. Whether DHS can implement the exit portion of US-VISIT is uncertain. For land ports of entry, according to DHS, there is no near-term solution. Completing the SBI initiative, including building nearly 700 miles of fencing, within time and cost estimates will be challenging. These issues underscore Congress' need to stay closely attuned to DHS's progress in these programs to help ensure performance, schedule, and cost estimates are achieved and the nation's border security needs are fully addressed. This concludes my prepared testimony. I would be happy to respond to any questions that you or members of the subcommittees may have. For questions regarding this testimony, please call Richard M. Stana at (202) 512-8777 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Other key contributors to this statement were John Brummet, Assistant Director; Deborah Davis, Assistant Director; Michael Dino, Assistant Director; John Mortin, Assistant Director; Teresa Abruzzo; Richard Ascarate; Katherine Bernet; Jeanette Espinola; Adam Hoffman; and Bintou Njie. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since September 11, 2001, the need to secure U.S. borders has increased in importance and attracted greater public and Congressional attention. The Department of Homeland Security (DHS) has spent billions of dollars to prevent the illegal entry of individuals and contraband between ports of entry--government-designated locations where DHS inspects persons and goods to determine whether they may be lawfully admitted into the country. Yet, while DHS apprehends hundreds of thousands of such individuals each year, several hundred thousand more enter the country illegally and undetected. U.S. Customs and Border Protection (CBP), a component of DHS, is the lead federal agency in charge of securing our nation's borders. This testimony summarizes GAO's work on DHS's efforts on selected border security operations and programs related to (1) inspecting travelers at U.S. ports of entry, (2) detecting individuals attempting to enter the country illegally between ports of entry, and (3) screening international travelers before they arrive in the United States, as well as the challenges remaining in these areas. GAO's observations are based on products issued from May 2006 through February 2008. In prior reports, GAO has recommended various actions to DHS to, among other things, help address weaknesses in the traveler inspection programs and processes, and challenges in training officers to inspect travelers and documents. DHS has generally agreed with GAO's recommendations and has taken various actions to address them. CBP has taken actions to improve traveler inspections at U.S. ports of entry, but challenges remain. First, CBP has stressed the importance of effective inspections and trained CBP supervisors and officers in interviewing travelers. Yet, weaknesses in traveler inspection procedures and a lack of physical infrastructure and staff have hampered CBP's ability to inspect travelers thoroughly and detect fraudulent documents. 
Second, CBP is implementing an initiative requiring citizens of the United States, Bermuda, Canada, and Mexico to present certain identification documents when entering the United States. As of December 2007, actions taken to meet the initiative's requirements include selecting technology to be used at land ports of entry and developing plans to train officers to use it. Finally, DHS has developed a program to collect, maintain, and share data on selected foreign nationals entering and exiting the country. As of October 2007, the agency had invested more than $1.5 billion in the program over 4 years, and biometrically enabled entry capabilities now operate at more than 300 ports of entry. However, though DHS has allocated about $250 million to exit-related efforts since 2003, it has not yet detailed how it will verify when travelers exit the country. In November 2005, DHS announced the launch of a multiyear, multibillion-dollar program aimed at securing U.S. borders and reducing the number of individuals who enter the United States illegally and undetected between ports of entry. One component of this program, which DHS accepted as complete in February 2008, was an effort to secure 28 miles along the southwest border using, among other means, improved cameras and radars. DHS plans to apply the lessons learned to future projects. Another program component--370 miles of pedestrian fence and 300 miles of vehicle fence--has not yet been completed, and DHS will be challenged to do so by its December 2008 deadline due to various factors, such as acquiring rights to border lands. Additionally, DHS is unable to estimate the total cost of this component because various factors are not yet known, such as the type of terrain where the fencing is to be constructed. Finally, CBP has experienced unprecedented growth in the number of its Border Patrol agents. 
While initial training at the academy is being provided, Border Patrol officials expressed concerns about the agency's ability to provide sufficient field training. To screen international travelers before they arrive in the United States, the federal government has implemented new policies and programs, including enhancing visa security and providing counterterrorism training to overseas consular officials. As GAO previously recommended, DHS needs to better manage risks posed by a program that allows nationals from 27 countries to travel to the United States without a visa for certain durations and purposes. Regarding the prescreening of international passengers bound for the United States, CBP has a pilot program that provides additional scrutiny of passengers and their travel documents at foreign airports prior to their departure. CBP has reported several successes through the pilot but has not yet determined whether to make the program permanent.
The federal government levies excise taxes on entities and individuals for the purpose of financing general federal activities and specific government programs. Several different bureaus and offices within Treasury collected about $59 billion of excise taxes in fiscal year 1997. The Bureau of Alcohol, Tobacco, and Firearms accounted for about $13 billion in excise taxes on alcohol, tobacco products, and firearms while the U.S. Customs Service accounted for about $1 billion in excise taxes on imported and exported goods and services. However, the majority of excise taxes are accounted for by IRS. In fiscal year 1997, IRS collected about $45 billion in excise taxes on the purchase, use, or inventory of various types of goods or services, such as gasoline and airline tickets. The various excise taxes accounted for by IRS are deposited into the general fund of the Treasury and into nine different trust funds, which are administered by six agencies or federal entities. The trust funds that received fiscal year 1997 tax revenues are shown in table 1. A list of excise taxes by trust fund is included in appendix II. Administering agencies for the trust funds receiving excise tax revenue rely on the Treasury to accurately collect and distribute federal tax revenue to the appropriate trust funds. Because it collects federal tax revenue and then distributes it to government trust funds, Treasury is considered a servicing organization by agencies administering the trust funds as well as by the auditors of these agencies. Consequently, the administering agencies and their auditors need to rely on Treasury, through its various bureaus and offices, including IRS, to properly account for and distribute the amounts transferred from the government’s general fund to the applicable trust funds. Excise taxes are deposited into the general fund as received. 
However, the information that ultimately determines how these receipts are actually distributed is generally submitted via the Form 720, Quarterly Federal Excise Tax Return. Because data are not available to allocate excise taxes to the appropriate trust funds when deposits are made, Treasury uses a process to estimate the initial distribution of excise taxes. This process involves the use of economic models prepared by the Office of Tax Analysis (OTA) to estimate the initial distribution of tax receipts. Treasury’s Financial Management Service (FMS) uses these estimates to prepare entries for the initial distributions to the trust funds, which are recorded by the Bureau of the Public Debt (BPD) in the books and records of the trust funds maintained by Treasury. Subsequent to this initial distribution, IRS certifies quarterly the amounts that should have been distributed to the excise tax-related trust funds based on the tax returns. FMS uses these certifications to prepare adjustments to the initial trust fund distributions. These adjustments are recorded by BPD. There is typically a 6-month lag between the quarter end and the excise tax certification by IRS. Figure 1 provides an overview of the entire process of collecting, distributing, and certifying excise tax revenue reported to the trust funds. IRS relies on a combination of manual and automated procedures to prepare its certification of excise taxes to be distributed to the trust funds. IRS calculates the trust fund distributions based on assessment information in the master file. As quarterly excise tax returns are received, IRS personnel input the liability amounts by type of excise tax, such as Diesel Fuel Tax, into its master file. The tax types are identified by IRS numbers, or abstract numbers, which are preprinted on the Form 720. It is these abstract numbers that ultimately determine how amounts are distributed to the appropriate trust funds. 
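The lag between the initial OTA-estimate-based distribution and the later IRS certification implies a simple adjusting computation: for each trust fund, the adjustment is the certified amount minus the amount initially distributed. A minimal sketch of that computation (the fund names and dollar figures are hypothetical, and the actual FMS/BPD accounting entries are more involved):

```python
def certification_adjustments(estimated, certified):
    """Compute the adjusting entry for each trust fund: the IRS-certified
    amount minus the initial distribution based on OTA estimates.
    A positive adjustment moves additional money from the general fund to
    the trust fund; a negative one moves money back."""
    funds = set(estimated) | set(certified)
    return {f: certified.get(f, 0) - estimated.get(f, 0) for f in funds}

# Hypothetical quarter: initial distributions vs. certified amounts ($ millions)
estimated = {"Highway": 5200, "Airport & Airway": 1800}
certified = {"Highway": 5150, "Airport & Airway": 1825}
adjustments = certification_adjustments(estimated, certified)
# Highway is adjusted down by 50; Airport & Airway is adjusted up by 25
```

The same arithmetic applies each quarter, about six months after quarter end, once the certification is received.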
The assessment information by type of excise tax is electronically transmitted from the master file to IRS’ Automated Quarterly Excise Tax Listing (AQETL) system. An IRS analyst, who has sole responsibility for preparing the excise tax certifications, accesses this system, analyzes the data for reasonableness by, for example, comparing current period assessments to amounts reported in prior periods, and makes adjustments, as necessary. The analyst may identify necessary adjustments by analyzing significant variations from prior quarter reported assessment amounts. After making any needed adjustments, the analyst generates a report from the AQETL system which summarizes the assessment data by excise tax type. The analyst uses this report to prepare the certifications for all tax distributions other than taxes related to the Highway, Airport & Airway, and Inland Waterways Trust Funds. For the Highway, Airport & Airway, and Inland Waterways Trust Funds, the analyst manually enters the assessment data from the report generated from the AQETL system onto electronic spreadsheets. These spreadsheets contain distribution rates to allocate the assessments between the trust funds and the general fund based on the assessment data entered by the analyst. The distributions from these spreadsheets, and the AQETL-system report for the other taxes, become the basis for preparing the quarterly excise tax certification letters. IRS submits the certification letters to FMS, which uses them to prepare adjustments to the initial distributions based on the OTA estimates to bring them in line with the IRS certified amounts. These adjustments are sent to BPD, which records the entries in the books and records of the trust funds maintained by Treasury. Figure 2 shows IRS’ process for certifying the trust fund distributions. [Figure 2 depicts the steps performed by the Excise Tax Section at the Cincinnati Service Center: review master file and AQETL data; research apparent errors and correct data in the master file as necessary; review assessment information, make any adjustments in AQETL, and generate the AQETL report; and enter assessment data for the Highway Trust Fund into an electronic spreadsheet.]
The objective of the agreed-upon procedures work was to assist the Inspectors General of the Department of Transportation and Department of Labor in ascertaining whether the net excise tax collections and excise tax certifications reported by IRS for the fiscal year ended September 30, 1997, were supported by the underlying records. The objectives of this report are to discuss the underlying internal control weaknesses that allowed errors identified in the agreed-upon procedures work to occur and to provide recommendations for correcting these weaknesses. See appendix I for a detailed discussion on the scope and methodology used to accomplish the objectives. We conducted our work primarily from October 1997 through February 1998, with some follow-up work through June 1998, in accordance with generally accepted government auditing standards. For the majority of excise taxes reported on the Form 720, taxpayers are required to provide the purchase, use, or inventory amounts of the goods or services (e.g., number of gallons of fuel) used in determining the tax assessment. The taxpayer multiplies these amounts against the preprinted tax rates on the Form 720 to report the excise tax assessment. Thus, information contained on the tax form allows IRS to mathematically verify liability amounts reported by the taxpayer. However, we found that IRS did not require its personnel to verify that the tax assessment amounts calculated by the taxpayers and reported on the returns agree with the supporting information provided on the tax returns. This led to inconsistencies between the assessed amount and supporting information provided by taxpayers, which IRS did not detect and correct. 
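The math verification described above (the supporting quantity multiplied by the preprinted rate should equal the reported assessment) lends itself to an automated check. A hedged sketch of such a check, where the quantities, rate, and rounding tolerance are illustrative rather than IRS's actual processing rules:

```python
def verify_assessment(quantity, rate, reported_tax, tolerance=0.5):
    """Recompute the excise tax from the supporting quantity provided by the
    taxpayer (e.g., gallons of fuel) and the preprinted rate, and flag any
    return where the reported assessment disagrees beyond a small rounding
    tolerance. Returns (is_consistent, expected_tax)."""
    expected = quantity * rate
    return abs(expected - reported_tax) <= tolerance, expected

# Illustrative line item: 100,000 gallons at a hypothetical 24.4-cent rate
ok, expected = verify_assessment(100_000, 0.244, 24_400.00)   # consistent
bad, _ = verify_assessment(100_000, 0.244, 25_000.00)         # inconsistent
```

A return failing this check would be set aside for follow-up with the taxpayer rather than processed as filed.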
In 13 of the 230 taxpayer returns we reviewed, either assessment amounts we recalculated based on information contained in the return differed from the tax assessment reported on the return or all the required information was not included on the return to verify the assessment amount calculated by the taxpayer. IRS procedure manuals required that IRS personnel review tax returns that contain $1 million or more in excise tax assessments for reasonableness and accuracy. The manuals provided guidance for performing the reviews; however, this guidance was too general. As a result, the types of reviews performed by IRS analysts varied. In some cases, tax calculations were verified and taxpayers were contacted if data were missing, while in other cases, the return was only scanned for reasonableness. The lack of adequate and consistent review procedures increases the likelihood that incorrect assessment amounts reported by the taxpayer on the tax return would not be detected and corrected by IRS. As a result of our agreed-upon procedures work, IRS officials indicated that IRS has acted to address the internal control weaknesses discussed above. Specifically, these officials indicated that IRS implemented procedures to improve the review of tax returns over $1 million. Also, IRS now requires the math verification of all tax assessments, as applicable, and analysts are required to follow up with taxpayers to clarify inconsistent information on tax returns. We also noted that IRS centralized its excise tax processing in the Cincinnati Service Center to improve the consistency of processing and reviewing excise tax returns and to more closely monitor refund claims. Within that center, IRS established an Excise Program Section that specializes in reviewing excise tax returns and refund claims. 
It is significant that many of the errors we identified during our agreed upon procedures work related to tax returns processed at other service centers prior to IRS centralizing its excise tax processing. As discussed above, taxpayers report the majority of excise taxes to IRS quarterly using the Form 720. Taxpayers record on the Form 720 assessment amounts owed for each abstract number listed on the form. IRS uses the Form 720 to input assessment information into the master files. We found errors in this input process in fiscal year 1997. Specifically, we found that all or a portion of the assessment amounts for 13 of the 230 taxpayer returns reviewed were recorded in incorrect abstract numbers in the master file. In one case, IRS incorrectly recorded assessments of $176 million from the tax return in one abstract, yet the tax return indicated that this amount should have been divided among eight different abstracts. Because the abstract numbers identify the type of excise tax (for example, Diesel Fuel Tax) to which the assessment applies and are used in the certification of amounts ultimately distributed to the various trust funds, this directly affected the accuracy of IRS’ certifications. IRS officials indicated that these errors would be corrected in subsequent certifications made in fiscal year 1998. The structure of the Form 720 itself contributed to several errors. The Form 720 tax return is a complex tax form consisting of three distinct parts and two additional schedules. Information on the schedules includes details on excise tax assessments by semimonthly period (Schedule A), and adjustments to correct errors in previously filed Form 720s and claims against previously paid taxes (Schedule C). The information on Schedule C containing the claim and adjustment data is broken down by abstract number; however, it is aggregated into one total line on page 2 of the Form 720. 
Consequently, taxpayers record on the Form 720 assessment amounts owed for each abstract number listed on the form but do not reflect claims and adjustments, by abstract, on pages 1 and 2 of the Form 720. To assist in processing the tax return, IRS requires its staff to copy claims and adjustments listed on Schedule C, by abstract, to pages 1 and 2 of the Form 720. This procedure provides the data entry staff with the capability of inputting assessment, claim, and adjustment amounts, by abstract, directly off the first two pages of the tax return form without having to scan the schedules for claim and adjustment amounts to be input. However, the procedure of IRS staff manually copying claim and adjustment amounts from the schedules prepared by taxpayers increases the risk of errors, and consequently the likelihood that assessment, claim, and adjustment amounts will be incorrectly recorded in the master files. Nine of the 13 errors that we identified were the result of (1) IRS personnel incorrectly copying the adjustment information from the Schedule C to pages 1 and 2 of the tax return, (2) IRS personnel failing to copy adjustment information from the Schedule C to pages 1 and 2 of the tax return, or (3) data entry personnel misreading the handwritten adjustments made by other IRS staff on the Form 720 when inputting this information into the master files. For example, in one case, a taxpayer claimed a credit of $683,000, consisting of a $685,000 decrease for gasoline tax and a $2,000 increase for aviation fuel tax. However, IRS staff incorrectly recopied the credit amounts from the Schedule C to page 1 of the Form 720, resulting in the entire amount being recorded as gasoline tax. In another case, a taxpayer claimed a credit for $681,000 for taxed diesel fuel. An IRS employee copied the abstract number unclearly to page 1 of the Form 720, and the amount was erroneously recorded as a credit to tax on dyed diesel fuel used in trains. 
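Many of the copying errors described above are detectable by reconciling the per-abstract detail on Schedule C against the amounts carried to pages 1 and 2 and against the one-line total on page 2. A sketch of such a consistency check; the abstract numbers and amounts are hypothetical, patterned loosely on the $683,000 example:

```python
def reconcile_schedule_c(schedule_c_by_abstract, copied_to_pages, page2_total):
    """Check that (1) claims/adjustments copied onto pages 1 and 2 match the
    Schedule C detail abstract by abstract, and (2) the one-line total on
    page 2 equals the sum of the Schedule C detail. Returns discrepancies."""
    problems = []
    for abstract, amount in schedule_c_by_abstract.items():
        if copied_to_pages.get(abstract) != amount:
            problems.append(f"abstract {abstract}: Schedule C shows {amount}, "
                            f"copied amount is {copied_to_pages.get(abstract)}")
    if sum(schedule_c_by_abstract.values()) != page2_total:
        problems.append("page 2 total does not equal Schedule C detail")
    return problems

# Hypothetical recopying error: a -685,000 credit and a +2,000 adjustment,
# mistakenly carried forward as a single -683,000 amount on one abstract
schedule_c = {"62": -685_000, "69": 2_000}   # abstract numbers are illustrative
copied = {"62": -683_000, "69": 0}
errors = reconcile_schedule_c(schedule_c, copied, -683_000)
```

Here the page 2 total happens to agree, so only the two per-abstract mismatches are flagged; a check limited to the total alone would have missed the misclassification entirely.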
In total, in the 13 cases, we identified $179 million of IRS errors in inputting excise tax return information to the master files. The Comptroller General’s Standards for Internal Controls in the Federal Government specifies that transactions are to be promptly recorded and properly classified. The identified errors may have been avoided had procedures been in place to verify the input process. Also, errors resulting from the need for IRS staff to transfer information from the attached schedules to pages 1 and 2 of the Form 720 for each abstract could be avoided by revising the tax return form so that taxpayers, and not IRS personnel, enter the claim and adjustment amounts by abstract from Schedule C to pages 1 and 2 of the tax return. As discussed previously, one analyst is responsible for compiling the quarterly certifications. This involves accessing quarterly the assessment information from the AQETL system, analyzing and adjusting these data as necessary and, for the Highway Trust Fund, inputting these data into an electronic spreadsheet, provided by OTA, to derive the quarterly certifications. We found that there is no supervisory review of the analyst’s work until the certification letters are prepared, at which point they are forwarded to the Branch Chief for a high-level review and signature. We found no evidence that a detailed supervisory review is performed of the documentation supporting the certifications at any point during the certification process. Finally, we found that IRS does not review the distribution rates contained on the OTA-provided spreadsheet used to allocate certain assessments between the general fund and the Highway Trust Fund. The absence of such reviews was a factor in not detecting numerous errors in the certifications performed in fiscal year 1997 with respect to the Highway Trust Fund and the general fund. IRS’ AQETL system contains the assessment data electronically transmitted from the master file. 
Because it is not integrated with the electronic spreadsheet used to prepare the certifications for the Highway Trust Fund, manual data entry is necessary to accomplish the calculations and summarize the information. This information is a basis for preparing the certifications. Without adequate supervisory review of these tasks, all of which are performed by one individual, there is a high risk that errors will be made and not detected and corrected. The Comptroller General’s Standards for Internal Controls in the Federal Government specifies that qualified and continuous supervision is to be provided to ensure that internal control objectives are achieved. The lack of adequate supervisory review can lead to incorrect certifications and inaccurate distributions to the trust funds. We found a number of such errors that occurred in fiscal year 1997. For example, we found assessment amounts that (1) were inadvertently omitted from the certifications and (2) did not agree with supporting documentation. In one case related to heavy vehicle use tax, the supporting schedule summarizing the tax return information reflected an assessment amount of $195 million but the amount certified was $128 million. As a result, the certified amount for the Highway Trust Fund was understated by $67 million. In another case, assessments for compressed natural gas totaling over $500,000 were omitted from the Highway Trust Fund certification. IRS officials indicated that both of these errors were corrected in a subsequent certification that was made in fiscal year 1998. However, proper supervisory review of the analyst’s work would likely have detected these errors and prevented these inaccurate distributions. IRS does not have procedures for verifying the accuracy of distribution rates contained on the electronic spreadsheet provided by OTA. 
These rates, many of which are based on complex formulas derived from provisions of laws, are used to allocate assessments between the general fund and the Highway Trust Fund. The lack of IRS review of the distribution rates on this spreadsheet resulted in errors in the excise tax certifications for the Highway Trust Fund going undetected. For example, we found the following problems in the electronic spreadsheet provided by OTA: incorrect application rates to allocate gasohol taxes, which resulted in an overstatement to the Highway Trust Fund and a corresponding understatement to the general fund of $89,000; misapplied application rates between the Highway Account and Mass Transit Account for diesel fuel inventory in the certifications for the quarters ending December 1996 and March 1997, which resulted in a net understatement of the Highway Account and a corresponding net overstatement of the Mass Transit Account of $19,000; and missing distribution rate formulas from the spreadsheet, which resulted in tax assessment amounts of $1,000 and $7,000 being excluded from the Highway Trust Fund certification. An IRS review of the distribution rates contained on the spreadsheet could have identified these problems and prevented the distribution errors. The errors we found in the review of the fiscal year 1997 excise tax certification process are the direct result of weaknesses in fundamental internal controls, specifically the lack of appropriate verification and review procedures, at all critical points in the excise tax certification process. These weaknesses led to taxpayer, IRS, and OTA errors going undetected and directly resulted in inaccurate distributions of excise tax revenue to the trust funds in fiscal year 1997. 
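The spreadsheet problems described above (a missing rate formula, rates applied to the wrong account) are exactly what a validation of the distribution rates before use would catch. A minimal sketch, using hypothetical abstract numbers and rates rather than OTA's actual figures:

```python
def allocate(assessments, rates):
    """Allocate each abstract's assessment across accounts using OTA-style
    distribution rates, after validating that every abstract has rates and
    that each abstract's rates sum to 1 (i.e., no missing formulas and no
    dollars silently dropped or double-counted)."""
    totals = {}
    for abstract, amount in assessments.items():
        abstract_rates = rates.get(abstract)
        if not abstract_rates:
            raise ValueError(f"missing distribution rates for abstract {abstract}")
        if abs(sum(abstract_rates.values()) - 1.0) > 1e-9:
            raise ValueError(f"rates for abstract {abstract} do not sum to 1")
        for account, rate in abstract_rates.items():
            totals[account] = totals.get(account, 0.0) + amount * rate
    return totals

# Hypothetical rates splitting a fuel-tax assessment among the Highway
# Account, the Mass Transit Account, and the general fund
rates = {"60": {"Highway Account": 0.85, "Mass Transit Account": 0.12,
                "General Fund": 0.03}}
totals = allocate({"60": 1_000_000.0}, rates)
```

An abstract absent from the rate table, or one whose rates fail to sum to 1, is rejected up front instead of producing an understated certification.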
To strengthen internal controls over IRS’ process of inputting tax return information into the master file, we recommend that IRS: Determine if it would be cost effective to develop and implement procedures requiring either key verification of the assessment amount by excise tax type before final processing or to implement other post-input controls to verify the accuracy of assessment amounts by excise tax type on the master file. In making this determination, IRS should consider establishing a dollar threshold that would ensure coverage of 90 percent of total excise tax assessments from the tax returns. Revise the Form 720 tax return to reflect a separate column adjacent to the column for entering the tax assessment, by abstract number, for the taxpayer to report on pages 1 and 2 of the tax return claims and adjustments, by abstract number, based on the information the taxpayer reports on Schedule C. To strengthen internal controls over IRS’ process of certifying excise tax distributions to the general fund and federal trust funds, we recommend that IRS: Develop, document, and implement review procedures over the adjustment and summarization of assessment data used in the certifications. Specifically, IRS should require detailed supervisory review be performed and documented to ensure that adjustments are reasonable and adequately supported, calculations are appropriately performed, and the certification letter agrees with the supporting schedules. IRS recently changed its procedures to certify excise taxes based on estimated collections. Despite this change, review procedures are still necessary. Establish and implement specific procedures requiring that IRS personnel review the distribution rates provided by OTA prior to those rates being used in the certification of Highway Trust Fund distributions and document evidence of these reviews. In commenting on this report, the IRS Commissioner stated that overall he agreed with our findings and recommendations. 
The Commissioner noted actions either planned or already in process or implemented to address most of the issues raised in this report. These include (1) implementing post-input controls to include a 100 percent review of all returns with tax assessments of $1 million or more, (2) developing review procedures over the adjustment and summarization of collection data used in the certifications, including supervisory reviews prior to final certification, and (3) reviewing, as part of a recently-formed Intra-Treasury Working Group, distribution rate charts provided by OTA prior to using these rates in the certification of Highway Trust Fund distributions. However, the Commissioner disagreed with our recommendation to revise the Form 720 tax return to require taxpayers to report claims and adjustments information on pages 1 and 2 of the tax return form. He expressed concern with how the draft report characterized the tax return form and the accompanying Schedules A and C of the form. Additionally, he noted it would be inappropriate to require the taxpayer to net the tax liability by the claim and adjustment amounts reported on the accompanying Schedule C. We have modified the report to more appropriately reflect the nature of the Form 720 and its accompanying schedules. Consistent with these changes, we modified the recommendation to eliminate the reference to having the taxpayer net the tax liability, by abstract number, for any adjustments or claims, by abstract number, as reported on the accompanying Schedule C. However, we believe that revisions to the tax return form are needed because of the frequency of errors made by IRS in either copying claim and adjustment information from Schedule C to pages 1 and 2 of the tax return or in inputting information copied from the tax return to the master files. 
Specifically, the Form 720 tax return should be revised to reflect a separate column in which the taxpayer would report claims and adjustments from the Schedule C, by abstract number, adjacent to the column reflecting the tax assessment, by abstract number, on pages 1 and 2 of the Form 720. The complete text of the IRS Commissioner’s response to our draft report is presented in appendix IV. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days after the date of this letter. A written statement also must be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this letter. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of the Treasury, the Secretary of Transportation, the Secretary of Labor, and the Inspectors General of the Department of Transportation and the Department of Labor. Copies of this letter will be made available to others upon request. If you have any questions, please call me at (202) 512-9505 or Steven J. Sebastian, Assistant Director, at (202) 512-9521. The objective of the agreed-upon procedures work was to assist the Inspectors General of the Department of Transportation and Department of Labor in ascertaining whether the net excise tax collections and excise tax certifications reported by IRS for the fiscal year ended September 30, 1997, were supported by the underlying records. We did not perform work on excise taxes collected by other Treasury bureaus, such as the Customs Service and the Bureau of Alcohol, Tobacco, and Firearms. 
We did include in our review the Federal Aid to Wildlife Restoration Fund because IRS uses different procedures to certify this trust fund. In performing the agreed-upon procedures, we gained an understanding of the internal controls over the excise tax collection and certification process. The objectives of this report were to discuss the underlying internal control weaknesses that allowed errors identified in the agreed upon procedures work to occur and to provide recommendations for correcting these internal control weaknesses. To accomplish our objectives, we examined, on a test basis, evidence supporting the net excise tax collection amounts reported on the fiscal year 1997 Custodial Financial Statements; specifically, we used Dollar Unit Sampling to select a sample of 396 combined excise tax collection and refund transactions from the master file for the first 9 months of fiscal year 1997, using a confidence level of 80 percent, a test materiality of $400 million, and an expected error amount of $200 million. Of this total, 390 transactions represented collections and six transactions represented refunds; verified sampled excise tax transactions to source documents to determine if the transactions were accurately recorded, posted to the proper tax class, and reported in the appropriate period; performed a predictive test of excise tax revenue collections for the final 3 months of the fiscal year to determine if reported fiscal year 1997 revenue appears consistent and reasonable; reviewed IRS’ revenue receipts and refund reconciliations between its records and Treasury for fiscal year 1997, to determine whether year-end excise tax collection balances from the general ledger materially agree with IRS’ master files and Treasury records; and obtained an understanding of internal controls related to safeguarding assets, compliance with laws and regulations, and financial reporting. 
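Dollar Unit Sampling (also called monetary-unit sampling), mentioned above, selects transactions with probability proportional to their dollar amounts, so large collections are near-certain to be examined. A simplified sketch of the systematic selection step; the population and parameters below are hypothetical, not the actual 396-item sample design:

```python
import random

def dollar_unit_sample(amounts, n, seed=0):
    """Systematic dollar-unit sampling: pick a random start, then step through
    the cumulative dollar population in equal intervals. Each transaction's
    selection probability is proportional to its dollar amount, and an item
    larger than the sampling interval can be selected more than once (i.e.,
    it is effectively a certainty item)."""
    total = sum(amounts)
    interval = total / n
    start = random.Random(seed).uniform(0, interval)
    targets = [start + i * interval for i in range(n)]
    selected, cumulative, t = [], 0.0, 0
    for idx, amount in enumerate(amounts):
        cumulative += amount
        while t < n and cumulative >= targets[t]:
            selected.append(idx)
            t += 1
    return selected

# Hypothetical population of excise tax collection transactions (dollars);
# the $900,000 item dwarfs the rest and will dominate the selections
population = [50_000, 200, 120_000, 75, 900_000, 4_000]
sample = dollar_unit_sample(population, n=3)
```

In an actual application the sample size is set from the confidence level, test materiality, and expected error (here, 80 percent, $400 million, and $200 million, respectively).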
In addition, to assess the reliability of key data inputs and assumptions used in the excise tax certification, we: Recalculated the excise tax assessments on the 230 tax returns associated with the sample of 390 excise tax collections based on the information provided on the returns (e.g., number of gallons of fuel multiplied by the tax rate equals the assessed tax). We reviewed only 230 returns because in some instances more than one receipt transaction related to the same return. Because the sample was selected based on excise tax collections, we were not able to project any errors identified on the corresponding tax assessment amounts. Verified that the excise tax assessment amounts by abstract number on the 230 tax returns were accurately recorded in the IRS master file and in the AQETL report. Determined if the rates used to allocate assessments between selected trust funds and the general fund for the final quarter of fiscal year 1997 were adequately supported. Verified the mathematical accuracy for selected excise tax certifications and traced, on a selected basis, excise tax certifications to supporting schedules. We conducted our work primarily from October 1997 through February 1998, with some follow-up work through June 1998, in accordance with generally accepted government auditing standards. [Appendix table of excise taxes: ticket tax; facilities use; air freight; aviation gasoline; aviation fuel (other than gasoline); aviation fuel (other than gasoline) for use in commercial aviation; aviation fuel (floor stocks); and aviation gasoline (floor stocks).] The following are GAO’s comments on the Internal Revenue Service’s letter dated September 25, 1998. 1. The technical comments from the Chief Counsel have been incorporated as appropriate, but the enclosure has not been included in this appendix. 2. Discussed in “Agency Comments and Our Evaluation” section. 
Charles Payton, Assistant Director; Barbara House, Senior Evaluator; Ted Hu, Senior Auditor; Eric Johns, Senior Auditor; Stacey Osborn, Auditor.
Pursuant to a legislative requirement, GAO conducted a follow-up review of the Internal Revenue Service's (IRS) controls over its process for certifying excise taxes for distribution to the federal trust funds. GAO noted that: (1) IRS does not have adequate controls over its process for certifying excise taxes for distribution to the federal government trust funds; (2) the lack of fundamental internal controls resulted in errors in the certifications going undetected; (3) these errors ultimately affected the amounts distributed to the trust funds during fiscal year 1997; (4) IRS' ineffective controls over the certification process resulted in undetected: (a) mistakes by taxpayers in preparing excise tax returns; (b) input errors by IRS when entering excise tax return information in its master files; and (c) errors by IRS in preparing the excise tax certifications; (5) as a result of these errors, trust funds did not receive the appropriate amount of excise tax revenue; (6) these errors are particularly important to the Highway Trust Fund, which receives over half of the excise taxes that are accounted for by IRS; (7) these weaknesses were a contributing factor in the Department of Transportation's (DOT) Inspector General's: (a) qualified opinion on the Highway Trust Fund financial statements; (b) disclaimer of opinion on the Federal Aviation Administration's financial statements; and (c) disclaimer of opinion on DOT's consolidated financial statements; (8) the errors GAO found relating to taxpayer mistakes, IRS data input, and certification preparation could have been detected or prevented by effective IRS procedures; and (9) IRS has taken some actions to improve certain controls over the excise tax certification process.
The Aviation and Transportation Security Act (ATSA), enacted in November 2001, created TSA and gave it responsibility for securing all modes of transportation. TSA’s aviation security mission includes strengthening the security of airport perimeters and restricted airport areas; hiring and training a screening workforce; prescreening passengers against terrorist watch lists; and screening passengers, baggage, and cargo at the over 400 commercial airports nationwide, among other responsibilities. While TSA has operational responsibility for physically screening passengers and their baggage at most airports, TSA exercises regulatory, or oversight, responsibility for the security of airports and air cargo. Specifically, airports, air carriers, and other entities are required to implement security measures in accordance with TSA security requirements, against which TSA evaluates their compliance efforts. TSA also oversees air carriers’ efforts to prescreen passengers—in general, the matching of passenger information against terrorist watch lists prior to an aircraft’s departure—and plans to take over operational responsibility for this function with the implementation of its Secure Flight program. CBP, which currently has responsibility for prescreening airline passengers on international flights departing from and bound for the United States, will continue to perform this function until TSA assumes this function under Secure Flight. DHS’s S&T is responsible for researching and developing technologies to secure the transportation sector. TSA shares responsibility for securing surface transportation modes with federal, state, and local governments and the private sector. TSA’s security mission includes establishing security standards and conducting assessments and inspections of surface transportation modes, including passenger and freight rail; mass transit; highways and commercial vehicles; and pipelines. 
The Federal Emergency Management Agency's Grant Programs Directorate provides grant funding to surface transportation operators and state and local governments, and in conjunction with certain grants, the National Protection and Programs Directorate conducts risk assessments of surface transportation facilities. Within the Department of Transportation (DOT), the Federal Transit Administration (FTA) and Federal Railroad Administration (FRA) have responsibilities for passenger rail safety and security. In addition, public and private sector transportation operators are responsible for implementing security measures for their systems. DHS, primarily through TSA, has undertaken numerous initiatives to strengthen the security of the nation's aviation and surface transportation systems. In large part, these efforts have been guided by legislative mandates designed to strengthen the security of commercial aviation following the September 11, 2001, terrorist attacks. These efforts have also been affected by events external to the department, including the alleged August 2006 terrorist plot to blow up commercial aircraft bound from London to the United States, and the 2004 Madrid and 2005 London train bombings. While progress has been made in many areas with respect to securing the transportation network, we found that the department can strengthen its efforts in some key areas outlined by Congress, the administration, and the department itself, as discussed below. Airport Perimeter Security and Access Controls. TSA has taken action to strengthen the security of airport perimeters and access to restricted airport areas. However, as we reported in June 2004, the agency can further strengthen its efforts to evaluate the effectiveness of security-related technologies and reduce the risks posed by airport employees, among other things.
In 2006, TSA completed the last project in an access control pilot program that included 20 airports, and which was designed to test and evaluate new and emerging technologies in an airport setting. TSA is also conducting an airport perimeter security pilot at six airports, to test technologies such as vehicle inspection systems. However, TSA has not developed a plan to guide and support individual airports and the commercial airport system as a whole with respect to future technology enhancements for perimeter security and access controls. Without such a plan, TSA could be limited in assessing and improving the effectiveness of its efforts to provide technical support for enhancing security. In addition, we reported in September 2006 and October 2007 on the status of the development and testing of the Transportation Worker Identification Credential program—DHS’s effort to develop biometric access control systems to verify the identity of individuals accessing secure transportation areas. However, DHS has not yet determined how and when it will implement a biometric identification system for access controls at commercial airports. In June 2004, we reported that while background checks were not required for all airport workers, TSA required most airport workers who perform duties in selected areas to undergo a fingerprint-based criminal history records check. TSA further required airport operators to compare applicants’ names against TSA’s security watch lists. In July 2004, consistent with our previous recommendation to determine the need for additional security requirements to reduce the risks posed by airport employees, TSA enhanced requirements for background checks for employees working in restricted airport areas. 
Also consistent with our recommendation, in 2007, TSA further expanded the Security Threat Assessment—which determines, among other things, whether an employee has any terrorist affiliations—to require airport employees who receive an airport-issued identification badge to undergo a review of citizenship status. Further, in March 2007, TSA implemented a random employee screening initiative—the Aviation Direct Access Screening Program—that uses TSOs to randomly screen airport workers and their property for explosives and other threat items. TSA has allocated about 900 full-time equivalent positions to the program and has requested $36 million for fiscal year 2009 for an additional 750 full-time equivalent positions. As directed by Congress in 2008, TSA plans to pilot test various employee screening methods at seven selected airports, including conducting 100 percent employee screening at three of these airports. TSA plans to begin pilot testing in May and report on the results of its efforts—as directed—by September 1, 2008. Finally, consistent with our previous recommendation to develop schedules and an analytical approach for completing vulnerability assessments, TSA has developed criteria for prioritizing vulnerability assessments at commercial airports. However, it has not compiled national baseline data to fully assess security vulnerabilities across airports. In 2004, TSA said an analysis of vulnerabilities on a nationwide basis was essential since it would allow the agency to assess the adequacy of security policies and help better direct limited resources. We are currently reviewing TSA's efforts to enhance airport perimeter and access control security and will report on the results later this year. Aviation Security Workforce. TSA has made progress in deploying, training, and assessing the performance of its federal aviation security workforce.
For example, TSA has hired and deployed a federal screening workforce at over 400 commercial airports nationwide, and developed standards for determining TSO staffing levels at airports. These standards form the basis of TSA’s Staffing Allocation Model, which the agency uses to determine TSO staffing levels at airports. In response to our recommendation, in December 2007 TSA developed a Staffing Allocation Model Rates and Assumptions Validation Plan that identifies the process the agency plans to use to review and validate the model’s assumptions on a periodic basis. TSA also established numerous programs to train and test the performance of its screening workforce. Among other efforts, TSA has provided enhanced explosives-detection training, and recently reported developing a monthly recurrent (ongoing) training plan for all TSOs. In addition, TSA has trained and deployed federal air marshals on high-risk flights; established standards for training flight and cabin crews; and established a Federal Flight Deck Officer program to select, train, and allow authorized flight deck officers to use firearms to defend against any terrorist or criminal acts. In April 2006, TSA implemented a performance accountability and standards system to assess agency personnel at all levels on various competencies, including training and development, readiness for duty, management skills, and technical proficiency. Finally, in April 2007, TSA redesigned its local covert testing program conducted at individual airports. This new program, known as the Aviation Screening Assessment Program or ASAP, is intended to test the performance of the passenger and checked baggage screening systems, to include the TSO workforce. During our ongoing review of TSA’s covert testing program, we identified that TSA has implemented risk-based national and local covert testing programs to identify vulnerabilities in and measure the performance of selected aspects of the aviation system. 
However, we found that TSA could strengthen its program by developing a more systematic process for (1) recording the causes of covert test failures, and (2) evaluating the test results and developing approaches for mitigating vulnerabilities identified in the commercial aviation security system. We will report on the complete results of this review later this year. Passenger Prescreening. Over the past several years, TSA has faced a number of challenges in developing and implementing an advanced prescreening system, known as Secure Flight, which will allow TSA to assume responsibility from air carriers for comparing domestic passenger information against the No Fly List and Selectee List. In February 2008, we reported that TSA had made substantial progress in instilling more discipline and rigor into Secure Flight's development and implementation, including preparing key systems development documentation and strengthening privacy protections. However, challenges remain that may hinder the program's progress moving forward. Specifically, TSA had not (1) developed program cost and schedule estimates consistent with best practices; (2) fully implemented its risk management plan; (3) planned for system end-to-end testing in test plans; and (4) ensured that information-security requirements are fully implemented. To address these challenges, we made several recommendations to DHS and TSA to incorporate best practices in Secure Flight's cost and schedule estimates and to fully implement the program's risk-management, testing, and information-security requirements. DHS and TSA officials generally agreed with these recommendations. We are continuing to assess TSA's efforts in developing and implementing Secure Flight—which, according to TSA's planned schedule, will allow the agency to fully assume the watch list matching function from air carriers in fiscal year 2010.
TSA has also taken steps to integrate the domestic watch-list matching function with the international watch-list matching function currently operated by CBP, consistent with our past recommendations. Specifically, TSA and CBP have coordinated to develop a strategy called the One DHS Solution, which is to align the two agencies’ domestic and international watch-list matching processes, information technology systems, and regulatory procedures to provide a seamless interface between DHS and the airline industry. TSA and CBP also agreed that TSA will take over the screening of passengers against the watch list for international flights from CBP, though CBP will continue to match passenger information to the watch list in fulfillment of its border-related functions. Full implementation of an integrated system is not planned to take place until after Secure Flight acquires the watch-list matching function for domestic flights. Checkpoint Screening. TSA has taken steps to strengthen passenger checkpoint screening procedures to enhance the detection of prohibited items and strengthen security; however, TSA could improve its evaluation and documentation of proposed procedures. In April 2007, we reported that modifications to checkpoint screening standard operating procedures (SOP) were proposed based on the professional judgment of TSA senior- level officials and program-level staff, as well as threat information and the results of covert testing. We also reported on steps TSA had taken to address new and emerging threats, such as establishing the Screening Passengers by Observation Technique (SPOT) program, which provides TSOs with a nonintrusive, behavior-based means of identifying potentially high-risk individuals. For proposed screening modifications deemed significant, such as SPOT, TSA operationally tested these proposed modifications at selected airports before determining whether they should be implemented nationwide. 
However, we reported that TSA's data collection and analysis of proposed SOP modifications could be improved, and recommended that TSA develop sound evaluation methods, when possible, to assess whether proposed screening changes would achieve their intended purpose. TSA has since reported taking steps to work with subject-matter experts to ensure that the agency's operational testing of proposed screening modifications is well designed and executed, and produces results that are scientifically valid and reliable. With regard to checkpoint screening technologies, TSA and S&T have researched, developed, tested, and initiated procurements of various technologies to address security vulnerabilities that may be exploited; however, limited progress has been made in fielding emerging technologies. For example, of the various emerging checkpoint screening projects funded by TSA and S&T, only the explosives trace portal and a bottled liquids scanning device have been deployed for use in day-to-day operations. However, due to performance and maintenance issues, TSA halted the acquisition and deployment of the portals in June 2006. Also, in February 2008, we testified that TSA lacked a strategic plan to guide its efforts to acquire and deploy screening technologies, which could limit its ability to deploy emerging technologies to airports deemed at highest risk. According to TSA officials, the agency plans to submit a strategic plan to Congress by June 2008. We have ongoing work reviewing S&T and TSA checkpoint screening technology efforts and will report on our results later this year. Checked Baggage Screening. TSA has made progress in installing explosive detection systems to provide the capability to screen checked baggage at the nation's commercial airports, as mandated by law.
From November 2001 through June 2006, TSA procured and installed about 1,600 Explosive Detection Systems (EDS) and about 7,200 Explosive Trace Detection (ETD) machines to screen checked baggage for explosives at over 400 commercial airports. In addition, based in part on recommendations we made, TSA moved stand-alone EDS machines that were located at airports that received new in-line EDS baggage screening systems to 32 airports that did not previously have them from May 2004 through December 2007. TSA also replaced ETD machines at 53 airports with 158 new EDS machines from March 2005 through December 2007. In response to mandates to field the equipment quickly and to account for limitations in airport design that made it difficult to quickly install in-line EDS systems, TSA generally placed baggage screening equipment in a stand-alone mode—usually in airport lobbies—to conduct the primary screening of checked baggage for explosives. Based, in part, on our recommendations, TSA later developed a plan to integrate EDS and ETD machines in-line with airport baggage conveyor systems. The installation of in-line systems can result in considerable savings to TSA through the reduction of personnel needed to operate the equipment, as well as increased security. In addition, according to TSA estimates, the number of checked bags screened per hour can more than double when EDS machines are placed in-line versus being placed in the stand-alone mode. Despite delays in the widespread deployment of in-line systems due to the high upfront capital investment required, TSA is pursuing the installation of these systems and is seeking creative financing solutions to fund their deployment. However, it remains up to individual airports to decide whether to pursue the installation of in-line baggage systems. In February 2008, TSA submitted a legislative proposal to increase the Aviation Security Capital Fund (ASCF) through a new surcharge on the passenger security fee.
According to TSA, this proposal, if adopted, would accelerate the deployment of optimal checked baggage screening systems and address the need to recapitalize existing equipment deployed immediately after September 2001. The Implementing Recommendations of the 9/11 Commission Act reiterates a requirement that DHS submit a cost-sharing study for the installation of in-line baggage screening systems, along with a plan and schedule for implementing provisions of the study, and requires TSA to establish a prioritization schedule for airport improvement projects related to the installation of in-line or other optimal baggage screening systems. As of April 3, 2008, TSA had not completed the prioritization schedule, corresponding timeline, and description of the funding allocation for these projects. Air Cargo Security. In April 2007, we reported that TSA had not developed a strategy for securing inbound air cargo, including defining TSA's and CBP's inbound air cargo security responsibilities. CBP subsequently issued its International Air Cargo Security strategic plan in June 2007, and TSA plans to revise its Air Cargo strategic plan during the third quarter of fiscal year 2008 to incorporate a strategy for addressing inbound air cargo security, including how the agency will partner with CBP. We also reported that TSA had not conducted vulnerability assessments to identify the range of air cargo security weaknesses that could be exploited by terrorists, and recommended that TSA develop a methodology and schedule for completing these assessments. In response in part to our recommendation, TSA implemented an Air Cargo Vulnerability Assessment program in November 2006 and, as of April 2008, had completed vulnerability assessments at five domestic airports. TSA plans to complete assessments of all high-risk airports by 2009.
In addition, although TSA has established requirements for air carriers to randomly screen air cargo, the agency had exempted some domestic and inbound cargo from these requirements. While TSA has since revised its screening exemptions for domestic air cargo, it has not done so for inbound air cargo. TSA is also working with DHS S&T to develop and pilot test a number of technologies to assess their applicability to screening and securing air cargo. However, as of February 2008, TSA had provided a completion date for only one of its five air cargo technology pilot programs. According to TSA officials, the agency will determine whether it will require the use of these technologies once it has completed its assessments and analyzed the results. We also reported in April 2007 that TSA did not systematically compile and analyze information on air cargo security practices used abroad to identify those that may strengthen the department’s overall air cargo security program, and we recommended that it do so. TSA has since begun development of a certified cargo screening program based in part on its review of screening models used in two foreign countries that rely on government-certified screeners to screen air cargo early in the supply chain. According to TSA, the agency plans to deploy this program to assist it in meeting the statutory requirement to screen 100 percent of air cargo transported on passenger aircraft by August 2010 (and to screen 50 percent of such cargo by February 2009), as mandated by the Implementing Recommendations of the 9/11 Commission Act. In January 2008, TSA began phase one of the program’s pilot tests, and as of April 2008, had completed tests at six airports. TSA plans to conduct tests at three additional airports by June 2008. Strategic Approach for Implementing Security Functions. In September 2005, DHS completed the National Strategy for Transportation Security. 
This strategy identified and evaluated transportation assets in the United States that could be at risk of a terrorist attack and addressed transportation sector security needs. Further, in May 2007, DHS issued a strategic plan for securing the transportation sector and supporting annexes for each of the surface transportation modes, and reported taking actions to adopt the strategic approach outlined by the plan. The Transportation Systems Sector-Specific Plan describes the security framework that is intended to enable sector stakeholders to make effective and appropriate risk-based security and resource allocation decisions within the transportation network. TSA has begun to implement some of the security initiatives outlined in the sector-specific plan and supporting modal plans. Additionally, the Implementing Recommendations of the 9/11 Commission Act imposes a deadline of May 2008 for the Secretary of DHS to develop and implement the National Strategy for Public Transportation Security. Our work assessing DHS's efforts in implementing its strategy for securing surface transportation modes is being conducted as part of our ongoing reviews of mass transit, passenger and freight rail, commercial vehicle, and highway infrastructure security. We will report on the results of this work later this year. Threat, Criticality, and Vulnerability Assessments. TSA has taken actions to assess risk by conducting threat, criticality, and vulnerability assessments of surface transportation assets, particularly for mass transit, passenger rail, and freight rail, but its efforts related to commercial vehicles and highway infrastructure are in the early stages. For example, TSA has conducted threat assessments of all surface modes of transportation. TSA has also conducted assessments of the vulnerabilities associated with some surface transportation assets.
For example, regarding freight rail, TSA has conducted vulnerability assessments of rail corridors in eight High Threat Urban Areas where toxic-inhalation-hazard shipments are transported. With respect to commercial vehicles and highway infrastructure, TSA's vulnerability assessment efforts are ongoing. According to TSA, the agency performed 113 corporate security reviews on highway transportation organizations through fiscal year 2007, such as trucking companies, state Departments of Transportation, and motor coach companies. However, TSA does not have a plan or a time frame for conducting these reviews on a nationwide basis. Furthermore, DHS's National Protection and Programs Directorate's Office of Infrastructure Protection conducts vulnerability assessments of surface transportation assets to identify protective measures to reduce or mitigate asset vulnerability. With regard to criticality assessments, TSA reported in April 2008 that the agency had conducted 1,345 assessments of passenger rail stations. Additionally, the Implementing Recommendations of the 9/11 Commission Act has several provisions related to security assessments. For instance, the act requires DHS to review existing security assessments for public transportation systems as well as conduct additional assessments as necessary to ensure that all high-risk public transportation agencies have security assessments. The act also requires DHS to establish a federal task force to complete a nationwide risk assessment of a terrorist attack on rail carriers. We will continue to review threat, vulnerability, and criticality assessments conducted by TSA related to securing surface modes of transportation during our ongoing work. Issuance of Security Standards. TSA has taken actions to develop and issue security standards for mass transit, passenger rail, and freight rail transportation modes.
However, TSA has not yet developed or issued security standards for all surface transportation modes, such as commercial vehicle and highway infrastructure, or determined whether standards are necessary for these modes of transportation. Specifically, TSA has developed and issued both mandatory rail security directives and recommended voluntary best practices—known as Security Action Items—for transit agencies and passenger rail operators to implement as part of their security programs to enhance both security and emergency-management preparedness. TSA also issued a notice of proposed rulemaking in December 2006, which if finalized as proposed, would include additional security requirements for passenger and freight rail transportation operators. For example, the rule would include additional security requirements designed to ensure that freight railroads have protocols for the secure custody transfers of toxic-inhalation-hazard rail cars in High Threat Urban Areas. DHS and other federal partners have also been collaborating with the American Public Transportation Association (APTA) and public and private security professionals to develop industrywide security standards for mass transit systems. APTA officials reported that they expect several of the voluntary standards to be released in mid-2008. Additionally, the Implementing Recommendations of the 9/11 Commission Act requires DHS to issue regulations establishing standards and guidelines for developing and implementing vulnerability assessments and security plans for high-risk railroad carriers and over-the-road bus operators. The deadlines for the regulations are August 2008 and February 2009, respectively. With respect to freight rail, TSA is developing a notice of proposed rulemaking proposing that high-risk rail carriers conduct vulnerability assessments and develop and implement security plans.
We will continue to assess TSA's efforts to issue security standards for other surface transportation modes during our ongoing reviews. Compliance Inspections. TSA has hired and deployed surface transportation security inspectors who conduct compliance inspections for both passenger and freight rail modes of transportation; however, questions exist regarding how TSA will employ the inspectors to enforce new regulations proposed in its December 2006 Notice of Proposed Rulemaking and regulations to be developed in accordance with the Implementing Recommendations of the 9/11 Commission Act. TSA officials reported that the agency had 100 surface transportation inspectors during fiscal year 2005 and, as of December 2007, was maintaining an inspector workforce of about the same size. The agency's budget request for fiscal year 2009 includes $11.6 million to fund 100 surface transportation security inspectors—which would maintain its current staffing level. Inspectors' responsibilities include conducting on-site inspections of key facilities for freight rail, passenger rail, and transit systems; assessing transit systems' implementation of core transit security fundamentals and comprehensive security action items; conducting examinations of stakeholder operations, including compliance with security directives; identifying security gaps; and developing effective practices. To meet these compliance responsibilities, TSA reported in December 2007 that it had conducted voluntary assessments of 50 of the 100 largest transit agencies, including 34 passenger rail and 16 bus-only agencies, and has plans to continue these assessments with the next 50 largest transit agencies during fiscal year 2008. With respect to freight rail, TSA reported visiting, during 2007, almost 300 railroad facilities, including terminal and railroad yards, to assess the railroads' implementation of 17 DHS-recommended Security Action Items associated with the transportation of toxic-inhalation-hazard materials.
TSA has raised concerns about the agency's ability to continue to meet anticipated inspection responsibilities given the new regulations proposed in its December 2006 Notice of Proposed Rulemaking and requirements of the Implementing Recommendations of the 9/11 Commission Act. For example, the act mandates that high-risk over-the-road bus operators, railroad carriers, and public transportation agencies develop and implement security plans, which must include, among other requirements, procedures to be implemented in response to a terrorist attack. The act further requires the Secretary of DHS to review each plan within 6 months of receiving it. TSA officials stated that they believe TSA inspectors will likely be tasked to conduct these reviews. The act also requires that the Secretary of DHS develop and issue interim final regulations by November 2007 for a public transportation security training program. As of April 2008, these interim regulations have not been issued. According to TSA officials, TSA inspectors will likely be involved in ensuring compliance with these regulations as well. To help address these additional requirements, the Implementing Recommendations of the 9/11 Commission Act authorizes funds to be appropriated for TSA to employ additional surface transportation inspectors, and requires that surface transportation inspectors have relevant transportation experience and appropriate security and inspection qualifications. However, it is not clear how TSA will meet these new requirements since the agency has not requested funding for additional surface transportation security inspectors for fiscal year 2009. We will continue to assess TSA's inspection efforts during our ongoing work. Grant Programs. DHS has developed and administered grant programs for various surface transportation modes, although stakeholders have raised concerns regarding the current grant process.
For example, the DHS Office of Grants and Training, now called the Grant Programs Directorate, has used various programs to fund passenger rail security since 2003. Through the Urban Areas Security Initiative grant program, the Grant Programs Directorate has provided grants to urban areas to help enhance their overall security and preparedness level to prevent, respond to, and recover from acts of terrorism. The Grant Programs Directorate used fiscal year 2005, 2006, and 2007 appropriations to build on the work under way through the Urban Areas Security Initiative program, and create and administer new programs focused specifically on transportation security, including the Transit Security Grant Program, Intercity Passenger Rail Security Grant Program, and the Freight Rail Security Grant Program. However, some industry stakeholders have raised concerns regarding DHS’s current grant process, including the shifting of funding priorities, the lack of program flexibility, and other barriers to the provision of grant funding. For example, transit agencies have reported that the lack of predictability in how TSA will assess grant projects against funding priorities makes it difficult to engage in long-term planning of security initiatives. Specifically, transit agencies have reported receiving funding to begin projects—such as retrofitting their transit fleet with security cameras or installing digital video recording systems—but not being able to finish these projects in subsequent years because TSA had changed its funding priorities. The Implementing Recommendations of the 9/11 Commission Act codifies surface transportation grant programs and imposes statutory requirements on the administration of the programs. For example, the act lists authorized uses of these grant funds and requires DHS to award the grants based on risk. 
It also requires that DHS and DOT determine the most effective and efficient way to distribute grant funds, authorizing DHS to transfer funds to DOT for the purpose of disbursement. According to the TSA fiscal year 2009 budget justification, to ensure that the selected projects are focused on increasing security, DHS grants are to be awarded based on risk. We will continue assessing surface transportation related grant programs as part of our ongoing work. Our work has identified homeland security challenges that cut across DHS’s mission and core management functions. These issues have impeded the department’s progress since its inception and will continue to confront DHS as it moves forward. These issues include (1) establishing baseline performance goals and measures and engaging in effective strategic planning efforts; (2) applying and strengthening a risk-management approach for implementing missions and making resource allocation decisions; and (3) coordinating and partnering with federal, state, and local agencies, and the private sector. We have made numerous recommendations to DHS and its components, including TSA, to strengthen these efforts, and the department has made progress in implementing some of these recommendations. DHS has not always implemented effective strategic planning efforts and has not yet fully developed performance measures or put into place structures to help ensure that the agency is managing for results. For example, with regard to TSA’s efforts to secure air cargo, we reported in October 2005 and April 2007 that TSA completed an Air Cargo Strategic Plan in November 2003 that outlined a threat-based risk-management approach to securing the nation’s domestic air cargo system, and that this plan identified strategic objectives and priority actions for enhancing air cargo security based on risk, cost, and deadlines. 
However, TSA had not developed a similar strategy for addressing the security of inbound air cargo—cargo transported into the United States from foreign countries— including how best to partner with CBP and international air cargo stakeholders. In another example, we reported in April 2007 that TSA had not yet developed outcome-based performance measures for its foreign airport assessment and air carrier inspection programs, such as the percentage of security deficiencies that were addressed as a result of TSA’s on-site assistance and recommendations, to identify any aspects of these programs that may need attention. We recommended that DHS direct TSA and CBP to develop a risk-based strategy, including specific goals and objectives, for securing air cargo; and develop outcome-based performance measures for its foreign airport assessment and air carrier inspection programs. DHS generally concurred with GAO’s recommendations with regard to air cargo, and is taking steps to strengthen its efforts in this area. Although DHS and TSA have made risk-based decision-making a cornerstone of departmental and agency policy, DHS and TSA could strengthen their application of risk management in implementing their mission functions. Several DHS component agencies and TSA have worked towards integrating risk-based decision making into their security efforts, but we reported that these efforts can be strengthened. For example, TSA has incorporated certain risk-management principles into securing air cargo, but has not completed assessments of air cargo vulnerabilities or critical assets—two crucial elements of a risk-based approach. TSA has also incorporated risk-based decision making when making modifications to airport checkpoint screening procedures, to include modifying procedures based on intelligence information and vulnerabilities identified through covert testing at airport checkpoints. 
However, in April 2007, we reported that TSA’s analyses that supported screening procedural changes could be strengthened. For example, TSA officials based their decision to revise the prohibited items list to allow passengers to carry small scissors and tools onto aircraft based on their review of threat information—which indicated that these items do not pose a high risk to the aviation system—so that TSOs could concentrate on higher threat items. However, TSA officials did not conduct the analysis necessary to help them determine whether this screening change would affect TSOs’ ability to focus on higher-risk threats. As noted earlier in this statement, TSA is taking steps to strengthen its efforts in both of these areas. In addition to providing federal leadership with respect to homeland security, DHS also plays a large role in coordinating the activities of key stakeholders, but has faced challenges in this regard. Although improvements are being made, we have found that the appropriate homeland security roles and responsibilities within and between the levels of government, and with the private sector, are evolving and need to be clarified. For example, we reported that opportunities exist for TSA to work with foreign governments and industry to identify best practices for securing passenger rail and air cargo, and recommended that TSA systematically compile and analyze information on practices used abroad to identify those that may strengthen the department’s overall security efforts. With regard to air cargo, TSA has subsequently reviewed the models used in two foreign countries that rely on government-certified screeners to screen air cargo to facilitate the design of the agency’s proposed certified-cargo screening program. Further, in September 2005, we reported that TSA did not effectively involve private sector stakeholders in its decision making process for developing security standards for passenger rail assets. 
We recommended that DHS develop security standards that reflect industry best practices and can be measured, monitored, and enforced by TSA rail inspectors and, if appropriate, rail asset owners. DHS agreed with these recommendations. Regarding efforts to respond to in-flight security threats, which, depending on the nature of the threat, could involve more than 15 federal agencies and agency components, in July 2007 we also recommended that DHS and other departments document and share their respective coordination and communication strategies and response procedures, to which DHS agreed. The Implementing Recommendations of the 9/11 Commission Act includes provisions designed to improve coordination with stakeholders. For example, the act requires DHS and DOT to develop an annex to the Memorandum of Understanding between the two departments governing the specific roles, responsibilities, resources, and commitments in addressing motor carrier transportation security matters, including the processes the departments will follow to promote communications and efficiency, and avoid duplication of effort. The act also requires DHS, in consultation with DOT, to establish a program to provide appropriate information that DHS has gathered or developed on the performance, use, and testing of technologies that may be used to enhance surface transportation security to surface transportation entities. According to TSA, the agency has begun to provide transit agencies with information on recommended available security technologies through security roundtables for the top 50 transit agencies; the posting of an authorized equipment list on the Homeland Security Information Network Web site; and periodic briefings to other federal agencies. The magnitude of DHS’s and TSA’s responsibilities in securing the nation’s transportation system is significant, and we commend the department on the work it has done and is currently doing to secure this network. 
Nevertheless, given the dominant role that TSA plays in securing the homeland, it is critical that the agency continually strive to strengthen its programs and initiatives to counter emerging threats and improve security. In the almost 6½ years since its creation, TSA has had to undertake its critical mission while also establishing and forming a new agency. At the same time, a variety of factors, including threats to and attacks on transportation systems around the world, as well as new legislative requirements, have led the agency to reassess its priorities and reallocate resources to address key events, and to respond to emerging threats. Although TSA has made considerable progress in addressing key aspects of commercial aviation security, more work remains in some key areas, such as the deployment of technologies to detect explosives at checkpoints and in air cargo. Further, although TSA has more recently taken action in a number of areas to help secure surface modes of transportation, its efforts are still largely in the early stage, and the nature of its regulatory role and relationship with transportation operators is still being defined. As DHS and TSA move forward, it will be important for the department to address the challenges that have affected its operations thus far, while continuing to adapt to new threats and needs, as well as increase the effectiveness and efficiency of existing programs and operations. We will continue to review DHS’s and TSA’s progress in securing the transportation network, and will provide information to Congress and the public on these efforts. Madam Chairwoman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. For further information on this testimony, please contact Cathleen Berrick at (202) 512-3404 or at [email protected]. Individuals making key contributions to this testimony include Steve D. 
Morris, Assistant Director; Jason Berman; Kristy Brown; Martene Bryan; Tony Cheesebrough; Fatema Choudhury; Chris Currie; Joe Dewechter; Dorian Dunbar; Barbara Guffy; John Hansen; Dawn Hoff; Daniel Klabunde; Anne Laffoon; Gary Malavenda; Sara Margraf; Victoria Miller; Dan Rodriguez; Maria Strudwick; Spencer Tacktill; Gabriele A. Tonsil; Margaret A. Ullengren; Margaret Vo; and Su Jin Yon. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Within the Department of Homeland Security (DHS), the Transportation Security Administration's (TSA) mission is to protect the nation's transportation network. Since its inception in 2001, TSA has developed and implemented a variety of programs and procedures to secure commercial aviation and surface modes of transportation. Other DHS components, federal agencies, state and local governments, and the private sector also play a role in transportation security. GAO has examined (1) the progress TSA and other DHS components have made in securing the nation's aviation and surface transportation systems, and the challenges that remain, and (2) crosscutting issues that have impeded TSA's efforts in strengthening security. This testimony is based on GAO reports and testimonies issued from February 2004 to February 2008 and ongoing work regarding the security of the nation's aviation and surface transportation systems, as well as selected updates to this work conducted in April 2008. To conduct this work, GAO reviewed documents related to TSA security efforts and interviewed TSA and transportation industry officials. DHS, primarily through TSA, has made progress in securing the aviation and surface transportation networks, but more work remains. With regard to commercial aviation, TSA has undertaken efforts to strengthen airport security; hire, train, and measure the performance of its screening workforce; prescreen passengers against terrorist watch lists; and screen passengers, baggage, and cargo. With regard to surface transportation modes, TSA has taken steps to develop a strategic approach for securing mass transit, passenger and freight rail, commercial vehicles, and highways; establish security standards for certain transportation modes; and conduct threat, criticality, and vulnerability assessments of surface transportation assets, particularly passenger and freight rail. 
TSA also hired and deployed compliance inspectors and conducted inspections of passenger and freight rail systems. While these efforts have helped to strengthen the security of the transportation network, DHS and TSA still face a number of key challenges in further securing these systems. For example, regarding commercial aviation, although TSA has made significant progress in its development of an advanced passenger prescreening system, known as Secure Flight, challenges remain, including unreliable program cost and schedule estimates, among other things. In addition, TSA's efforts to enhance perimeter security at airports may not be sufficient to provide for effective security. For example, TSA has initiated efforts to evaluate the effectiveness of security-related technologies, such as biometric identification systems, but has not developed a plan for guiding airports with respect to future technology enhancements. While TSA is pursuing the procurement of several checkpoint technologies to address key existing vulnerabilities, it has not deployed technologies on a wide-scale basis, and has not yet developed and implemented technologies needed to screen air cargo. Further, TSA's efforts to develop security standards for surface transportation modes have been limited to passenger and freight rail, and TSA has not determined what its regulatory role will be with respect to commercial vehicles or highway infrastructure. A number of crosscutting issues have impeded DHS's and TSA's efforts to secure the transportation network, including the need to strengthen strategic planning and performance measurement, and more fully adopt and apply risk-based principles in the pursuit of its security initiatives.
Congress authorized State’s ATA program in 1983 through the Foreign Assistance Act. According to the legislation, the purpose of ATA is “(1) to enhance the antiterrorism skills of friendly countries by providing training and equipment to deter and counter terrorism; (2) to strengthen the bilateral ties of the United States with friendly governments by offering concrete assistance in this area of great mutual concern; and (3) to increase respect for human rights by sharing with foreign civil authorities modern, humane, and effective antiterrorism techniques.” ATA offers a wide range of counterterrorism assistance to partner nations, but most assistance consists of (1) training courses on tactical and strategic counterterrorism issues and (2) grants of counterterrorism equipment, such as small arms, bomb detection equipment, vehicles, and computers. ATA curricula and training focus on enhancing critical counterterrorism capabilities, which cover issues such as crisis management and response, cyberterrorism, dignitary protection, and related areas. According to DS/T/ATA, all its courses emphasize law enforcement under the rule of law and sound human rights practices. ATA is State’s largest counterterrorism program and receives appropriations under the Nonproliferation, Anti-Terrorism, Demining, and Related Programs account. Fiscal year 2002 appropriations for ATA increased to about $158 million—over six times the level of funding appropriated in fiscal year 2000. Appropriations have fluctuated since fiscal year 2002, but increased to almost $171 million in fiscal year 2007. From fiscal years 2002 to 2007, program assistance for the top 10 recipients of ATA allocations ranged from about $11 million to about $78 million. The top 10 recipients represented about 57 percent of ATA funding allocated for training and training-related activities over the 6-year period. 
ATA funding for the other 89 partner nations that received assistance during this period ranged from $9,000 to about $10.7 million. The Coordinator for Counterterrorism, the head of S/CT, is statutorily charged with the overall supervision (including policy oversight of resources) and coordination of the U.S. government’s counterterrorism activities. The broadly mandated role of the Assistant Secretary for Diplomatic Security, the head of the Bureau of Diplomatic Security, includes implementing security programs to protect diplomatic personnel and advise chiefs of mission on security matters. Specific roles and responsibilities for S/CT and DS/T/ATA regarding ATA are described in a 1991 internal policy guidance memorandum, the Omnibus Diplomatic Security Act of 1986, and incorporated into State’s Foreign Affairs Manual. S/CT is responsible for leading the initial assessment of a partner nation’s counterterrorism needs, and DS/T/ATA is responsible for developing annual, country-specific plans. Under current program operations, DS/T/ATA conducts an initial assessment of a new participant nation’s counterterrorism capabilities, and conducts subsequent assessments—referred to as program reviews—every 2 to 3 years thereafter. In general, the needs assessments include input from the embassy teams, but the assessments themselves are conducted by technical experts contracted by DS/T/ATA. According to DS/T/ATA, the purpose of the needs assessment and program review process is to determine the forms of assistance for a partner nation to detect, deter, deny, and defeat terrorism; and to evaluate program effectiveness. S/CT provides minimal policy guidance to DS/T/ATA to help determine assistance priorities and ensure that it supports broader U.S. policy goals. In addition, S/CT and DS/T/ATA did not systematically use country-specific needs assessments and program reviews to plan what types of assistance to provide partner nations in accordance with State policy guidance. 
The assessments we reviewed had weaknesses and inconsistencies. According to State officials, S/CT places countries on a tiered list in one of four priority categories based on criteria that address several factors, including country-specific threats and the level and depth of diplomatic and political engagement in a country. State officials indicated that other factors also may be considered in determining whether and where a country is placed on the list, such as the presence of a U.S. military base or a planned international sporting or cultural event with U.S. participation. Since 2006, S/CT has reviewed and discussed the tiered list—including changes, additions, or deletions—with DS/T/ATA during quarterly meetings. In addition to the quarterly meetings, an S/CT official told us that they had established a series of regional roundtable discussions in 2006 between S/CT regional subject experts and DS/T/ATA counterparts. According to the S/CT official, the roundtables were intended as a means of identifying priority countries and their counterterrorism needs for purposes of developing budget requests. S/CT provides little guidance to DS/T/ATA beyond the tiered list, although the 1991 State policy guidance memorandum states that S/CT’s written policy guidance for the program should include suggested country training priorities. While S/CT provides some additional guidance to DS/T/ATA during quarterly meetings and on other occasions, DS/T/ATA officials in headquarters and the field stated they received little or no guidance from S/CT beyond the tiered list. As a result, neither S/CT nor DS/T/ATA could ensure that program assistance provided to specific countries supports broader U.S. antiterrorism policy goals. Other factors beyond S/CT’s tiered list of countries, such as unforeseen events or new governmental initiatives, also influence which countries receive program assistance. 
We found that 10 countries on the tiered list did not receive ATA assistance in fiscal year 2007, while 13 countries not on the tiered list received approximately $3.2 million. S/CT and DS/T/ATA officials stated that assistance does not always align with the tiered list because U.S. foreign policy objectives sometimes cause State, in consultation with the President’s National Security Council, to provide assistance to a non-tiered-list country. According to the 1991 State policy guidance memorandum and DS/T/ATA standard operations procedures, ATA country-specific needs assessments and program reviews are intended to guide program management and planning. However, S/CT and DS/T/ATA did not systematically use the assessments to determine what types of assistance to provide to partner nations or develop ATA country-specific plans. Although the 1991 State policy memorandum states that S/CT should lead the assessment efforts, a senior S/CT official stated that S/CT lacks the capacity to do so. As a result, DS/T/ATA has led interagency assessment teams in recent years, but the assessments and recommendations for types of assistance to be provided may not fully reflect S/CT policy guidance concerning overall U.S. counterterrorism priorities. DS/T/ATA officials responsible for five of the top six recipients of ATA support—Colombia, Kenya, Indonesia, Pakistan, and the Philippines—did not consistently use ATA country needs assessments and program reviews in making program decisions or to create annual country assistance plans. In some instances, DS/T/ATA officials responsible for in-country programs had not seen the latest assessments for their respective countries, and some said that the assessments they had reviewed were either not useful or that they were used for informational purposes only. 
The Regional Security Officer, Deputy Regional Security Officer, and DS/T/ATA Program Manager for Kenya had not seen any of the assessments that had been conducted for the country since 2000. Although the in-country program manager for Kenya was familiar with the assessments from her work in a previous position with DS/T/ATA, she stated that in general, the assessments were not very useful for determining what type of assistance to provide. She said that the initial needs assessment for Kenya failed to adequately consider local needs and capacity. The Regional Security Officer and Assistant Regional Security Officer for Indonesia stated they had not seen the latest assessment for the country. The DS/T/ATA program manager for Indonesia said that he recalled using one of the assessments as a “frame of reference” in making program and resource decisions. The in-country program manager also recalled seeing one of the assessments, but stated that he did not find the assessment useful given the changing terrorist landscape; therefore, he did not share it with his staff. The DS/T/ATA Program Manager for Pakistan stated that decisions on the types of assistance to provide in Pakistan were based primarily on the knowledge and experience of in-country staff regarding partner nation needs, rather than the needs assessments or program reviews. He added that he did not find the assessments useful, as the issues identified in the latest (2004) assessment for the country were outdated. We reviewed 12 of the 21 ATA country-specific needs assessments and program reviews that, according to ATA annual reports, DS/T/ATA conducted between 2000 and 2007 for five of the six in-country programs. The assessments and reviews generally included a range of recommendations for counterterrorism assistance, but did not prioritize assistance to be provided or include specific timeframes for implementation. 
Consequently, the assessments did not consistently provide a basis for targeting program assistance to the areas of a partner nation’s greatest counterterrorism assistance need. Only two of the assessments—a 2000 needs assessment for Indonesia and a 2003 assessment for Kenya—prioritized the recommendations, although a 2004 assessment for Pakistan and a 2005 assessment for the Philippines listed one or two recommendations as priority ATA efforts. In addition, the information included in the assessments was not consistent and varied in linking recommendations to capabilities. Of the 12 assessments we reviewed:

- Nine included narrative on a range of counterterrorism capabilities, such as border security and explosives detection, but the number of capabilities assessed ranged from 5 to 25. Only four of the assessments that assessed more than one capability linked recommendations provided to the relevant capabilities.
- Six included capability ratings, but the types of ratings used varied. For example, a 2003 assessment for Colombia rated eight capabilities from 1 through 5, but the 2004 assessment rated 24 capabilities, using poor, low, fair, or good.
- Two used a format that DS/T/ATA began implementing in 2001. The assessments following the new format generally included consistent types of information and clearly linked recommendations provided to an assessment of 25 counterterrorism capabilities. However, they did not prioritize recommendations or include specific timeframes for implementing the recommendations.

Although the 1991 State policy memorandum states that DS/T/ATA should create annual country assistance plans that specify training objectives and assistance to be provided based upon the needs assessments and program reviews, we found that S/CT and DS/T/ATA did not systematically use the assessments to create annual plans for the five in-country programs. 
DS/T/ATA officials we interviewed regarding the five in-country programs stated that in lieu of relying on the assessments or country assistance plans, program and resource decisions were primarily made by DS/T/ATA officials in the field based on their knowledge and experience regarding partner nation needs. Some DS/T/ATA officials said they did not find the country assistance plans useful. The program manager for Pakistan stated that he used the country assistance plan as a guide, but found that it did not respond to changing needs in the country. The ATA program manager for Kenya said that he had not seen a country assistance plan for that country. We requested ATA country assistance plans conducted during fiscal years 2000-2006 for the five in-country programs included in our review, but S/CT and DS/T/ATA only provided three plans completed for three of the five countries. Of these, we found that the plans did not link planned activities to recommendations provided in the needs assessments and program reviews. For example, the plan for the Philippines included a brief reference to a 2005 needs assessment, but the plan did not identify which recommendations from the 2005 assessment were intended to be addressed by current or planned efforts. S/CT has mechanisms to coordinate the ATA program with other U.S. government international counterterrorism training assistance and to help avoid duplication of efforts. S/CT chairs biweekly interagency working group meetings of the Counterterrorism Security Group’s Training Assistance Subgroup to provide a forum for high-level information sharing and discussion among U.S. agencies implementing international counterterrorism efforts. S/CT also established the Regional Strategic Initiative in 2006 to coordinate regional counterterrorism efforts and strategy. S/CT described the Regional Strategic Initiative as a series of regionally based, interagency meetings hosted by U.S. 
embassies to identify key regional counterterrorism issues and develop a strategic approach to addressing them, among other goals. In the four countries we visited, we did not find any significant duplication or overlap among U.S. agencies’ country-specific training programs aimed at combating terrorism. Officials we met with in each of these countries noted that they participated in various embassy working group meetings, such as Counterterrorism Working Group and Law Enforcement Working Group meetings, during which relevant agencies shared information regarding operations and activities at post. DS/T/ATA officials also coordinated ATA with other counterterrorism efforts through daily informal communication among cognizant officials in the countries we visited. In response to concerns that ATA lacked elements of adequate strategic planning and performance measurement, State took action to define goals and measures related to the program’s mandated objectives. S/CT and DS/T/ATA, however, did not systematically assess sustainability—that is, the extent to which assistance has enabled partner nations to achieve and maintain advanced counterterrorism capabilities. S/CT and DS/T/ATA lacked clear measures and processes for assessing sustainability, and program managers did not consistently include sustainability in ATA planning. State did not have measurable performance goals and outcomes related to the mandated objectives for ATA prior to fiscal year 2003, but has recently made some progress to address the deficiency, which had been noted in reports by State’s Office of Inspector General. Similarly, State developed specific goals and measures for each of the program’s mandated objectives in response to a 2003 Office of Management and Budget assessment. 
Since fiscal year 2006, State planning documents, including department and bureau-level performance plans, have stated that enabling partner nations to achieve advanced and sustainable counterterrorism capabilities is a key outcome. S/CT and DS/T/ATA officials further confirmed that sustainability is the principal intended outcome and focus of program assistance. In support of these efforts, DS/T/ATA appointed a Sustainment Manager in November 2006 to, among other things, coordinate with other DS/T/ATA divisions to develop recommendations and plans to assist partner nations in developing sustainable counterterrorism capabilities. Despite progress towards establishing goals and intended outcomes, State had not developed clear measures and a process for assessing sustainability and had not integrated the concept into program planning. The Government Performance and Results Act of 1993 requires agencies in charge of U.S. government programs and activities to identify goals and report on the degree to which goals are met. S/CT and DS/T/ATA officials noted the difficulty in developing direct quantitative measures of ATA outcomes related to partner nations’ counterterrorism capabilities. Our past work also has stressed the importance of establishing program goals, objectives, priorities, milestones, and measures to use in monitoring performance and assessing outcomes as critical elements of program management and effective resource allocation. We found that the measure for ATA’s principal intended program outcome of sustainability is not clear. In its fiscal year 2007 Joint Performance Summary, State reported results and future year targets for the number of countries that had achieved an advanced, sustainable level of counterterrorism capability. According to the document, partner nations that achieve a sustainable level of counterterrorism would graduate from the program and no longer receive program assistance. 
However, program officials in S/CT and DS/T/ATA directly responsible for overseeing ATA were not aware that the Joint Performance Summary listed numerical targets and past results for the number of partner nations that had achieved sustainability, and could not provide an explanation of how State assessed the results. DS/T/ATA’s Sustainment Manager also could not explain how State established and assessed the numerical targets in the reports. The Sustainment Manager further noted that, to his knowledge, S/CT and DS/T/ATA had not yet developed systematic measures of sustainability. DS/T/ATA’s mechanism for evaluating partner nation capabilities did not include guidance or specific measures to assess sustainability. According to program guidance and DS/T/ATA officials, needs assessments and program reviews are intended to establish a baseline of a partner nation’s counterterrorism capabilities and quantify progress through subsequent reviews. DS/T/ATA officials also asserted that the process is intended to measure the results of program assistance. However, the process did not explicitly address sustainability, and provided no specific information or instruction regarding how reviewers are to assess sustainability. Moreover, the process focused on assessing a partner nation’s overall counterterrorism capabilities, but did not specifically measure the results of program assistance. DS/T/ATA had not systematically integrated sustainability into country- specific assistance plans, and we found a lack of consensus among program officials about how to address the issue. In-country program managers, embassy officials, instructors, and partner nation officials we interviewed held disparate views on how to define sustainability across all ATA participant countries, and many were not aware that sustainability was the intended outcome. 
Several program officials stated that graduating a country and withdrawing or significantly reducing program assistance could result in a rapid decline in the partner nation's counterterrorism capabilities, and could undermine other program objectives, such as improving bilateral relations. Further, although State has listed sustainability in State-level planning documents since 2006, S/CT and DS/T/ATA had not issued guidance on incorporating sustainability into country-specific planning, and none of the country assistance plans we reviewed consistently addressed the outcome. As a result, the plans did not include measurable annual objectives targeted at enabling the partner nation to achieve sustainability. For example, Colombia's assistance plan listed transferring responsibility for the antikidnapping training to the Colombian government and described planned activities to achieve that goal. However, the plan did not include measurable objectives to determine whether activities achieved intended results. Since 1996, State has not complied with a congressional mandate to report to Congress on U.S. international counterterrorism assistance. Additionally, State's annual reports on ATA contained inaccurate data regarding basic program information, did not provide systematic assessments of program results, and lacked other information necessary to evaluate program effectiveness. In 1985, Congress amended the Foreign Assistance Act to require the Secretary of State to report on all assistance related to international terrorism provided by the U.S. government during the preceding fiscal year. Since 1996, State has submitted ATA annual reports rather than the broader report required by the statute. An S/CT official noted confusion within State over what the statute required and asserted that the ATA annual report, which is prepared by DS/T/ATA, and State's annual "Patterns of Global Terrorism" report were sufficiently responsive to congressional needs. 
He further noted that, in his view, it would be extremely difficult for State to compile and report on all U.S. government terrorism assistance activities, especially given the significant growth of agencies' programs since 2001. Officials in State's Bureau of Legislative Affairs indicated that, to their knowledge, they had never received an inquiry from congressional staff about the missing reports. Recent ATA annual reports have contained inaccurate data relating to basic program information on numbers of students trained and courses offered. For example:

Afghanistan. According to annual reports for fiscal years 2002 to 2005, 15 Afghan students were trained as part of a single training event over the 4-year period. DS/T/ATA subsequently provided us with data for fiscal year 2005, which corrected the participation total in that year from 15 participants in 1 training event to 1,516 participants in 12 training events.

Pakistan. According to the fiscal year 2005 ATA annual report, ATA delivered 17 courses to 335 participants in Pakistan. Supporting tables in the same report listed 13 courses provided to 283 participants, and a summary report provided to us by DS/T/ATA reported 13 courses provided to 250 course participants.

DS/T/ATA officials acknowledged the discrepancies and noted that similar inaccuracies could be presumed for prior years and for other partner nations. The officials indicated that inaccuracies and omissions in reports of the training participants and events were due to a lack of internal policies and procedures for recording and reporting program data. In the absence of documented policies and procedures, staff developed various individual processes for collecting the information, which resulted in flawed data reporting. Additionally, DS/T/ATA officials told us that its inadequate information management system and a lack of consistent data collection procedures also contributed to inaccurate reporting. 
We reviewed ATA annual reports for fiscal years 1997 through 2005, and found that the reports varied widely in terms of content, scope, and format. Moreover, the annual reports did not contain systematic assessments of program performance or consistent information on program activity, such as number and type of courses delivered, types of equipment provided, and budget activity associated with program operations. In general, the reports contained varying levels of detail on program activity, and provided only anecdotal examples of program successes from a variety of sources, including U.S. embassy officials, ATA instructors, and partner nation officials. DS/T/ATA program officials charged with compiling the annual reports for the past 3 fiscal years noted that DS/T/ATA did not have guidance on the scope, content, or format for the reports. Although ATA plays a central role in State's broader effort to fight international terrorism, deficiencies in how the program is guided, managed, implemented, and assessed could limit the program's effectiveness. Specifically, minimal guidance from S/CT makes it difficult to determine the extent to which program assistance directly supports broader U.S. counterterrorism policy goals. Additionally, deficiencies with DS/T/ATA's needs assessments and program reviews may limit their utility as a tool for planning assistance and prioritizing among several partner nations' counterterrorism needs. As a result, the assessments and reviews are not systematically linked to resource allocation decisions, which may limit the program's ability to improve partner nations' counterterrorism capabilities. Although State has made some progress in attempting to evaluate and quantitatively measure program performance, ATA still lacks a clearly defined, systematic assessment and reporting of outcomes, which makes it difficult to determine the overall effectiveness of the program. 
This deficiency, along with State's noncompliance with mandated reporting requirements, has resulted in Congress having limited and incomplete information on U.S. international counterterrorism assistance and ATA efforts. Such information is necessary to determine the most effective types of assistance the U.S. government can provide to partner nations in support of the U.S. national security goal of countering terrorism abroad. In our February 2008 report, we suggested that Congress reconsider the requirement that the Secretary of State provide an annual report on the nature and amount of U.S. government counterterrorism assistance provided abroad, given the broad changes in the scope and nature of U.S. counterterrorism assistance abroad and the fact that the report has not been submitted since 1996. We also recommended that the Secretary of State take the following four actions:

1. Revisit and revise internal guidance (the 1991 State policy memorandum and Foreign Affairs Manual, in particular) to ensure that the roles and responsibilities for S/CT and DS/T/ATA are still relevant and better enable State to determine which countries should receive assistance and what type, and allocate limited ATA resources.

2. Ensure that needs assessments and program reviews are both useful and linked to ATA resource decisions and development of country-specific assistance plans.

3. Establish clearer measures of sustainability, and refocus the process for assessing the sustainability of partner nations' counterterrorism capabilities. The revised evaluation process should not only include an overall assessment of partner nation counterterrorism capabilities, but also provide guidance for assessing the specific outcomes of ATA.

4. Comply with the congressional mandate to report to Congress on U.S. international counterterrorism assistance. 
In commenting on our report, State agreed overall with our principal findings and recommendations to improve its ATA program guidance, the needs assessment and program review process, and its assessments of ATA program outcomes. State noted that the report highlighted the difficulties in assessing the benefits of developing and improving long-term antiterrorism and law enforcement relationships with foreign governments. State also outlined a number of ongoing and planned initiatives to address our recommendations. As noted in our report, we will follow up with State to ensure that these initiatives have been completed, as planned. Although State supported the matter we suggested for congressional consideration, it did not specifically address our recommendation that it comply with the congressional mandate to report on U.S. counterterrorism assistance. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please contact Charles Michael Johnson, Jr. (202) 512-7331 or [email protected]. Albert H. Huntington, III, Assistant Director; Matthew E. Helm; Elisabeth R. Helmer; and Emily Rachman made key contributions in preparing this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of State's (State) Antiterrorism Assistance (ATA) program's objectives are to provide partner nations with counterterrorism training and equipment, improve bilateral ties, and increase respect for human rights. State's Office of the Coordinator for Counterterrorism (S/CT) provides policy guidance, and its Bureau of Diplomatic Security, Office of Antiterrorism Assistance (DS/T/ATA), manages program operations. GAO assessed (1) State's guidance for determining ATA priorities, (2) how State coordinates ATA with other counterterrorism programs, (3) the extent to which State established ATA program goals and measures, and (4) State's reporting on U.S. counterterrorism assistance. This statement is based on a February 2008 GAO report titled Combating Terrorism: State Department's Antiterrorism Program Needs Improved Guidance and More Systematic Assessments of Outcomes, GAO-08-336 (Washington, D.C.: Feb. 29, 2008). S/CT provides minimal guidance to help prioritize ATA program recipients, and S/CT and DS/T/ATA did not systematically align ATA assistance with U.S. assessments of foreign partner counterterrorism needs. S/CT provided policy guidance to DS/T/ATA through quarterly meetings and a tiered list of priority countries, but the list did not provide guidance on country counterterrorism-related program goals, objectives, or training priorities. S/CT and DS/T/ATA also did not consistently use country-specific needs assessments and program reviews to plan assistance. S/CT had established mechanisms to coordinate the ATA program with other U.S. international efforts to combat terrorism. S/CT held interagency meetings with officials from the Departments of State, Defense, Justice, and the Treasury and other agencies, as well as ambassador-level regional strategic coordinating meetings. GAO did not find any significant duplication or overlap among the various U.S. international counterterrorism efforts. 
State had made progress in establishing goals and intended outcomes for the ATA program, but S/CT and DS/T/ATA did not systematically assess the outcomes and, as a result, could not determine the effectiveness of program assistance. For example, although sustainability is a principal focus, S/CT and DS/T/ATA had not set clear measures of sustainability or integrated sustainability into program planning. State reporting on U.S. counterterrorism assistance abroad was incomplete and inaccurate. S/CT had not provided a congressionally mandated annual report to Congress on U.S. government-wide assistance related to combating international terrorism since 1996. Since 1996, S/CT has submitted to Congress only annual reports on the ATA program, which cover program information such as the number of students trained and courses offered. Moreover, these reports contained inaccurate program information. Additionally, the reports lacked comprehensive information on the results of program assistance that would be useful to Congress.
There are two approaches for reorganizing or terminating a large financial company. Large financial companies may be reorganized or liquidated under a judicial bankruptcy process or resolved under special legal and regulatory resolution regimes that have been created to address insolvent financial entities such as insured depository institutions and insurance companies. Bankruptcy is a federal court procedure, the goal of which is to help individuals and businesses eliminate or restructure debts they cannot repay and help creditors receive some payment in an equitable manner. Generally the filing of a bankruptcy petition operates as an automatic stay; that is, it stops most lawsuits, foreclosures, and other collection activities against the debtor. Equitable treatment of creditors means all creditors with substantially similar claims are classified similarly and receive the same treatment. For example, a class of secured creditors— those with liens or other secured claims against the debtor’s property— will receive similar treatment as to their secured claims. Business debtors may seek liquidation, governed primarily by Chapter 7 of the Code, or reorganization, governed by Chapter 11. Proceedings under Chapters 7 and 11 can be voluntary (initiated by the debtor) or involuntary (generally initiated by at least three creditors holding at least a certain minimum amount of claims against the debtor). In an involuntary proceeding, the debtor can defend against the proceeding, including presenting objections. The judge subsequently decides whether to grant the creditors’ request and permit the bankruptcy to proceed, dismiss the request, or enter any other appropriate order. 
A Chapter 7 proceeding is a court-supervised procedure by which a trustee takes over the assets of the debtor's estate subject to limited exemptions, reduces them to cash, and makes distributions to creditors, subject to the rights of secured creditors to the collateral securing their loans to the debtor. A reorganization proceeding under Chapter 11 allows debtors to continue some or all of their operations subject to court supervision as a way to satisfy creditor claims. The debtor typically remains in control of its assets, and is called a debtor-in-possession (DIP). Under certain circumstances, the court can direct the U.S. Trustee to appoint a Chapter 11 trustee to take over the affairs of the debtor. As shown in figure 1, a firm going through a Chapter 11 bankruptcy generally will pass through several stages. Among these are:

First-day motions. The most common first-day motions relate to the continued operation of the debtor's business and involve matters such as requests to use cash collateral—liquid assets on which secured creditors have a lien or claim—and obtaining financing, if any.

Disclosure. The disclosure statement must include information on the debtor's assets, liabilities, and business affairs sufficient to enable creditors to make informed judgments about how to vote on the debtor's reorganization plan and must be approved by the bankruptcy court.

Plan of reorganization. A debtor has an exclusive right to file a plan of reorganization within the first 120 days of bankruptcy. The plan describes how the debtor intends to reorganize and treat its creditors. The plan divides claims against the debtor into separate classes and specifies the treatment each class will receive. The court may confirm the plan if, among other things, each class of allowed creditors has accepted the plan or the class is not impaired by the plan. 
If not all classes of impaired creditors vote to accept the plan, the court can still confirm the plan if it is shown that it is fair to all impaired creditors.

Reorganization. Possible outcomes, which can be used in combination, include (1) distribution under a plan of the proceeds of a pre-plan sale of the assets of the company (in whole or in part), sometimes called a section 363 sale. Section 363 of the Code permits sales of property of the estate that are free and clear of creditor claims; (2) liquidation of the company's assets with approval of the court, through means other than a 363 sale; and (3) reorganization of the company, in which it emerges from bankruptcy with new contractual rights and obligations that replace or supersede those it had before filing for bankruptcy protection.

The debtor, creditors, trustee, or other interested parties may initiate adversary proceedings—in effect, a lawsuit within the bankruptcy case—to preserve or recover money or property, to subordinate a claim of another creditor to their own claims, or for similar reasons. The U.S. bankruptcy system involves multiple federal entities. Bankruptcy courts are located in 90 federal judicial districts; however, as we reported in 2011, the Southern District of New York and the District of Delaware adjudicate a majority of larger corporate or business bankruptcy cases. The Judicial Conference of the United States serves as the judiciary's principal policymaking body and recommends national policies on all aspects of federal judicial administration. In addition, AOUSC serves as the central administrative support entity for the Judicial Conference and the federal courts, including bankruptcy courts. The Federal Judicial Center is the education and research agency for the federal courts and assists bankruptcy courts with reports and assessments relating to the administration and management of bankruptcy cases. Finally, the Department of Justice's U.S. 
Trustee Program and the judiciary's Bankruptcy Administrator Program oversee bankruptcy trustees and promote integrity and efficiency in the bankruptcy system by overseeing the administration of bankruptcy estates. (A preference action can be asserted for payments made to an insider within a year prior to the bankruptcy filing.) Large, complex financial companies that are eligible to file for bankruptcy generally file under Chapter 11 of the Code. Such companies operating in the United States engage in a range of financial services activities. Many are organized under both U.S. and foreign laws. The U.S. legal structure is frequently premised on a parent holding company owning regulated subsidiaries (such as depository institutions, insurance companies, broker-dealers, and commodity brokers) and nonregulated subsidiaries that engage in financial activities. Certain financial institutions may not file as debtors under the Code, and other entities face special restrictions in using the Code:

Insured depository institutions. Under the Federal Deposit Insurance Act, FDIC serves as the conservator or receiver for insured depository institutions placed into conservatorship or receivership under applicable law.

Insurance companies. Insurers generally are subject to oversight by state insurance commissioners, who have the authority to place them into conservatorship, rehabilitation, or receivership.

Broker-dealers. Broker-dealers can be liquidated under the Securities Investor Protection Act (SIPA) or under a special subchapter of Chapter 7 of the Code. However, broker-dealers may not file for reorganization under Chapter 11.

Commodity brokers. Commodity brokers, which include futures commission merchants, foreign futures commission merchants, clearing organizations, and certain other entities in the derivatives industry, can only use a special subchapter of Chapter 7 for bankruptcy relief.

Regulators often play a role in financial company bankruptcies. 
With the exception of CFTC and SEC, the Code does not explicitly name federal financial regulators as a party of interest with a right to be heard before the court. In practice, regulators frequently appear before the court in financial company bankruptcies. For example, as receiver of failed insured depository institutions, FDIC’s role in bankruptcies of bank holding companies is typically limited to that of creditor. CFTC has the express right to be heard and raise any issues in a case under Chapter 7. SEC has the same rights in a case under Chapter 11. SEC may become involved in a bankruptcy particularly if there are issues related to disclosure or the issuance of new securities. SEC and CFTC are, in particular, involved in Chapter 7 bankruptcies of broker-dealers and commodity brokers. In the event of a broker-dealer liquidation, pursuant to SIPA the bankruptcy court retains jurisdiction over the case and a trustee, selected by the Securities Investor Protection Corporation (SIPC), typically administers the case. SEC may participate in any SIPA proceeding as a party. The Code does not restrict the federal government from providing DIP financing to a firm in bankruptcy, and in certain cases it has provided such funding—for example, financing under the Troubled Asset Relief Program (TARP) in the bankruptcies of General Motors and Chrysler. The authority to make new financial commitments under TARP terminated on October 3, 2010. In July 2010, the Dodd-Frank Act amended section 13(3) of the Federal Reserve Act to prohibit the establishment of an emergency lending program or facility for the purpose of assisting a single and specific company to avoid bankruptcy. Nevertheless, the Federal Reserve may design emergency lending programs or facilities for the purpose of providing liquidity to the financial system. 
Although the automatic stay generally preserves assets and prevents creditors from taking company assets in payment of debts before a case is resolved and assets are systematically distributed, the stay is subject to exceptions, one of which can be particularly important in a financial institution bankruptcy. These exceptions—commonly referred to as the “safe harbor provisions”—pertain to certain financial and derivative contracts, often referred to as qualified financial contracts (QFC). The types of contracts eligible for the safe harbors are defined in the Code. They include derivative financial products, such as forward contracts and swap agreements that financial companies (and certain individuals and nonfinancial companies) use to hedge against losses from other transactions or speculate on the likelihood of future economic developments. Repurchase agreements, which are collateralized instruments that provide short-term financing for financial companies and others, also generally receive safe-harbor treatment. Under the safe-harbor provisions, most counterparties that entered into a qualifying transaction with the debtor may exercise certain contractual rights even if doing so otherwise would violate the automatic stay. In the event of insolvency or the commencement of bankruptcy proceedings, the nondefaulting party in a QFC may liquidate, terminate, or accelerate the contract, and may offset (net) any termination value, payment amount, or other transfer obligation arising under the contract when the debtor files for bankruptcy. That is, generally nondefaulting counterparties subtract what they owe the bankrupt counterparty from what that counterparty owes them (netting), often across multiple contracts. If the result is positive, the nondefaulting counterparties can sell any collateral they are holding to offset what the bankrupt entity owes them. 
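The close-out netting just described is, at bottom, simple arithmetic. The following is an illustrative sketch only: the function name, contract values, and collateral amounts are hypothetical and are not drawn from the Code or any actual case.

```python
# Illustrative close-out netting for qualified financial contracts (QFCs).
# All names and amounts here are hypothetical examples.

def close_out(contract_values, collateral_held):
    """Compute a nondefaulting counterparty's position at close-out.

    contract_values: per-contract amounts, positive where the bankrupt
    debtor owes the counterparty, negative where the counterparty owes
    the debtor.
    collateral_held: value of collateral the counterparty can liquidate.
    """
    # Net obligations across multiple contracts.
    net = sum(contract_values)
    if net <= 0:
        # On a net basis the counterparty owes the estate; no claim arises.
        return {"unsecured_claim": 0.0, "owed_to_estate": -net}
    # Sell collateral to offset what the bankrupt entity owes.
    covered = min(net, collateral_held)
    # Any remaining shortfall becomes an unsecured claim.
    return {"unsecured_claim": net - covered, "owed_to_estate": 0.0}

# A counterparty owed 100 and 40 on two swaps, owing 30 on a third,
# holding 80 in collateral: net 110, of which 80 is covered,
# leaving a 30 unsecured claim.
result = close_out([100.0, 40.0, -30.0], 80.0)
```

Here, any positive net amount is first offset against collateral, and whatever remains is labeled an unsecured claim, mirroring the sequence of netting, collateral liquidation, and residual claims described in the text.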
If that does not fully settle what they are owed, the nondefaulting counterparties are treated as unsecured creditors in any final liquidation or reorganization. OLA gives FDIC the authority, subject to certain constraints, to resolve large financial companies, including a bank holding company or a nonbank financial company designated for supervision by the Federal Reserve, outside of the bankruptcy process. This regulatory resolution authority allows for FDIC to be appointed receiver for a financial company if the Secretary of the Treasury, in consultation with the President, determines, upon the recommendation of two-thirds of the Board of Governors of the Federal Reserve and (depending on the nature of the financial firm) FDIC, SEC, or the Director of the Federal Insurance Office, among other things, that the firm’s failure and its resolution under applicable law, including bankruptcy, would have serious adverse effects on U.S. financial stability and no viable private-sector alternative is available to prevent the default. In December 2013, FDIC released for public comment a notice detailing a proposed single-point-of-entry (SPOE) approach to resolving a systemically important financial institution under OLA. Under the SPOE approach, as outlined, FDIC would be appointed receiver of the top-tier U.S. parent holding company of a covered financial company determined to be in default or in danger of default pursuant to the appointment process set forth in the Dodd-Frank Act. Immediately after placing the parent holding company into receivership, FDIC would transfer assets (primarily the equity and investments in subsidiaries) from the receivership estate to a bridge financial company. By allowing FDIC to take control of the firm at the parent holding company level, this approach could allow subsidiaries (domestic and foreign) carrying out critical services to remain open and operating. 
In a SPOE resolution, at the parent holding company level, shareholders would be wiped out, and unsecured debt holders would have their claims written down to reflect any losses that shareholders cannot cover. The resolution of globally active large financial firms is often associated with complex international, legal, and operational challenges. The resolution of failed financial companies is subject to different national frameworks. During the recent financial crisis, these structural challenges led to government rescues or disorderly liquidations of systemic firms. Insolvency laws vary widely across countries. The legal authorities of some countries are not designed to resolve problems in financial groups operating through multiple legal entities that span borders. Some resolution authorities may not encourage cooperative solutions with foreign resolution authorities. Regulatory and legal regimes may conflict. Depositor preference, wholesale funding arrangements, derivatives, and repurchase agreements are often treated differently among countries when a firm enters bankruptcy. Some resolution authorities may lack the legal tools or authority to share information with relevant foreign authorities about the financial group as a whole or its subsidiaries or branches. Country resolution authorities may have as their first responsibility the protection of domestic financial stability and minimization of any risk to public funds. For instance, if foreign authorities did not have full confidence that national and local interests would be protected, the assets of affiliates or branches of a U.S.-based financial institution chartered in other countries could be ring-fenced, or isolated, and wound down separately under the insolvency laws of those countries, thus complicating home-country resolution efforts. In 2005, the United States adopted Chapter 15 of the U.S. Bankruptcy Code. 
Chapter 15 is based on the Model Law on Cross-Border Insolvency of the United Nations Commission on International Trade Law (UNCITRAL). The model law is intended to promote coordination between courts in different countries during insolvencies and has been adopted in 21 jurisdictions. More than 450 Chapter 15 cases have been filed since its adoption, with more than half filed in the Southern District of New York and the District of Delaware. Among the stated objectives of Chapter 15 are promoting cooperation between U.S. and foreign parties involved in a cross-border insolvency case, providing for a fair process that protects all creditors, and facilitating the rescue of a distressed firm. In pursuit of these goals, Chapter 15 authorizes several types of coordination, including U.S. case trustees or other authorized entities operating in foreign countries on behalf of a U.S. bankruptcy estate; foreign representatives having direct access to U.S. courts, including the right to commence a proceeding or seek recognition of a foreign proceeding; and U.S. courts communicating information they deem important, coordinating the oversight of debtors’ activities, and coordinating proceedings. Chapter 15 excludes the same financial institutions that are generally not eligible to file as debtors under the Code (such as insured depository institutions and U.S. insurance companies), with the exception of foreign insurance companies. It also excludes broker-dealers that can be liquidated under SIPA or a special provision of Chapter 7 of the Code and commodity brokers that can be liquidated under a different special provision of Chapter 7. Based on the UNCITRAL model law, Chapter 15 contains a public policy exception that allows a U.S. 
court to refuse cooperation and coordination if doing so would be "manifestly contrary to the public policy of the United States." Since we last reported on financial company bankruptcies in July 2013, no changes have been made to Chapters 7, 11, or 15 of the Bankruptcy Code relating to large financial companies, although two bills were introduced in the 113th Congress that would have attempted to address challenges associated with the reorganization of large financial firms as governed by Chapter 11 of the Code. Neither bill was signed into law, and neither had been reintroduced in the current Congress as of March 12, 2015. The Taxpayer Protection and Responsible Resolution Act (S. 1861) was introduced in the Senate on December 19, 2013. The bill would have added a new chapter to the Code—"Chapter 14: Liquidation, Reorganization, or Recapitalization of a Covered Financial Corporation"—that would have generally applied to bank holding companies or corporations predominantly engaged in activities that the Federal Reserve Board has determined are financial in nature. Its provisions would have made changes to the role of regulators, changed the treatment of QFCs, and specifically designated judges to hear Chapter 14 cases, as the following examples illustrate. The proposal would have repealed the regulatory resolution regime in Title II of the Dodd-Frank Act—revoking FDIC's role as a receiver of a failed or failing financial company under OLA—and returned all laws changed by Title II to their pre-Title II state. The proposal would have allowed the Federal Reserve Board to commence an involuntary bankruptcy and granted the Federal Reserve Board the right to be heard before the court. The proposal would have allowed the court to transfer assets of the estate to a bridge company (on request of the Federal Reserve Board or the trustee and after notice and hearing and not less than 24 hours after the start of the case). 
The court would have been able to order transfer of assets to a bridge company only under certain conditions (including that a preponderance of evidence indicated the transfer was necessary to prevent imminent substantial harm to U.S. financial stability). FDIC also would have been granted the right to be heard before the court on matters related to the transfer of property to the bridge company. However, this proposal would have explicitly prohibited the Federal Reserve Board from providing DIP financing to a company in bankruptcy or to a bridge company and provided no specific alternative non-market source of funding. The Taxpayer Protection and Responsible Resolution Act (S. 1861) also would have changed the treatment of QFCs in bankruptcy. The rights to liquidate, terminate, offset, or net QFCs would have been stayed for up to 48 hours after bankruptcy filing (or the approval of the petition from the Federal Reserve Board). During the stay, the trustee would have been able to perform all payment and delivery obligations under the QFC that became due after the case commenced. The stay would have been terminated if the trustee failed to perform any payment or delivery obligation. Furthermore, QFCs could not have been transferred to the bridge company unless the bridge assumed all contracts with a counterparty. If transferred to the bridge company, the QFCs could not have been terminated or modified for certain reasons, including the fact that a bankruptcy filing occurred. Aside from the limited exceptions, QFC counterparties would have been free to exercise all of their pre-existing contractual rights, including termination. Finally, the Taxpayer Protection and Responsible Resolution Act (S. 1861) would have required the Chief Justice to designate no fewer than 10 bankruptcy judges with expertise in cases under Title 11 in which a financial institution is a debtor to be available to hear a Chapter 14 case.
Additionally, the Chief Justice would have been required to designate at least one district judge from each circuit to hear bankruptcy appeals under Title 11 concerning a covered financial corporation. A second bankruptcy reform proposal, the Financial Institution Bankruptcy Act of 2014 (H.R. 5421), was passed by voice vote by the House of Representatives on December 1, 2014, and would have added a new Subchapter V under Chapter 11. Generally, the proposed subchapter would have applied to bank holding companies or corporations with $50 billion or greater in total assets and whose activities, along with those of their subsidiaries, are primarily financial in nature. The Financial Institution Bankruptcy Act (H.R. 5421) contained provisions similar or identical to those in the Taxpayer Protection and Responsible Resolution Act (S. 1861) that would have affected the role of regulators, treatment of QFCs, and designation of judges. For example, this proposal would have allowed an involuntary bankruptcy to be commenced by the Federal Reserve Board and allowed for the creation of a bridge company to which assets of the debtor holding company could be transferred. This proposal also would have granted the Federal Reserve Board and FDIC the right to be heard before the court, as well as the Office of the Comptroller of the Currency and SEC (which are not granted this right under the Taxpayer Protection and Responsible Resolution Act). The changes to the treatment of QFCs under this proposal were substantively similar to those under the Taxpayer Protection and Responsible Resolution Act (S. 1861). In addition, the Financial Institution Bankruptcy Act (H.R. 5421) would have required the Chief Justice to designate no fewer than 10 bankruptcy judges to be available to hear a Subchapter V case. The Chief Justice also would have been required to designate not fewer than three judges of the court of appeals in not fewer than four circuits to serve on an appellate panel.
Although the two bills have similarities, there are significant differences. For example, the Financial Institution Bankruptcy Act (H.R. 5421) would not have repealed Title II of the Dodd-Frank Act. Instead, Title II would have remained an alternative to resolving a firm under the Bankruptcy Code. Also, the Financial Institution Bankruptcy Act (H.R. 5421) would not have restricted the Federal Reserve Board from providing DIP financing to a financial firm under the proposed subchapter. Furthermore, the Financial Institution Bankruptcy Act (H.R. 5421) would have given the court broad power in the confirmation of the bankruptcy plan to consider the serious adverse effect that any decision in connection with Subchapter V might have on financial stability in the United States. By contrast, the Taxpayer Protection and Responsible Resolution Act (S. 1861) mentioned financial stability as a consideration in specific circumstances, such as whether the Federal Reserve Board could initiate an involuntary bankruptcy under Chapter 14, or whether the court could order a transfer of the debtor's property to the bridge company. Certain provisions in these bills resembled those in OLA and may have facilitated a resolution strategy similar to FDIC's SPOE strategy under OLA. For example, each of the bankruptcy reform bills and FDIC's SPOE strategy under OLA would have allowed for the creation of a bridge company, to which assets, financial contracts, and some legal entities of the holding company would have been transferred, allowing certain subsidiaries to maintain operations. In addition, OLA, like the bills, included a temporary stay for QFCs. OLA uses a regulatory approach to resolution, while the bankruptcy reform bills in the 113th Congress would have maintained a judicial approach to resolution. Some experts have expressed concern that a regulatory resolution may not adequately ensure creditors' rights to due process.
For example, experts attending GAO's 2013 bankruptcy reform roundtables noted that if preferences were given to some counterparties or creditors during a temporary stay, other counterparties or creditors would have the right to take action to recover value later in the process, as opposed to having a judge consider the views of all of the parties prior to making any decisions. However, as we reported in July 2013, other experts have stated that the judicial process of bankruptcy does not contemplate systemic risk, nor does it have some of the tools available for minimizing the systemic risk associated with the failure of a systemically important financial institution. For example, to act quickly in cases involving large and complex financial companies, courts might need to shorten notice periods and limit parties' right to be heard, which could compromise due process and creditor rights. In the United States, the judicial process under bankruptcy remains the presumptive method for resolving financial institutions, even those designated as systemically important. A third proposal would have more narrowly amended the Code. The 21st Century Glass-Steagall Act of 2013 (S. 1282 in the Senate and H.R. 3711 in the House) contained a provision that would have repealed all safe-harbor provisions for QFCs. This legislative proposal was not signed into law and, as of March 12, 2015, had not been re-introduced in the current Congress. Some experts have identified the safe-harbor treatment of QFCs under the Code as a challenge to an orderly resolution in bankruptcy. For example, safe-harbor treatment can create significant losses to the debtor's estate, particularly for financial institution debtors that often are principal users of these financial products.
As we previously reported in July 2011, some experts we interviewed suggested that modifying the safe-harbor provisions might help to avoid or mitigate the precipitous decline of the asset values typical in financial institution bankruptcies. For example, these experts suggested that the treatment of QFCs in the Lehman bankruptcy contributed to a significant and rapid loss of asset values to the estate. Other experts we spoke with in 2011 suggested that safe-harbor treatment might lessen market discipline. Because counterparties to QFCs may close out their contracts even if doing so would otherwise violate the automatic stay, their incentive to monitor each other's risk could be reduced. Additionally, as we reported in July 2013, attendees of our roundtable discussions on bankruptcy reform noted that the safe harbors lead to a larger derivatives market and greater reliance on short-term funding because QFCs would not be subject to a stay, which could increase systemic risk in the financial system. However, others argue that a repeal of the safe-harbor provisions could have adverse effects. As we previously reported in July 2011, these experts assert that subjecting any QFCs to the automatic stay in bankruptcy would freeze many assets of the counterparties of the failed financial institution, causing a chain reaction and a subsequent systemic financial crisis. In January 2011, regulatory officials we spoke with also told us that the safe-harbor provisions uphold market discipline through margin, capital, and collateral requirements. They said that the requirement for posting collateral limits the amount of risk counterparties are willing to undertake. In addition, during the 2013 expert roundtable on financial company bankruptcies, one expert noted that one of the goals of safe harbors is to limit market turmoil during a bankruptcy—that is, they are to prevent the insolvency of one firm from spreading to other firms.
In the United States, the presumptive mechanism to resolve a failed cross-border large financial company continues to be through the judicial bankruptcy process, though no statutory changes have been made to Chapter 15 of the Code or the U.S. judicial bankruptcy process to address impediments to an orderly resolution of a large, multinational financial institution. However, while some structural challenges discussed earlier remain, others, such as conflicting regulatory regimes and the treatment of cross-border derivatives, are being addressed through various efforts. For example, the Federal Reserve and FDIC have taken certain regulatory actions mandated by the Dodd-Frank Act toward facilitating orderly resolution, including efforts that could contribute to cross-border coordination. Specifically, certain large financial companies must provide the Federal Reserve and FDIC with periodic reports of their plans for rapid and orderly resolution in the event of material financial distress or failure under the Code. The resolution plans, or living wills, are to demonstrate how a company could be resolved in a rapid manner under the Code. FDIC and the Federal Reserve have said that the plans were expected to address potential obstacles to global cooperation, among other issues. In 2014, FDIC and the Federal Reserve sent letters to a number of large financial companies identifying specific shortcomings with the resolution plans that those firms will need to address in their 2015 submissions, due on or before July 1, 2015, for the first group of filers. International bodies have also focused on strengthening their regulatory structures to enable the orderly resolution of a failing large financial firm and have taken additional actions to facilitate cross-border resolutions.
In October 2011, the Financial Stability Board (FSB)—an international body that monitors and makes recommendations about the global financial system—issued a set of principles to guide the development of resolution regimes for financial firms active in multiple countries. For example, each jurisdiction should have the authority to exercise resolution powers over firms, jurisdictions should have policies in place so that authorities are not reliant on public bailout funds, and statutory mandates should encourage a cooperative solution with foreign authorities. In addition, in December 2013 the European Parliament and European Council reached agreement on the European Union’s (EU) Bank Recovery and Resolution Directive, which establishes requirements for national resolution frameworks for all EU member states and provides for resolution powers and tools. For example, member states are to appoint a resolution authority, institutions must prepare and maintain recovery plans, resolution authorities are to assess the extent to which firms are resolvable without the assumption of extraordinary financial support, and authorities are to cooperate effectively when dealing with the failure of cross-border banks. Unlike the United States, EU and FSB do not direct resolution authorities to use the bankruptcy process developed for corporate insolvency situations. In a letter to the International Swaps and Derivatives Association (ISDA) in 2013, FDIC, the Bank of England, BaFin in Germany, and the Swiss Financial Market Supervisory Authority called for changes in the exercise of termination rights and other remedies in derivatives contracts following commencement of an insolvency or resolution action. In October 2014, 18 major global financial firms agreed to sign a new ISDA Resolution Stay Protocol to facilitate the cross-border resolution of a large, complex institution. 
The protocol was published, and the 18 financial firms agreed to it, on November 12, 2014, and certain of its provisions became effective in January 2015. Generally, parties adhering to this protocol have agreed to be bound by certain limitations on their termination rights and other remedies in the event one of them becomes subject to certain resolution proceedings, including OLA. These stays are intended to give resolution authorities and insolvency administrators time to facilitate an orderly resolution of a troubled financial firm. The protocol also incorporates certain restrictions on creditor contractual rights that would apply when a U.S. financial holding company becomes subject to U.S. bankruptcy proceedings, including a stay on cross-default rights that would restrict the counterparty of a non-bankrupt affiliate of an insolvent U.S. financial holding company from immediately terminating its derivatives contracts with that affiliate. Finally, a United Nations working group (tasked with furthering adoption of the UNCITRAL Model Law) included the insolvency of large and complex financial institutions as part of its focus on cross-border insolvency. In 2010, Switzerland proposed that the working group study the feasibility of developing an international instrument for the cross-border resolution of large and complex financial institutions. The working group has acknowledged and has been monitoring the work undertaken by FSB, the Basel Committee on Banking Supervision, the International Monetary Fund, and EU. We provided a draft of this report to AOUSC, CFTC, the Departments of Justice and the Treasury, FDIC, the Federal Reserve, and SEC for review and comment. The agencies did not provide written comments. We received technical comments from the Department of the Treasury, FDIC, the Federal Reserve, and SEC, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, Director of the Administrative Office of the U.S. Courts, the Chairman of the Commodity Futures Trading Commission, Attorney General, the Secretary of the Treasury, the Chairman of the Federal Deposit Insurance Corporation, the Director of the Federal Judicial Center, the Chair of the Board of Governors of the Federal Reserve System, the Chair of the Securities and Exchange Commission, and other interested parties. The report also is available at no charge on the GAO web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Cindy Brown Barnes at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. In our July 2011 and July 2012 reports on the bankruptcy of financial companies, we reported on the status of the bankruptcy proceedings of, among other financial companies, Lehman Brothers Holdings Inc., MF Global, and Washington Mutual. In the 2011 report, we found that comprehensive data on the number of financial companies in bankruptcies were not readily available. We collected information to update the status of the bankruptcy proceedings for Lehman Brothers Holdings Inc., MF Global, and Washington Mutual. Since we last reported in July 2012, in each case, additional payments to creditors have been distributed and litigation with various parties is ongoing. Lehman Brothers Holdings Inc. (Lehman) was an investment banking institution that offered equity, fixed-income, trading, asset management, and other financial services. In 2008, Lehman was the fourth largest U.S. investment bank and had been in operation since 1850. It had 209 registered subsidiaries in 21 countries. On September 15, 2008, Lehman filed Chapter 11 cases in the U.S. Bankruptcy Court. 
Its affiliates filed for bankruptcy over subsequent months. Some of Lehman's affiliates also filed bankruptcy or insolvency proceedings in foreign jurisdictions. There are three different legal proceedings involving (1) the holding company, or LBHI; (2) the U.S. broker-dealer, or LBI; and (3) the U.K. broker-dealer, or LBIE. On September 19, 2008, Lehman's broker-dealer was placed into liquidation under the Securities Investor Protection Act (SIPA). The bankruptcy court approved the sale of LBI's assets to Barclays PLC on September 20, 2008—5 days after the filing of the LBHI Chapter 11 case. In March 2010, LBHI debtors filed their proposed Chapter 11 plan. In December 2010, a group of senior creditors filed an alternative plan. Since then, various plan amendments and counter plans were filed. In December 2011, the U.S. Bankruptcy Court for the Southern District of New York confirmed a reorganization plan for LBHI, and the plan took effect in March 2012. LBHI had more than 100,000 creditors. As of October 2, 2014, some $8.6 billion had been distributed to LBHI creditors in the nonpriority unsecured claims class. The Trustee of LBI has distributed more than $106 billion to 111,000 customers. As of September 2014, £34 billion had been distributed by the LBIE Administrator to counterparties in the House Estate (general unsecured estate) and the Trust Estate (Client Assets, Client Money and Omnibus Trust). In February 2015, the bankruptcy court approved a second interim distribution of $2.2 billion to general unsecured creditors with allowed claims. This would bring the total distributions to allowed general unsecured creditors to approximately 27 percent. There is ongoing litigation involving a breach of a swap with Giants Stadium, the payment of creditor committee members' legal fees, and transactions with foreign entities, according to an official of the U.S. Trustees Program.
Litigation concerning issues surrounding the sale of LBI assets to Barclays PLC also continues. On December 15, 2014, the SIPA Trustee filed a petition for a writ of certiorari with the U.S. Supreme Court seeking review of the lower court rulings that awarded $4 billion of margin cash assets to Barclays. MF Global Holdings Ltd. (MFGH) was one of the world's leading brokers in markets for commodities and listed derivatives. The firm was based in the United States and had operations in Australia, Canada, Hong Kong, India, Japan, Singapore, and the U.K. On October 31, 2011, MFGH and one of its affiliates filed Chapter 11 cases in the U.S. Bankruptcy Court for the Southern District of New York. In the months following, four other affiliates filed for relief in Bankruptcy Court. Also, on October 31, 2011, the Securities Investor Protection Corporation (SIPC) commenced a SIPA case against MF Global's broker-dealer subsidiary (MFGI). The SIPA trustee has been liquidating the firm's assets and distributing payments to its customers on a rolling basis pursuant to a claims resolution procedure approved by the bankruptcy court overseeing the case. MFGI was required to pay $1.2 billion in restitution to its customers as well as a $100 million penalty. In December 2014, CFTC obtained a federal court consent order against MFGH requiring it to pay $1.2 billion or the amount necessary in restitution to ensure the claims of MFGI are paid in full. The bankruptcy court confirmed a liquidation plan for MFGH on April 22, 2013, which became effective in June 2013. As of the end of 2013, the SIPA trustee reported the probability of a 100 percent recovery of allowed net equity claims for all commodities and securities customers of MFGI. As of mid-December 2014, 100 percent of the distributions through the SIPA trustee had been completed to substantially all categories of commodities and securities customers, and 39 percent of the first interim distribution on allowed unsecured claims had been paid.
The trustee started to make $551 million in distributions to general creditors on October 30, 2014. An interim payment of $518.7 million went to unsecured general claimants and covered 39 percent of their allowed claims. A reserve fund of $289.8 million was to be held for unresolved unsecured claims, and a reserve fund of $9.9 million was to be held for unresolved priority claims. In April 2014, the SIPA trustee began final distributions to all public customers. With this distribution, a total of $6.7 billion was to have been returned to over 26,000 securities and commodities futures customers. General creditor claims totaling more than $23 billion in asserted amounts, as well as substantial unliquidated claims, were filed in this proceeding as of the end of June 2014. As of December 2014, the SIPA trustee reported that of 7,687 general creditor claims asserted or reclassified from customer status, only 23 claims remained unresolved. Current litigation involves a malpractice complaint against PricewaterhouseCoopers (the company's former auditor) and an investigation of the officers, according to an official of the U.S. Trustees Program. Washington Mutual Inc. was a thrift holding company that had 133 subsidiaries. Its subsidiary Washington Mutual Bank was the largest savings and loan association in the United States prior to its failure. In the 9 days prior to receivership by the Federal Deposit Insurance Corporation (FDIC), there were more than $16.7 billion in depositor withdrawals. At the time of its filing, Washington Mutual had about $32.9 billion in total assets and total debt of about $8.1 billion. Its failure was the largest bank failure in U.S. history. On September 25, 2008, the Office of Thrift Supervision found Washington Mutual Bank to be unsafe and unsound, closed the bank, and appointed FDIC as the receiver.
FDIC as receiver then took possession of the bank's assets and liabilities and transferred substantially all the assets and liabilities to JPMorgan Chase for $1.9 billion. On September 26, 2008, Washington Mutual and its subsidiary WMI Investment Corporation filed Chapter 11 cases in the U.S. Bankruptcy Court for the District of Delaware. On March 12, 2010, Washington Mutual, FDIC, and JPMorgan Chase announced that they had reached a settlement on disputed property and claims. This was called the global settlement. On July 28, 2010, the bankruptcy court approved the appointment of an examiner, selected by the U.S. Trustee's office, to investigate the claims of various parties addressed by the global settlement. The seventh amended plan was confirmed by the court on February 24, 2012. The plan established a liquidating trust—the Washington Mutual Liquidating Trust (WMILT)—to make subsequent distributions to creditors on account of their allowed claims. Upon the effective date of the plan, Washington Mutual became a newly reorganized company, WMI Holdings Corp., consisting primarily of its subsidiary WMI Mortgage Reinsurance Company, Inc. In 2012, there was an initial distribution of $6.5 billion. Since that initial distribution, an additional $660 million has been distributed to creditors, according to officials at the U.S. Trustees Program, including a distribution of $78.4 million paid on August 1, 2014. In August 2013, WMILT, pursuant to an order by the U.S. Bankruptcy Court for the District of Delaware, filed a declaratory judgment action in the U.S. District Court for the Western District of Washington against FDIC, the Board of Governors of the Federal Reserve System (Federal Reserve), and 90 former employees who were also claimants in the bankruptcy proceeding.
Certain employee claimants have asserted cross-claims against FDIC and the Federal Reserve, contending that the banking agencies are without authority over WMILT to assert limits on payments from troubled institutions that are contingent on the termination of a person's employment, because WMILT is a liquidating trust. After the case was transferred to the U.S. Bankruptcy Court for the District of Delaware in July 2014 and all pending motions terminated, most of the parties stipulated to withdraw the reference to the bankruptcy court. FDIC moved to dismiss the complaint on September 5, 2014. The proposed order to withdraw the reference and the briefing on the motion to dismiss remain pending. Section 202(e) of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) mandated that we report on the orderliness and efficiency of financial company bankruptcies every year for 3 years after passage of the act, in the fifth year, and every 5 years thereafter. This report, the fourth in the series, examines (1) recent changes to the U.S. Bankruptcy Code (Code) and (2) efforts to improve cross-border coordination to facilitate the liquidation and reorganization of failed large financial companies under bankruptcy. For each of our objectives, we reviewed relevant regulations and laws, including the Code and the Dodd-Frank Act, as well as GAO reports that addressed bankruptcy issues and financial institution failures. We specifically reviewed the reports we issued during the first 3 years of the mandate as well as reports written under the same or similar mandates by the Administrative Office of the United States Courts (AOUSC) and the Board of Governors of the Federal Reserve System (Federal Reserve).
We interviewed officials from the following federal agencies due to their role in financial regulation and bankruptcy proceedings: AOUSC; the Commodity Futures Trading Commission (CFTC); Federal Deposit Insurance Corporation (FDIC); Department of Justice; Department of the Treasury (Treasury), including officials who support the Financial Stability Oversight Council (FSOC); Federal Reserve; and Securities and Exchange Commission (SEC). We also updated our review of published economic and legal research on the financial company bankruptcies that we had originally completed during the first year of the mandate (see appendix I). For the original search, we relied on Internet search databases (including EconLit and Proquest) to identify studies published or issued after 2000 through 2010. To address our first objective, we reviewed Chapters 7, 11, or 15 of the Bankruptcy Code for any changes. In addition, we reviewed legislation proposed in the 113th Congress that would change the Code for financial company bankruptcies. We also reviewed academic literature on financial company bankruptcies and regulatory resolution, transcripts of congressional hearings on bankruptcy reform, and transcripts from expert roundtables on bankruptcy reform that were hosted by GAO in 2013. To address our second objective, we reviewed Chapter 15 of the Bankruptcy Code, which relates to coordination between U.S. and foreign jurisdictions in bankruptcy cases in which the debtor is a company with foreign operations, for any changes. In addition, we sought information on U.S. and international efforts to improve coordination of cross-border resolutions from the federal agencies we interviewed. We also reviewed and analyzed documentary information from the Bank of England, Basel Committee on Banking Supervision, European Union, the Financial Stability Board, BaFin in Germany, International Monetary Fund, Swiss Financial Market Supervisory Authority, and the United Nations. 
To update the three bankruptcy cases of Lehman Brothers Holdings, Inc.; MF Global Holdings, Ltd.; and Washington Mutual, Inc. discussed in our July 2011 and July 2012 reports, we sought available information—for example, trustee reports and reorganization plans—on these cases from AOUSC, CFTC, the Department of Justice, FDIC, the Federal Reserve, SEC, and Treasury. In addition, we collected information from prior GAO reports, bankruptcy court documents, and the trustees in each case. To determine whether there were new bankruptcy filings of large financial companies such as those in our case studies, we inquired of AOUSC, CFTC, FDIC, the Department of Justice, Treasury, the Federal Reserve, and SEC. We also conducted a literature review, which did not show evidence of any new bankruptcy cases filed by large financial companies. We conducted this performance audit from June 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Karen Tremba, Assistant Director; Nancy S. Barry; Patrick Dynes; Risto Laboski; Marc Molino; Barbara Roesmann; Jessica Sandler; and Jason Wildhagen made key contributions to this report. Technical assistance was provided by JoAnna Berry.
The challenges associated with the bankruptcies of large financial companies during the 2007-2009 financial crisis raised questions about the effectiveness of the U.S. Bankruptcy Code and international coordination for resolving complex financial institutions with cross-border activities. The Dodd-Frank Act mandates that GAO report on an ongoing basis on ways to make the U.S. Bankruptcy Code more effective in resolving certain failed financial companies. GAO has issued three reports on this issue. This fourth report addresses (1) recent changes to the U.S. Bankruptcy Code and (2) efforts to improve cross-border coordination to facilitate the liquidation or reorganization of failed large financial companies under bankruptcy. GAO reviewed laws, court documents, regulations, prior GAO reports, and academic literature on financial company bankruptcies and regulatory resolution. GAO also reviewed documentation from foreign financial regulators and international bodies such as the Financial Stability Board. GAO interviewed officials from the Administrative Office of the United States Courts, Department of Justice, Department of the Treasury, and financial regulators with a role in bankruptcy proceedings. GAO makes no recommendations in this report. The Department of the Treasury, Federal Reserve, FDIC, and the Securities and Exchange Commission provided technical comments on a draft of the report that GAO incorporated as appropriate. The U.S. Bankruptcy Code (Code) chapters dealing with the liquidation or reorganization of a financial company have not been changed since GAO last reported on financial company bankruptcies in July 2013. However, bills introduced in the previous Congress would, if re-introduced and passed, make broad changes to the Code relevant to financial company bankruptcies. The Financial Institution Bankruptcy Act of 2014 (H.R. 
5421) and Taxpayer Protection and Responsible Resolution Act (S. 1861) would have expanded to varying degrees the powers of the Board of Governors of the Federal Reserve System (Federal Reserve) and Federal Deposit Insurance Corporation (FDIC) and would have imposed a temporary stay on financial derivatives (securities whose value is based on one or more underlying assets) that are exempt from the automatic stay under the Code. That stay would prohibit a creditor from seizing or taking other action to collect what the creditor is owed under the financial derivative. The bills also would have added to the Code processes for the resolution of large, complex financial companies similar in some ways to provisions currently in the Orderly Liquidation Authority (OLA) in Title II of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), which grants FDIC the authority to resolve failed systemically important financial institutions under its receivership. For example, each bill would have allowed for the creation of a bridge company, to which certain assets and financial contracts of the holding company would be transferred, allowing certain subsidiaries to continue their operations. The 21st Century Glass-Steagall Act of 2013—a bill introduced in the House of Representatives (H.R. 3711) and the Senate (S. 1282)—would have repealed safe-harbor provisions that allow most counterparties in a qualifying transaction with the debtor to exercise certain contractual rights even if doing so would otherwise violate the automatic stay. As of March 12, 2015, these legislative proposals had not been re-introduced in Congress. In the United States, the presumptive mechanism to resolve a failed large financial company with cross-border operations is through the judicial bankruptcy process. Since GAO's 2013 report, no changes have been made to the chapter of the Code that relates to coordination between U.S.
and foreign jurisdictions in bankruptcy cases in which the debtor has foreign operations. Some structural challenges remain, such as conflicting regulatory regimes related to the treatment of financial contracts between parties in different countries when a firm enters bankruptcy, but efforts are underway to address them. Regulators have implemented a Dodd-Frank Act provision that requires certain large financial firms to submit a resolution plan to assist with an orderly bankruptcy process, which regulators expect to help address potential problems with international cooperation, among other issues. However, in 2014, FDIC and the Federal Reserve identified shortcomings with the plans for a number of large financial companies that those firms are to address in their 2015 submissions. Further, international bodies, such as the Financial Stability Board—an international body that monitors and makes recommendations about the global financial system—have focused on having countries adopt a regulatory approach to resolutions. Other recent actions include a January 2015 stay protocol for derivatives contracts developed by the International Swaps and Derivatives Association that is intended to give regulators time to facilitate an orderly resolution of a troubled firm.
WIA requires states and local areas to bring together a number of federally funded employment and training programs into a comprehensive workforce investment system, the American Job Center network. These programs—including the Adult and Dislocated Worker Programs—are known as mandatory partners, and must provide services through this network (see table 1). The WIA Adult and Dislocated Worker Programs are designed to provide quality employment and training services to assist eligible individuals to find and qualify for employment and to help employers find the skilled workers they need. The Adult Program provides services to job seekers who are 18 years of age or older, although states and local areas must give priority of service to low-income individuals if funds are determined to be limited. The Dislocated Worker Program provides services to workers who have been or will be terminated or laid off from employment. For fiscal year 2013, Congress appropriated over $1.9 billion for the Adult and Dislocated Worker Programs: $730 million for the Adult Program and $1.2 billion for the Dislocated Worker Program. DOL’s Employment and Training Administration administers the WIA Adult and Dislocated Worker Programs and oversees their implementation, which is carried out by states and local areas. Each state must have one or more designated local workforce investment areas, and each local area must have at least one comprehensive American Job Center where job seekers can receive core services and access other programs and activities offered by the mandatory partners. Although each local area must have one comprehensive center, under WIA, the mandatory partners have flexibility in the way they provide services through the American Job Center network, and can co-locate services on site or make referrals to external service providers or training, including to local community colleges.
WIA provides three tiers, or levels, of service for adult and dislocated workers: (1) core, (2) intensive, and (3) training. Core services include basic services, such as job search or résumé-building assistance, and may be accessed with or without staff assistance. Intensive services include activities such as staff-assisted comprehensive assessment of a participant’s skill levels and case management (29 U.S.C. §§ 2831 and 2864(c)(2)). Training services include activities such as occupational skills or on-the-job training (see fig. 1). Service at one level and a determination that a participant is unable to obtain employment through that service are prerequisites for service at the next level, although WIA does not specify the amount of time an individual must spend or the number of attempts that must be made to gain employment before moving to the next level. Job seekers who receive only core services that are self-service and informational in nature are not counted in the programs’ performance measures, but DOL requires that states count such individuals as program participants. Self-service and informational activities can be accessed either at an American Job Center or remotely, such as when a job seeker searches for employment over an internet connection to an American Job Center from a home computer. As part of its oversight, DOL collects program data from states, which are used to assess how well the Adult and Dislocated Worker Programs are working. States must submit quarterly and annual performance reports to DOL, in addition to uploading individual records on a quarterly basis to DOL’s national Workforce Investment Act Standardized Record Data (WIASRD) database. Specifically, states must submit quarterly and supplemental monthly performance reports, a validated annual performance report at the end of each year, and quarterly WIASRD files for each reporting quarter for a program year.
The WIASRD files include demographic and characteristic information for participants in the WIA Adult and Dislocated Worker Programs and information about services received through these WIA programs, as well as through some other partner programs. The process of collecting and reporting WIA data involves all three levels of government. More specifically, participant data are typically collected by staff at American Job Centers and entered into a state or local information system. In some states, local staff may enter data directly into a statewide information system; in other states, local areas may use their own individualized information systems to enter data, from which they must then extract and compile data for submission to the state. After the data are submitted to the state agency, they are compiled and formatted for the various submissions to DOL, including quarterly WIASRD record layout submissions (see fig. 2). DOL’s guidance to states, in the form of Training and Employment Guidance Letters and Training and Employment Notices, details how states should collect and report data on participants in the WIA Adult and Dislocated Worker Programs. However, the flexibility in the guidance DOL provides to states makes it difficult for DOL to provide consistent national data on participants in these programs. The flexibility in the guidance stems from the flexibility inherent in WIA, which allows states and local areas to tailor service delivery to their needs. Consequently, DOL’s guidance is designed to accommodate the different ways that states and local areas can deliver services through the American Job Center network, including decisions about whether to enroll participants in partner programs. This flexibility, however, results in variations in how states and local areas report participant data, which makes it challenging for DOL to aggregate WIA data at the national level.
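The local-to-state compilation step described above can be sketched in a few lines. This is a hypothetical illustration only: the record fields and area names below are invented, and the actual fields are defined in DOL’s WIASRD record-layout guidance.

```python
def compile_quarterly_file(local_extracts):
    """Merge participant records extracted from local-area information
    systems into a single state-level list for a quarterly submission.
    (Hypothetical record layout, not DOL's actual WIASRD schema.)"""
    state_file = []
    for area, records in local_extracts.items():
        for record in records:
            # Tag each record with its local area before state-level compilation.
            state_file.append({"local_area": area, **record})
    return state_file

extracts = {
    "Area-01": [{"participant_id": "A100", "program": "Adult"}],
    "Area-02": [{"participant_id": "B200", "program": "Dislocated Worker"}],
}
print(len(compile_quarterly_file(extracts)))  # 2
```

The sketch only shows why a shared record layout matters: records arriving from differently structured local systems must be normalized into one format before the state can submit them.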
In particular, it has created inconsistencies among states in when a job seeker is counted as a participant in the WIA Adult or Dislocated Worker Program. In addition, DOL’s guidance is open to interpretation, allowing states to define and report some variables differently, further contributing to inconsistencies in the data that states report to DOL on these programs. For example, while DOL requires states to report the type of training service provided to WIA participants who receive such services, the agency’s guidance only lists the six broad categories of training services states must report without defining or describing them in detail. In accordance with government internal control standards, management is responsible for developing policies and procedures to achieve program objectives and clearly communicating these policies and procedures to facilitate understanding and consistent implementation. The flexibility in DOL’s guidance on when states and local areas should count individuals as participants in the WIA Adult and Dislocated Worker Programs has resulted in inconsistencies in the data reported on program participants. WIA allows states and local areas flexibility in the extent to which they integrate Adult and Dislocated Worker Program services with those from other partner programs so that job seekers have access to a coordinated system of employment and training services. As a result, states’ service delivery models differ in the extent of integration between WIA services and other programs. For example, state officials that we interviewed told us that they funded core services at their American Job Centers exclusively through WIA, exclusively through a partner program, or through a blend of both WIA and partner program funds. According to officials from DOL’s national office, states and local areas are best positioned to determine the mix of services that will meet the needs of their job seekers.
However, depending on which service delivery model a state or local area selects, job seekers receiving the same types of services may be counted as WIA participants at different points in time, or they may never be counted under the WIA Adult or Dislocated Worker Program. Used in this context, integration can refer to the co-location of services at an American Job Center, or to integrated funding to provide any given service. For example, according to the state officials we interviewed, Utah’s American Job Centers integrate WIA Adult Program funding with funding for the Wagner-Peyser Program to provide core services. As a result, every job seeker aged 18 years or older who receives core services in Utah is counted as a participant in both programs, as permitted under DOL’s guidance. In comparison, state officials in California told us that the state typically funds core services exclusively through the Wagner-Peyser Program and therefore counts all individuals accessing core services from an American Job Center as participants in the Wagner-Peyser Program, but not as participants in the WIA Adult Program or the Dislocated Worker Program. Therefore, only job seekers who meet the eligibility criteria for the WIA Adult or Dislocated Worker Program and receive intensive or training services funded by those programs are counted as WIA participants. Because of this variability, the total counts of WIA Adult Program participants and WIA Dislocated Worker Program participants represent different populations of job seekers in different states, depending on the service delivery model the state uses.
As a result of these differences, Utah ranked 3rd out of 53 states in the total number of participants it served in the WIA Adult Program in program year 2011, even though it ranked 36th in overall population. At the same time, California ranked 27th in the total number of participants served in the WIA Adult Program, even though it was the most populous state (see fig. 3). To improve the quality of data on participants in the WIA Adult or Dislocated Worker Programs, DOL has enhanced its oversight efforts and introduced new initiatives, including having its data contractor produce quarterly reports on data issues, requiring states to validate their WIA data on an annual basis, and engaging its regional offices in periodic reviews of case files from states. However, the agency has not established a process to review the results of oversight to identify and resolve systemic issues with the quality of participant data from the WIA Adult and Dislocated Worker Programs. Our past work has shown that the benefit of collecting performance information is only fully realized when this information is actually used by managers to make decisions oriented toward improving results. DOL has a contract with Social Policy Research Associates (SPRA) to correct and analyze WIASRD data and to provide data files and reports on the accuracy of the data reported by states, which are made available to the public via DOL’s website. However, some corrections SPRA makes to the state WIASRD data files may not be accurate and may result in incorrect data for a state. The publicly available data files that SPRA produces are released quarterly and include “data issues reports,” which identify issues or anomalies in the WIA data submitted by each state to DOL. Officials from DOL’s national office stated that the original intent of SPRA’s analysis was to create the publicly available data files and not to report on data anomalies or errors, even though SPRA has always identified issues with the data while creating the publicly available files.
However, since about 2010, when states started submitting quarterly WIASRD files instead of annual ones, reviewing the quality of the data and issuing the quarterly “data issues and anomalies report” has become standard practice under SPRA’s contract, according to officials from DOL’s national office. The agency’s regional offices are supposed to provide the states with the published quarterly error reports and ask them to update any errors prior to their next quarterly WIASRD submission. DOL publishes SPRA’s data files and reports on its website, and its regional offices are expected to share these reports with their states so that the states can correct any data issues in their subsequent quarterly submissions. However, officials from DOL’s Region 5 said that although they receive these reports from DOL’s national office, they have not had the chance to review them or comment on any of the errors for states in their region. In addition, officials from DOL’s Region 1 said that although these reports have always been available on DOL’s website, they only recently began receiving the reports in a user-friendly format and that they only recently began providing feedback to SPRA on issues identified for the states in their region. Our analysis of SPRA’s data issues reports suggests that some states may not be using SPRA’s reports to improve the quality of their data on WIA participants, which could be due to a lack of awareness of these reports. For example, SPRA’s data issues reports for the fourth quarters of both program years 2010 and 2011 identified some of the same issues, such as errors in the dates of service reported by the states. This suggests that the states that made these errors in 2010 may not have reviewed these reports and used them to correct the data reported in the following year.
In addition, while officials from four states were aware of the SPRA reports, officials from four other states told us they either were not aware of these reports or that they began receiving these reports from the regions only recently, beginning with the final report for program year 2011. When asked, officials from DOL’s national office stated that they do not have a plan to systematically identify or address recurring errors noted in SPRA’s reports. Government internal control standards note that, for oversight and monitoring to be effective, information should be recorded and communicated to management and others within the entity, as well as to external stakeholders, and this should be done within a time frame that enables them to carry out their internal control and other responsibilities. Officials added that they do not conduct formal oversight reviews or audits of SPRA’s data analyses because they consider SPRA to be the “data experts” and, therefore, do not know what kind of oversight they could provide. According to government standards for internal controls, agencies should ensure that ongoing monitoring occurs in the course of normal operations, which would include monitoring and oversight of contractors through regular management and supervisory activities (GAO/AIMD-00-21.3.1). These officials also said they believe any major data issues or obstacles would be uncovered by their internal data edit checks, which are run on all WIASRD data submitted by the states prior to SPRA’s review of the data. Some state officials, however, raised concerns about SPRA’s approach to correcting the data. For example, officials from California explained that SPRA changed some of the numbers entered by local areas for WIASRD variable 325—Employment and Training Programs Related to Food Stamps—to zeros because the numbers entered seemed too high. According to the state officials, the numbers for that variable entered by the local areas were correct.
States, however, are generally not provided an opportunity to review and verify SPRA’s changes before they are made, as they only receive copies of the data issues reports after they are published. Officials from both DOL’s national office and from some of the state workforce agencies noted that not all issues identified by SPRA represent actual errors and that some outliers on certain variables are acceptable. DOL requires each state to validate the data it collects and reports on participants in the WIA Adult and Dislocated Worker Programs on an annual basis, but the findings from these validation efforts have not been strategically used to identify systemic issues with or to improve the quality of the data on WIA participants. Government standards for internal controls state that, for oversight and monitoring to be effective, information should be communicated to management and others and used for program assessment so that they can carry out their internal control and other responsibilities. Each year, states are required to review a sample of WIA participant records to determine whether the source documents match the information in the electronic records states use to collect and report data to DOL in WIASRD. DOL requires states to validate the accuracy of the data they submit annually to ensure that decisions about WIA policy and funding are made based on a true picture of the number of participants and program outcomes. Although DOL has established a provisional 5 percent error rate threshold for states’ validation of the variables, the agency does not have plans to tie the results of these validation efforts to DOL’s financial awards or penalties because, according to officials from DOL’s national office, the results of states’ validation efforts were never intended to be used for enforcement purposes.
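The provisional 5 percent threshold amounts to a simple ratio check over the sampled case files: the share of records whose electronic value disagrees with the source document. A minimal sketch, assuming a hypothetical record structure (the field name and layout below are illustrative, not DOL’s actual validation logic):

```python
def validation_error_rate(sampled_files, field):
    """Share of sampled case files whose electronic record for `field`
    does not match the source document. (Hypothetical structure: each
    file carries an "electronic" and a "source" version of its data.)"""
    errors = sum(1 for f in sampled_files
                 if f["electronic"][field] != f["source"][field])
    return errors / len(sampled_files)

# 3 mismatched exit dates out of 20 sampled files: a 15 percent error
# rate, well above the provisional 5 percent threshold.
sample = (
    [{"electronic": {"exit_date": "2011-07-01"},
      "source": {"exit_date": "2011-06-30"}}] * 3
    + [{"electronic": {"exit_date": "2011-06-30"},
       "source": {"exit_date": "2011-06-30"}}] * 17
)
rate = validation_error_rate(sample, "exit_date")
print(rate > 0.05)  # True
```

Because the check is a strict equality test, even trivial discrepancies count as errors, which is consistent with the high error rates on date variables discussed later in the report.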
Officials from DOL’s national office explained that, in their opinion, DOL’s regional offices should be using the states’ validation efforts as a management tool to improve the quality of their data by identifying inaccurate or confusing variables to target the technical assistance they provide to states and local areas. DOL, however, does not know what effect state data validation efforts have had on the quality of participant data for the WIA Adult and Dislocated Worker Programs. In addition, our interviews with regional and state officials suggest that DOL’s regions and the states are not always using the results of these data validation efforts to improve data quality or target technical assistance. During our interviews with regional and state officials, only those from Region 3 and Massachusetts described specific efforts to use the results of the state data validation efforts to improve data quality and direct technical assistance. Officials from DOL Region 3 noted that the region recently began requiring states to respond to findings from states’ annual data validation efforts and to track error rates found in each quarterly submission. Similarly, officials from Massachusetts said that they use the results of their data validation efforts to direct the technical assistance the state provides to local areas and to improve the quality of the state’s WIA data. In contrast, officials from one state said that, although they receive information about errors associated with specific case files as they enter data from each file into DOL’s data validation system, they do not know how to retrieve their state-wide results from DOL’s database and they have not received any reports from DOL documenting the nationwide results of the data validation efforts. Officials from DOL’s national office said that they were surprised by this, and that data element validation results are available to states through DOL’s reporting system. 
Moreover, our analyses of the results of DOL’s efforts to validate WIASRD variables for program years 2010 and 2011 suggest that DOL’s data validation efforts have not prevented high error rates on certain data elements for select variables—nationally, error rates for certain variables have remained well above the 5 percent threshold over both program years. For example, in program year 2010, across all states, about 16 percent of the files for the Adult Program for which the “date of program exit” variable was reviewed had errors, compared to around 14 percent in program year 2011. Similarly, the nationwide error rate for “date of first staff-assisted core service” was above 7 percent in both program years for both the Adult and the Dislocated Worker Programs. When asked, officials from DOL’s national office and two regional offices explained that variables containing dates frequently have high error rates due to discrepancies between the date reported and the date in the source documentation. If a date, such as date of dislocation—the date a worker lost his or her job—in the hard copy document differs from the date in the electronic record, even by one day, the variable for that record “fails” the validation check. Moreover, officials from DOL’s national office and three regional offices stated that high error rates resulting from such discrepancies do not necessarily reflect any serious issues with the reported data—a participant would still be a dislocated worker whether, for example, the date of dislocation was June 20th or June 21st. Nonetheless, DOL requires these dates to match precisely and errors noted in program year 2011 for these variables were still prevalent although similar errors were noted the previous year. DOL’s required annual data validation efforts are resource-intensive and time-consuming both for DOL regions and states, according to officials from DOL’s national office, two DOL regional offices, and five states. 
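The exact-match rule on dates can be illustrated with a short sketch. With a tolerance of zero days this reproduces the behavior the report describes, where a one-day discrepancy fails validation; a nonzero tolerance illustrates the date-range relaxation DOL officials discussed (the function and its parameters are hypothetical, not DOL’s actual edit-check code):

```python
from datetime import date

def dates_match(reported: date, source: date, tolerance_days: int = 0) -> bool:
    """With tolerance_days=0, a record fails validation whenever the
    reported date differs from the source document by even one day.
    A nonzero tolerance would let near-matches pass."""
    return abs((reported - source).days) <= tolerance_days

# A date of dislocation off by one day fails the exact-match check...
print(dates_match(date(2011, 6, 21), date(2011, 6, 20)))      # False
# ...but would pass under a one-day tolerance window.
print(dates_match(date(2011, 6, 21), date(2011, 6, 20), 1))   # True
```

The design question the report raises is exactly this parameter: whether a participant who is a dislocated worker on either date should count as a validation error at all.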
However, DOL has not yet evaluated the process or determined its effect on data quality. Specifically, an official from one state workforce agency estimated the cost of its annual data validation efforts, including staff time, travel, and other expenses, to be about $200,000. Officials in another state explained that they would like DOL to reduce the required sample size for the required validation of the data elements in WIASRD in order to reduce the administrative burden on states. In addition, in 2011 one DOL regional office convened a workgroup of representatives from four states that analyzed the data validation procedures and provided recommendations to DOL’s national office for improvements to reduce the administrative burden on states. These recommendations included considering using alternative sampling methods, revisiting the frequency and precision requirements of data validation, and issuing guidance to share “best practices” across states and local areas. However, as of September 2013, DOL had not implemented the workgroup’s recommendations, and the data validation process remains unchanged. Officials from DOL’s national office acknowledged the trade-off between monitoring data quality and minimizing the administrative burden on states. They said that revising WIASRD’s edit checks to allow states more flexibility may result in making the data validation process more efficient by, for example, permitting states to report a range of dates, if appropriate, for certain variables. DOL’s regional offices review a sample of case files from states as part of their oversight of the quality of data for the WIA Adult and Dislocated Worker Programs, but they have not used the results of these reviews to identify systemic issues with the quality of the data on WIA participants. 
According to officials from DOL’s national office, its regional offices are responsible for providing feedback to states based on these reviews, and it is not the national office’s role to conduct any type of systemic review to identify cross-state data issues. Government standards for internal controls state that, for oversight and monitoring to be effective, information should be communicated to management and others and used for program assessment so that they can carry out their internal control and other responsibilities. In response to a recommendation from a prior GAO report, DOL began to require its regional offices to review a sample of case files to monitor states’ annual data validation procedures. In addition, over the past few years, to address concerns about the nationwide consistency of monitoring activities, DOL has issued additional guidance to its regional offices on the process that should be used in reviewing the case files. Officials from all six of DOL’s regional offices reported using these materials when they design and conduct their reviews of the case files. The review process begins when officials from DOL’s regions review a state’s most recent data validation report and identify a subsample of participant case files from the most recent review by the state. In addition to checking the data in the electronic records by comparing them to the source documentation, DOL’s regional staff assess whether the state followed the proper procedures in conducting its annual data validation efforts, according to officials from DOL’s national office. The process concludes with a report outlining the DOL regional office’s findings, including non-compliance with statutory or regulatory requirements for collecting and reporting data on WIA participants.
Many of the reports also identify areas of concern, such as when states do not share their annual data validation results with local area staff from American Job Center partner programs, and identify promising practices observed during the reviews. State officials have 45 days to respond to findings in the region’s report and are also encouraged but are not required to respond to areas of concern detailed in the report. In addition, officials from DOL’s national office noted that states with high error rates on select variables are encouraged to inform the regional offices of how they plan to reduce their error rates in the future. Officials from DOL’s national office said that, while they discuss the results of these reviews with regional officials and provide state-specific technical assistance as needed, they do not have a regular, formal process for analyzing the findings from these reviews by their regional offices, including determining whether similar findings and areas of concern were identified across states. DOL officials explained that, because the reviews are part of the regional offices’ oversight of the states, they believe that the national office should not be involved in monitoring the results of the reviews or the way in which they are conducted. As a result, DOL does not have a systematic means of determining the importance of the findings, their prevalence, or their likely effect on the quality of the national data on participants in the WIA Adult and Dislocated Worker Programs. This limits DOL’s ability to respond to data issues that are systemic or widespread. Table 2 summarizes our analysis of the most prevalent issues identified during the most recent reviews for each of the 53 states and territories. 
Over the past few years, DOL has issued additional guidance and provided technical assistance to states and local areas, including training and webinars, to clarify and explain the requirements for collecting and reporting data for the WIA Adult and Dislocated Worker Programs. However, some state officials said that DOL’s technical assistance is not always timely, and that DOL could do more to facilitate the collection or sharing of promising data collection and reporting practices across regions and states. For example, DOL has provided general technical assistance on data reporting for the WIA Adult and Dislocated Worker Programs to states, and officials from three of the eight states said that the assistance provided by DOL’s regional offices was useful in helping them address some of the challenges related to data reporting. In particular, officials from DOL’s national office said that some regional offices issue quarterly performance letters to states that include program year performance data and any related analysis, in addition to hosting quarterly phone conferences with state performance specialists to discuss performance issues. DOL also sponsors the Workforce3one website, which contains a variety of training and background materials related to data collection and reporting for WIASRD. National and regional DOL data specialists also said they hold biweekly meetings to discuss data issues. In addition, DOL officials from each region described regular communication they have with state officials to provide technical assistance in response to specific data reporting issues, and officials from five of the six regional offices described conference calls that they have hosted as opportunities for states to discuss challenges related to the quality of their data on WIA participants. 
Furthermore, DOL’s national and regional offices have access to an internal data system, Infospace, which allows them to retrieve and review publicly available WIASRD data by state and local area. Over the past few years, DOL has also issued a number of Training and Employment Guidance Letters and Training and Employment Notices related to data collection and reporting for WIA. In 2011, DOL developed and hosted a series of webinars for states and local areas, including one on data validation and data quality issues. The materials from this session describe the approach DOL uses to monitor the data and explore issues and findings across states. They also highlight how data validation can be used to improve data systems and provide information on the guidance regional offices use to review states’ case files. While the webinar materials present information on some of the consistent findings across states—such as incorrect source documentation and exit dates— DOL officials said that they have not provided any additional formal technical assistance to address these issues on a national level because of resource limitations. Moreover, they said it would be difficult for DOL to provide such assistance without having first reviewed the results of its own monitoring efforts. DOL’s webinar also noted that DOL would establish a working group in the summer of 2011 to look into and possibly revise the source documentation requirements for the annual data validation process. However, as of the summer of 2013, DOL officials said that this working group had not been established due to competing priorities and resource constraints. Officials from two states noted that DOL’s updates to its guidance for collecting and reporting the data are often not provided far enough in advance to be implemented by the time changes take effect. 
For example, officials from California said that they often do not have enough lead time to properly implement new data elements or guidance when it is issued by DOL. According to state officials, the state generally has 1 month or less to change its automated system to meet deadlines, which is not enough time. In addition, officials from Georgia said the data validation reports they currently receive from DOL are not timely since they are at least 3 months old by the time the state receives the reports. Some state officials also told us that they would benefit from learning how their peers are addressing challenges in reporting data on the WIA programs. Officials from DOL’s national office, however, told us they do not currently facilitate the collection or sharing of promising data collection and reporting practices across regions or states, and have no plans to do so—because of competing demands on resources, the agency’s main focus is on service delivery rather than data collection. Best practices state that high-performing organizations continually assess performance and their efforts to improve it. In particular, managers can use performance information to identify and share more effective processes and approaches. In addition to its regular monitoring activities, DOL has taken specific steps to improve the consistency of the WIA data collected by states and local areas. For example, the agency has developed an integrated data reporting system, which is being piloted in two states. However, DOL has not evaluated the results of the pilot program to determine whether it has had a positive effect on the quality of participant data for the WIA Adult and Dislocated Worker Programs and, despite not having evaluated its effectiveness, plans to expand the program to additional states.
To standardize and streamline reporting across several of DOL’s workforce programs—the Wagner-Peyser Program, the WIA Adult, Dislocated Worker and Youth Programs, Veterans Employment and Training Service, National Emergency Grants, and Trade Adjustment Assistance Programs—the agency has developed an integrated data reporting system, WISPR. Two states, Pennsylvania and Texas, have been piloting WISPR since 2007 to collect and report data on the WIA Adult and Dislocated Worker Programs, and a third state—Utah—plans to start piloting WISPR in the fall of 2013. Officials from DOL’s national office and from Utah explained that one of WISPR’s key advantages over the current separate reporting systems for each program is that WISPR has standardized variables that include all the required variables for each program. Utah officials also described WISPR as an improvement over their current system because future changes in DOL’s guidance for any of the workforce programs it administers—including the WIA Adult and Dislocated Worker Programs—would be incorporated into a single system, facilitating implementation of these changes. Although DOL’s guidance for reporting data in WIASRD encourages states and local areas to provide integrated services through multiple programs, each program has its own reporting requirements, according to officials from DOL’s national office. As a result, it is not possible to track individual job seekers who receive services from multiple programs across the workforce system, or to determine the proportion of resources provided by each program for a particular service, or to attribute participant outcomes to those programs. DOL has also revised the current record layout of WIASRD to match that of WISPR. DOL officials told us they expect to implement the revised WIASRD record layout in the fall of 2013. 
However, as of August 2013, they said it is not clear whether or when nationwide implementation of WISPR will occur because this depends on the resources available to upgrade both federal and state information systems and the associated programming costs. While WISPR appears to offer advantages over the current reporting system that might make it a promising step forward, DOL does not currently have plans to evaluate the results of the pilot program to determine whether it has had a positive effect on the completeness and consistency of participant data for the WIA Adult and Dislocated Worker Programs before expanding the program to other states. DOL officials cited the agency’s limited resources as the reason for not planning an evaluation of the WISPR pilot program before expanding it to other states. However, best practices note that evaluation can play a key role in program planning, management, and oversight by providing feedback to program managers, legislative and executive branch policy officials, and the public. Further, when pilot programs are designed to produce change—such as by allowing for more streamlined data collection and reporting—assessing the impact is essential for knowing if the pilot is meeting its goals. Without an evaluation of WISPR, DOL will not know if this data system has resulted in the collection of more accurate WIA participant data when compared to WIASRD. Finally, DOL administers two grant programs that states can use to improve their WIA information systems: the Workforce Innovation Fund and the Workforce Data Quality Initiative. The Workforce Innovation Fund is a competitive grant program that supports innovative approaches to the design and delivery of employment and training services. Although it is not targeted specifically at information systems, at least 3 of the 26 grants awarded by DOL have been used for local initiatives to integrate their workforce data systems. 
For example, one local area we visited in Chicago received a Workforce Innovation Fund grant that it plans to use to integrate the different data systems used by the workforce programs in the local area. Officials said that they hope the improvements will result in more data-driven decisions about service delivery, and that all employment and training programs in their local area will be able to share information electronically. Another grant program, DOL’s Workforce Data Quality Initiative, is also not specifically aimed at WIA data reporting but may have positive incidental effects on WIA data quality, according to DOL officials. The purpose of this initiative is to create a longitudinal database to chart individuals’ progress through the education system and beyond to the labor market. This effort would entail upgrades to state information systems that may resolve some data reporting issues currently attributed to limited technological capacities. At this time, it is too early to know whether these grants will have a positive effect on the quality of WIA participant data. Collecting and reporting consistent and complete data is important for program oversight and management and to evaluate the effectiveness of program activities and services, but it can be difficult when federal programs are carried out in partnership with states and local areas. DOL has taken steps to improve the quality of the data on WIA’s Adult and Dislocated Worker Programs. However, the flexibilities in DOL’s guidance, which reflect the program design flexibility that the programs’ authorizing statute grants to states, along with limitations in state information systems, make it difficult for DOL to collect and report consistent and complete data, including a nationwide count of unique participants in the WIA Adult and Dislocated Worker Programs. 
Without such data, policymakers, program officials, and other stakeholders have an incomplete picture of the number of adults and dislocated workers served, their characteristics, and the type and level of services received. In addition, while DOL engages in several types of oversight activities designed to ensure the accuracy of states’ data on participants in the WIA Adult and Dislocated Worker Programs, it does not consistently share the results of its oversight activities with states and local areas. As a result, states and local areas are not always aware of potential data quality issues, and may miss opportunities to improve their data collection and reporting. Moreover, since 2007, two states have been piloting a new information system that tracks program participants across several of DOL’s employment and training programs, but DOL does not plan to evaluate its effects on the quality of the data collected on participants in the WIA Adult and Dislocated Worker Programs or other programs before it expands the system to other states. Similarly, DOL does not regularly collect and disseminate promising practices to states and local areas, which could facilitate the adoption of steps other states and local areas have taken to improve their data collection and reporting efforts. While it may not be possible to achieve 100 percent precision and accuracy in the data reported on participants in a large, complex system like the workforce investment system, by not appropriately targeting its available resources and facilitating the sharing of promising practices among states, DOL misses an opportunity to continuously improve the quality of the data and to identify and address longstanding, systemic issues. 1. To improve the consistency and completeness of national data on participants in the WIA Adult and Dislocated Worker Programs, we recommend that the Secretary of Labor take additional steps to improve the uniformity of participant data reported by states. 
These steps could include the following: a. providing additional guidance to states on data reporting, such as how core and intensive services should be recorded for WIA participants who receive these services through partner programs; and b. conducting an evaluation or review of WISPR to determine if it has resulted in more complete and consistent data collection and reporting for participants in the WIA Adult and Dislocated Worker Programs and placing a high priority on the implementation of WISPR if it is shown to improve data consistency and completeness. 2. We also recommend that the Secretary of Labor promote a formal, continuous process for improving the quality of data on participants in the WIA Adult and Dislocated Worker Programs through such measures as the following: a. consistently sharing the results of all oversight activities with states and local areas, including findings from validation of participant data; b. reviewing the methods used for data validation, such as its scope and error rate threshold, to identify opportunities to increase efficiencies and accountability in the process. This could include implementing, if appropriate, recommendations from the Regions’ review of data validation procedures; c. evaluating data validation efforts to determine their effects on data quality, particularly on systemic errors, and providing targeted guidance and assistance to states and local areas to address such errors; d. regularly monitoring Social Policy Research Associates’ corrections and analyses of state WIA participant data, sharing this information with states, and coordinating with states to ensure that any corrections are appropriate and accurate; and e. collecting and disseminating promising practices to states and local areas on data collection and reporting on a regular basis. We provided a draft of this report to officials at DOL for their review and comment. 
We received written comments from DOL, which are reproduced in their entirety in appendix III. DOL officials did not state whether they agreed or disagreed with our recommendations. These officials acknowledged the importance of having reliable data to effectively manage and evaluate the WIA Adult and Dislocated Worker Programs; however, they commented that data reliability should be balanced with the flexibility WIA gives to states and with DOL’s responsibility to prioritize use of its limited resources. They stated that WIA provides states and local areas with the flexibility to serve their customers in the way that best suits their particular needs. DOL officials also stated that the agency has invested significant resources in its workforce performance accountability system, especially for WIA programs. According to officials, the agency has a robust system in place to ensure data quality and reliability and has recently made several enhancements to the reporting system. In their comments, DOL officials detailed various efforts they plan to take to address our recommendations. Nonetheless, we believe that these efforts will not sufficiently address the specific data quality issues we identified and encourage DOL to take more targeted steps as outlined in our recommendations. In response to our first recommendation, DOL officials said that they believe the agency’s current guidance is clear but that they will continue to work with states to develop additional guidance, as necessary, such as forthcoming guidance on how to avoid duplication of services when co-enrolling participants across multiple partner programs. However, it is important that any additional guidance also specify when to count job seekers as WIA participants if they also receive services funded by partner programs. 
We also encourage DOL to develop additional guidance for the WIASRD variables noted in our report that are open to interpretation, such as “type of training,” to facilitate consistent reporting on participants in these programs. DOL officials also noted that an evaluation of WISPR is subject to the agency’s resource constraints, adding that the purpose of WISPR was never explicitly to improve data quality. As we stated in our report, however, WISPR seeks, in part, to improve the consistency of WIA data by standardizing reporting across the workforce system. As such, it has the potential to improve data quality. Therefore, we encourage DOL to evaluate the system in order to make an informed decision on how best to allocate finite agency resources going forward. In response to our second recommendation, DOL officials stated that they consistently share the results of the agency’s oversight activities with states but acknowledged that more could be done to analyze the results of its activities to identify and share similar findings and areas of concern across multiple states. DOL officials added that they will work with the regional offices towards this goal. With regard to the agency’s data validation methods, officials said they regularly review these methods and solicit input from states on how to improve them. Specifically, DOL pointed out that, as required by the Paperwork Reduction Act, the Office of Management and Budget reviews DOL’s data validation process every 3 years and solicits public comment before approving the methodology and authorizing data collection, and that their 2014 submission will reflect state input. However, given the time-consuming nature of data validation, we believe the agency should take additional actions to review its current methods specifically with an eye toward making them more efficient and holding states accountable for their data validation results. 
With regard to evaluating the effectiveness of its data validation efforts, DOL officials said that the agency plans to consider the regional data validation workgroup’s findings and recommendations from 2011, explore ways to streamline the process, and examine the effect of data validation on error rates. We commend DOL’s plans, but to adequately address the persistently high data error rates we found in our analysis, we believe it is necessary to go beyond evaluating the effectiveness of its data validation efforts and pinpoint the underlying cause of the errors so that they can be addressed. Officials also pointed out that the agency already monitors and shares the analyses of state data conducted by its contractor, Social Policy Research Associates (SPRA). They stated that SPRA’s corrections of states’ data have been publicly available with the data set from the inception of WIA. They also noted that since program year 2011 DOL has provided SPRA’s analyses and corrections to the states (through ETA Regional Offices) on a quarterly basis for states to either correct or dispute. They noted that this is a formal and recurring process, and that ETA Regional Offices have begun to analyze state WIASRD data on a regular basis as part of their annual review cycles. However, we found that not all states are aware of or receive copies of SPRA’s reports, and that some of the corrections SPRA makes to the state WIASRD data files may not be accurate. Furthermore, as noted in our report, DOL officials told us that they do not conduct formal oversight reviews or audits of SPRA’s data analyses. In addition, DOL officials reiterated that data collection and reporting are topics that are included in Workforce3One, a point we noted in our report. 
However, we maintain that DOL could do more to facilitate the sharing of information across states, such as creating a forum through which states could learn how their peers are addressing challenges in data reporting for participants in the WIA programs. Finally, DOL provided technical comments, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Labor, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objectives were to determine: (1) what factors have affected the ability to report consistent and complete data on participants in the Workforce Investment Act (WIA) Adult and Dislocated Worker Programs, and (2) what actions the Department of Labor (DOL) has taken to improve the quality of participant data. To address our objectives, we reviewed applicable laws and regulations, as well as DOL’s guidance to states for collecting and reporting data on participants in the WIA Adult and Dislocated Worker Programs. We also interviewed officials from DOL’s Employment and Training Administration, its Office of Inspector General, and its six regional offices. In addition, we visited or telephoned a nongeneralizable sample of eight states. Within each state, we visited or contacted at least one American Job Center—formerly known as a one-stop center—or a local workforce board. 
We also assessed the reliability of program year 2011 data from the Workforce Investment Act Standardized Record Data (WIASRD) system by testing the data electronically and interviewing knowledgeable agency officials and DOL’s data contractor. We found the data in appendix II to be sufficiently reliable for the purposes of providing estimates of the number of, characteristics of, and services provided to participants whose information is recorded by DOL as having received services from either the WIA Adult Program or the WIA Dislocated Worker Program. The data are not reliable for other purposes, such as making state-to-state comparisons, because of variations in how states collect and report data on participants in the WIA Adult and Dislocated Worker Programs. We conducted these interviews between September 2012 and June 2013. We met in person with DOL officials in Regions 1 (Boston), 3 (Atlanta), and 5 (Chicago); state and local workforce officials in California, Georgia, Illinois, Massachusetts, and Washington; and American Job Center officials in Maryland. We conducted telephone interviews with DOL officials in Regions 2 (Philadelphia), 4 (Dallas), and 6 (Sacramento), and with state workforce officials in Maryland, South Dakota, and Utah. We nonstatistically selected these states to provide diversity on the basis of: (1) geographic location, (2) total federal spending on the Adult and Dislocated Worker Programs in program year 2010, (3) the extent of data issues identified in the fourth quarter of program year 2010, (4) whether the state reported participants who only received core self-services, and (5) the number of local areas within the state. In each state, we obtained general information about the state’s and the local area’s implementation of the WIA Adult and Dislocated Worker Programs and on any challenges they may have encountered in collecting and reporting data on program participants. 
We also asked about actions DOL has taken to improve the quality of the data on participants in the WIA Adult and Dislocated Worker Programs. We used semi-structured interviews for our regional, state, and local interviews. Because we interviewed officials from a nongeneralizable sample of eight states and selected local areas, we cannot generalize our findings beyond the data collected on those states and local areas. To assess the reliability of DOL’s data in the WIASRD database for participants in the WIA Adult and Dislocated Worker Programs, we (1) reviewed existing documentation related to the data sources, including reports issued by DOL’s Office of Inspector General; (2) electronically tested the WIASRD data to identify potential problems with consistency, completeness, or accuracy; and (3) interviewed DOL’s data contractor and knowledgeable agency officials to obtain information about the data. Our electronic testing consisted of identifying inconsistencies, outliers, missing values, and other errors. More specifically, the electronic testing included assessing the reliability of data collected on the characteristics and the services participants in the WIA Adult and Dislocated Worker Programs received in program year 2011. Prior to testing the data, we combined 160 records that were overlapping or duplicative into 80 unique records and removed 19 records that had missing or erroneous participation dates. A few variables, including data on a participant’s dislocation date (the date a worker lost his or her job) and the occupational codes for participants who completed training, were found not to be sufficiently reliable for our purposes and were not included in our report. In addition, we analyzed the publicly available WIASRD data file for program year 2011, which was produced for DOL by its data contractor, Social Policy Research Associates. 
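The record-level cleanup described above (combining duplicate records and dropping records with missing or erroneous participation dates) can be sketched roughly as follows. The field names, date format, and record structure are hypothetical illustrations only; WIASRD uses its own fixed record layout.

```python
from datetime import date

# Program year 2011 runs July 1, 2011 through June 30, 2012.
PY2011_END = date(2012, 6, 30)

def parse_date(text):
    """Return a date from hypothetical YYYYMMDD text, or None if missing/malformed."""
    try:
        return date(int(text[:4]), int(text[4:6]), int(text[6:8]))
    except (ValueError, TypeError):
        return None

def clean_records(records):
    """Collapse duplicate records by participant ID, then drop records
    with missing or erroneous participation dates."""
    by_id = {}
    for rec in records:
        # Keep the first record seen for each ID; a fuller version would
        # merge fields from overlapping records before discarding them.
        by_id.setdefault(rec["participant_id"], rec)
    return [
        rec for rec in by_id.values()
        if (d := parse_date(rec.get("date_of_participation")))
        and date(1990, 1, 1) <= d <= PY2011_END
    ]
```

The sanity bounds on the date are an assumption; the report does not specify what made a participation date "erroneous."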
As part of our analysis, we reviewed the steps the data contractor took to correct the data and, to the extent possible, compared our data to the publicly available file. In light of variations in how states collect and report participant data for the WIA Adult and Dislocated Worker Programs and limitations in their information systems, the actual number of participants in these programs is unknown. However, we were able to estimate the number of, characteristics of, and services provided to participants whose information is recorded by DOL as having been served by the WIA Adult and Dislocated Worker Programs using WIASRD data from program year 2011. To describe and assess DOL’s oversight and monitoring efforts, we reviewed technical assistance guides and material posted to Workforce3One, including DOL’s Core Monitoring guides and Data Validation Reporting System guidance. We also interviewed officials from DOL’s Employment and Training Administration national office and from all of DOL’s six regional offices. In addition, we obtained and reviewed copies of DOL’s monitoring reports, including the results of DOL’s program year 2010 and 2011 annual data validation efforts and the most recent case file review for each state and territory. To analyze the results of the annual data validation efforts, we calculated the average reported error rate for each variable across states. We included in our analysis all variables on characteristics and services. To analyze trends in the results of the case file reviews, we reviewed the findings and areas of concern identified in each review and categorized them to identify common issues present in multiple states. We conducted this performance audit from August 2012 through November 2013 in accordance with generally accepted government auditing standards. 
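The error-rate analysis described above reduces to a per-variable average across states. The variable names and rates below are made-up placeholders, not actual validation results:

```python
def average_error_rates(rates_by_variable):
    """Average each variable's reported error rate across states."""
    return {
        var: sum(by_state.values()) / len(by_state)
        for var, by_state in rates_by_variable.items()
    }

# Hypothetical per-state reported error rates for two validated variables.
reported = {
    "date_of_first_service": {"CA": 0.02, "GA": 0.05, "UT": 0.08},
    "type_of_training": {"CA": 0.10, "GA": 0.04, "UT": 0.10},
}
```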
Those standards require that we plan and perform the audit work to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We reviewed the data collected by the Department of Labor (DOL) in the Workforce Investment Act Standardized Record Data (WIASRD) system on the number of, characteristics of, and services provided to participants in the WIA Adult and Dislocated Worker Programs in program year 2011. We found the data to be sufficiently reliable for the purposes of providing estimates of the number of, characteristics of, and services provided to participants whose information is recorded by DOL in WIASRD. These estimates are presented in figures 4 through 9. In addition to the contact named above, Meeta Engle, Assistant Director; Theodore Alexander; Jenn McDonald; and Brian Schwartz made key contributions to this report. Also contributing to this report were Jessica Botsford, David Chrisinger, Kathy Leslie, Mimi Nguyen, Carol Patey, Rhiannon Patterson, Catherine Roark, Jerry Sandau, Walter Vance, and Charles Youman. Workforce Investment Act: Local Areas Face Challenges Helping Employers Fill Some Types of Skilled Jobs. GAO-14-19. Washington, D.C.: December 2013. Workforce Investment Act: Additional Actions Would Further Improve the Workforce System. GAO-07-1051T. Washington, D.C.: June 28, 2007. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. 
Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002.
Having reliable program data is important in effectively managing a program. However, there have been longstanding concerns about the quality of data on job seekers enrolled in the WIA Adult and Dislocated Worker Programs, which rely on states and local areas to track participants and the services they receive. Given these concerns and WIA's anticipated reauthorization, GAO was asked to examine the data on these WIA participants. This report addresses: (1) the factors that have affected the ability to report consistent and complete data on participants in the WIA Adult and Dislocated Worker Programs, and (2) actions that DOL has taken to improve the quality of these data. To conduct this work, GAO reviewed relevant federal laws, regulations, guidance, and documentation from DOL. GAO interviewed officials from DOL's national and regional offices and state and local workforce officials from a nongeneralizable sample of eight states. GAO also analyzed WIA data from program year 2011 to determine the number of, characteristics of, and services provided to WIA participants. Flexibility in the Department of Labor's (DOL) data reporting guidance and limitations in some state information systems continue to impair the quality of the data on participants in the Workforce Investment Act (WIA) Adult and Dislocated Worker Programs. The flexibility in the guidance stems from the inherent nature of WIA, which allows states and local areas to tailor program design and service delivery to their needs. As a result, DOL's guidance on collecting and reporting the data allows variation in how some WIA data elements are defined, collected, and reported. Specifically, an American Job Center—formerly known as a one-stop center—can choose to provide certain basic services exclusively through WIA programs, exclusively through a partner program, or through a blend of both WIA and partner programs. 
However, this flexibility involves variations in data reporting that have contributed to inconsistencies among states regarding when job seekers are counted as WIA participants. Moreover, some aspects of DOL's guidance are open to interpretation, leaving it to states to define variables such as type of training service received, further contributing to data inconsistencies. In addition, some state information systems used to collect and report WIA participant data have limitations that hamper the affected states' ability to report uniform and complete data. For example, data are incomplete to the extent that states may not have information systems that can track participants who access services online without significant staff assistance. Having inconsistent and incomplete data makes it difficult for DOL to compare data on program participants across states or to aggregate the data at a national level. DOL engages in various oversight activities designed to ensure the accuracy of states' data on participants in the WIA Adult and Dislocated Worker Programs and has taken steps to improve data consistency across states. However, DOL does not consistently use the results of its oversight to identify and resolve systemic data issues, nor has it evaluated the effect of oversight on the quality of WIA participant data. Specifically, DOL requires states to validate the data they collect and report on participants in the WIA Adult and Dislocated Worker Programs on an annual basis, but it does not strategically use the findings from this effort to identify systemic data issues or improve the quality of the data. Similarly, although DOL's regional offices review a sample of each state's WIA participant files every few years to assess states' compliance with data reporting and validation requirements, DOL officials said they have not analyzed the findings from the most recent reviews to identify nationwide reporting issues. 
DOL has taken steps to improve the consistency of the data by providing general technical assistance to states and local areas and through standardizing the way DOL collects WIA data. For example, since 2007, two states have been piloting a unified reporting system developed by DOL that uses standardized data definitions and is integrated across certain American Job Center programs administered by DOL. However, DOL officials said they have no plans to evaluate the system before expanding it to other states. Without an evaluation, DOL does not know what impact the pilot has had on the quality of WIA participant data. GAO recommends that DOL take steps to improve the consistency and completeness of data reported across states and to promote a continuous process for improving the data's quality. DOL officials did not agree or disagree with GAO's overall recommendations and detailed how data quality is being addressed primarily through existing efforts. However, GAO believes that the recommendations remain valid as discussed in the report.
The Secretary of Commerce is legally required to (1) conduct the census on April 1 of the decennial year, (2) report the state population counts to the President for purposes of congressional apportionment by December 31 of the decennial year, and (3) send population tabulations to the states for purposes of redistricting no later than April 1 of the year following Census Day. The Bureau has defined over 40 different operations in its high-level requirements document, describing all of the planned operations and systems needed to meet these mandates. For the 2010 Census, the Bureau is using a comprehensive master schedule to integrate the work to be carried out in the dozens of operations. The schedule provides a high-level roadmap for Bureau executives and is used to alert executives to activities that are behind schedule or experiencing issues, allowing problems to be addressed so the census can continue to proceed on track. Staying on schedule is crucial to accomplishing all of the tasks involved in conducting the census. In fact, scheduling and planning are so important that the Bureau has already established a high-level schedule for planning the 2020 Census. While the schedule can be used to manage census operations at a high level and dictates major time allocations and deadlines, local census offices across the nation require more detailed plans to conduct enumeration that exceed the detail included in the master schedule. A successful census depends, in large part, on the field work carried out in these local census offices where employees on the ground in local communities build a list of where to count people and count people who do not return their census forms. As we have previously reported, the Bureau had initially planned to carry out major field data collection activities using handheld computing devices. 
Development and performance problems with the handheld device led the Secretary of Commerce in April 2008 to abandon using the device for most of its intended operations and resulted in the Bureau removing the nonresponse follow-up (NRFU) operation from the 2008 Dress Rehearsal. As a result, the Bureau was not able to use the dress rehearsal as a comprehensive end-to-end test of the interoperability of all of its planned systems, and the Bureau has had to develop plans to support and conduct the affected operations on paper as it did for the 2000 Census. For the 2010 Census, the Bureau will manage remaining fieldwork activities with its paper-based operations control system (PBOCS). This system is intended to provide managers with essential real-time information, such as worker productivity and completion rates for field operations. It also allows managers in the field to assign or reassign cases among workers. If the system does not work as intended, it could hinder or delay field operations and introduce errors into files containing collected data. Another responsibility of field offices is implementing the quality control process established by the Bureau to ensure that correct information is collected by field staff and data are not falsified. To ensure data quality and consistency of quality control procedures, the Bureau will manage, track, match, and review answers provided during re-interview operations using its Census Matching Review and Coding System (Census MaRCS). This system is also to designate quality control assignments in which selected households will be re-interviewed in order to determine whether the original enumerator correctly conducted the interview. Census MaRCS is also to assist in identifying interviews where the data from the re-interview do not match the data from the original interview, indicating that a mistake may have been made. Both PBOCS and Census MaRCS are key systems that have not been fully tested.
Since 2005, we have reported concerns with the Bureau’s management and testing of key information technology systems. In March 2009, we reviewed the status of and plans for the testing of key 2010 Census systems. We reported that while the Bureau had made progress in conducting systems, integration, and end-to-end testing, critical testing against baseline requirements still remained to be performed before systems would be ready to support the 2010 Census, and that planning for this testing needed much improvement. While the Bureau has made noteworthy progress in gearing up for the enumeration, with less than a year remaining until Census Day, uncertainties surround the Bureau’s overall readiness for 2010. The Bureau has implemented processes around its master schedule that comply with a number of scheduling process criteria that are important to maintaining a schedule that is a useful management tool. Such a schedule can provide a road map for systematic execution of a program and the means by which to gauge progress, identify and address potential problems, and promote accountability. We have documented the importance of adhering to these criteria and of implementing associated best practices in our GAO Cost Estimating and Assessment Guide. According to these criteria, a schedule should be comprehensive, with logically sequenced activities spanning the scope of work to be performed so that the full picture is available to managers; current, with the progress of ongoing activities updated regularly so that managers can readily know the status of the project; and controlled, with a documented process for changes to the schedule so that the integrity of the schedule is assured. The Bureau’s master schedule represents all 44 of the operations described by the broad requirements for the census in its 2010 Census Operational Plan.
While the Bureau continues to add activities to its central schedule, by including at least all the activities described in these broad requirements, the Bureau is ensuring that it has a comprehensive schedule that will be less likely to miss critical interactions between operations. The Bureau ensured the complete scope of the schedule through input from stakeholders throughout the agency, reviews of previous schedules, and lessons from a number of census tests conducted during the decade. As a result, the schedule is senior managers’ primary weekly source for determining which census activities are ahead of or behind schedule, and it provides a resource for determining the impact on the overall project of delays in major activities. The Bureau has documented and implemented a formal process for keeping the data in the schedule current. Staff within each Bureau division are responsible for ensuring that schedule activities within their division have their status updated on a weekly basis. Staff update the actual start and finish dates, the percentage of an activity completed so far, and estimates of the time remaining to complete each activity in progress. The Bureau is recording status information on an average of more than 1,300 activities ongoing in the schedule during any given week, generating historical data that could provide valuable input to future schedule estimates. Finally, the Bureau has implemented a formal change control process that preserves a baseline of the schedule so that progress can be meaningfully measured. The Bureau’s criteria for justifying changes are clearly documented and require approval by a team of senior managers and acknowledgment of the impact by each affected team within the Bureau. Since the master schedule was baselined in May 2008, about 300 changes have been approved.
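The combination of a preserved baseline and weekly status updates is what makes slippage measurable. As a rough illustration only (the activity names, field layout, and threshold below are our assumptions, not the Bureau's actual scheduling software or data), a check for activities whose forecast finish has slipped past the baseline might look like:

```python
from datetime import date

# Hypothetical activity records pairing a baselined finish date with the
# current forecast; field names are illustrative, not the Bureau's schema.
activities = [
    {"id": "PRINT-QUESTIONNAIRES", "baseline_finish": date(2009, 8, 15),
     "forecast_finish": date(2009, 9, 1)},
    {"id": "ASSEMBLE-KITS", "baseline_finish": date(2009, 9, 10),
     "forecast_finish": date(2009, 9, 10)},
]

def slipped(acts, threshold_days=0):
    """Return ids of activities whose forecast finish trails the baseline
    by more than threshold_days, flagging them for management attention."""
    return [a["id"] for a in acts
            if (a["forecast_finish"] - a["baseline_finish"]).days > threshold_days]

print(slipped(activities))
```

Recording such variances week over week would also accumulate the historical duration data that, as noted above, could provide valuable input to future schedule estimates.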
Even corrections to the schedule for known errors, such as incorrect links between activities, must be approved through the change control process, helping to ensure the integrity of the schedule. In addition to these practices, the Bureau has positioned itself to monitor the schedule regularly to help ensure that the census is progressing and that work is being completed as planned. A central team of staff working with the schedule implements a process that begins with the weekly updates of schedule status, involves subject matter experts from multiple divisions, and monitors and resolves schedule-related issues. The process results in a weekly briefing to the Deputy Director and the Director of the Census that includes documented explanations for critical activities scheduled to start late. For example, the central team began reporting from the master schedule in March 2009 that the printing of questionnaires for a field operation to validate locations of group quarters in September and October 2009 might be running late. The schedule showed that late printing of the questionnaires would trigger their late delivery and the late assembly of job assistance kits needed to support the operation, thus putting the timeliness of the operation in danger. According to a Bureau official, the Bureau then addressed the issue by deciding to unlink the kit assembly from the questionnaire printing, allowing kit assembly to begin on time and having questionnaires delivered directly to field offices when they were ready, so the operation could begin on time. When we began analyzing the Bureau’s master schedule, we discovered a significant number of activities in the schedule that had either missing or inaccurate information describing their relationships with other activities in the schedule. We brought these to the Bureau’s attention, and the Bureau has begun systematically identifying such activities and correcting their information in the schedule.
In accordance with scheduling best practices, activities in the schedule should be linked logically with relationships to other activities that precede or follow them, and they should be linked in the correct order. Since the reports that the Bureau uses to manage the census depend on the schedule having been built properly, inconsistent adherence to these scheduling practices has occasionally created false alarms about the schedule and created unnecessary work for those who have had to resolve them. In our analysis of the Bureau’s schedule, we found that nearly all relationships between activities are in place, and some activities in the schedule do not need relationships. However, many activities appeared in the schedule missing one of their logical relationships. From January through August 2009, an average of more than 1,200 of the more than 11,000 activities in the entire schedule were missing relationships to other activities from either their start or end dates. Each month, on average, over 1,100 of the over 6,100 not-yet-completed activities were missing relationships. For example, within the Bureau’s master schedule, an activity listed for receiving finished materials from the NRFU re-interview operation appeared in the schedule with no relationship to subsequent activities, making it appear that any delays in its completion would have no impact on subsequent census activities. While the absence of such a relationship in the schedule does not imply that the Bureau would miss the potential impact of any delays in the completion of this activity, a large number of such missing relationships can confound attempts to trace the chain of impacts that any delays may have throughout the schedule. Similarly, we found a small number of activities in the schedule that had been linked together in the wrong order, so that one activity might appear to finish before a necessary prior activity had been completed.
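Both kinds of logic error, activities dangling without a predecessor or successor and links recorded in the wrong order, can be surfaced with simple structured queries over the schedule data. A minimal sketch, using invented activities, fields, and day numbers rather than anything from the Bureau's actual schedule or tools:

```python
# Hypothetical mini-schedule: each activity records its start/finish (in
# arbitrary day numbers) and the ids of its predecessors.
schedule = {
    "A": {"start": 1,  "finish": 5,  "preds": []},
    "B": {"start": 6,  "finish": 9,  "preds": ["A"]},
    "C": {"start": 2,  "finish": 4,  "preds": ["B"]},  # starts before B finishes
    "D": {"start": 10, "finish": 12, "preds": []},      # dangling: no predecessor
}

def dangling(sched, start_ids=("A",), end_ids=("C",)):
    """Activities missing a predecessor (other than designated start
    milestones) or a successor (other than designated end milestones)."""
    has_successor = {p for a in sched.values() for p in a["preds"]}
    no_pred = [i for i, a in sched.items() if not a["preds"] and i not in start_ids]
    no_succ = [i for i in sched if i not in has_successor and i not in end_ids]
    return no_pred, no_succ

def out_of_sequence(sched):
    """(predecessor, activity) pairs where the activity starts before its
    listed predecessor finishes."""
    return [(p, i) for i, a in sched.items() for p in a["preds"]
            if sched[p]["finish"] > a["start"]]
```

Queries of this shape would flag cases like the NRFU materials activity with no successor, and links like the out-of-sequence pairs described above, so that each can be researched and corrected through change control.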
Such an incorrect relationship can unnecessarily complicate the use of the schedule to guide work or measure progress. The number of such apparent out-of-sequence activities in the entire schedule decreased from a monthly average of more than 100 in January through March to 60 in August. Since June 2009, Bureau staff have been running structured queries on the data supporting the master schedule to identify activities with missing or incorrect data; researching each activity to determine what, if any, corrections to the data are needed; forwarding proposed changes to affected activities, operation by operation, to program officials for review; and submitting changes to the Bureau’s formal change control process. The Bureau reports that since this concerted effort began to correct such errors, which affect activities in 37 different census operations, it had completed research for activities in 15 of the operations and approved changes in 12 of them by early October 2009. The Bureau also informed us that this review process, involving the program officials responsible for the logic errors, has provided an educational opportunity for the officials to see how their programs can directly affect others, and as a result has heightened awareness about the importance of getting schedule information keyed in correctly. A schedule provides an estimate of how long a given work plan will take to complete. Since the duration of the work described by the activities listed in a schedule is generally uncertain, a schedule can be analyzed for the amount of risk that its underlying work plan is exposed to. Schedule risk analysis—the systematic analysis of the impact of a variety of “what if” scenarios—is an established best practice to help identify areas of a schedule that need additional management attention.
Conducting a schedule risk analysis helps establish the level of confidence in meeting scheduled completion dates and the amount of contingency time needed for given levels of confidence, and helps identify high-priority risks to a schedule. The Bureau is tracking risks to the census and managing those risks on a regular basis, as documented in the 2010 Census Risk Management Plan, but these data are not being mapped into the schedule at a level that can be used for a systematic schedule risk analysis. A well-defined schedule should help identify the amount of human capital and financial resources that are needed to execute the programs within the scope of the schedule, providing a real-time link between time and cost and helping to reduce uncertainty in cost estimates and the risk of cost overruns. However, the Bureau does not link estimates of resource requirements—such as labor hours and materials—to the respective activities within its schedule. Having this information linked in a schedule enhances an organization’s capability to monitor, manage, and understand resource productivity; plan for the availability of required resources; and understand and report cost and staffing requirements. For example, if the Bureau were to find itself behind schedule with major operations to be completed, and resource requirements were linked in the schedule, the Bureau could then better assess the trade-offs between adding more resources and reducing the scope of the operations. In addition, when resources are linked to activities in the schedule, scheduling tools can identify periods of their peak usage and assist managers with reordering activities to level out demands on potentially scarce or costly resources.
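A schedule risk analysis of the kind described here typically assigns each activity an uncertain duration and simulates many "what if" outcomes, reading confidence levels off the resulting distribution. A minimal Monte Carlo sketch, using invented three-point estimates in days (optimistic, most likely, pessimistic) rather than any Bureau data:

```python
import random

# Illustrative three-point duration estimates for a simple chain of
# sequential activities; the numbers are assumptions for demonstration.
estimates = {
    "develop": (20, 30, 50),   # (optimistic, most likely, pessimistic)
    "test":    (10, 15, 30),
    "deploy":  (5, 7, 14),
}

def simulate_totals(est, trials=10_000, seed=42):
    """Draw each activity's duration from a triangular distribution and
    return the sorted total durations across all trials."""
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(low, high, likely) for low, likely, high in est.values())
        for _ in range(trials)
    )

totals = simulate_totals(estimates)
p80 = totals[int(0.8 * len(totals))]               # total met with ~80% confidence
point_estimate = sum(likely for _, likely, _ in estimates.values())
contingency = p80 - point_estimate                 # buffer beyond the point estimate
```

The gap between the 80th-percentile total and the sum of most-likely durations is the contingency a plan would need at that confidence level, and examining which activities' ranges drive the spread points to the high-priority risks.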
When we met with Bureau officials and discussed this, they pointed out that incorporating this schedule best practice would be difficult to do late in the preparations for 2010, but they expressed interest in doing so as a step forward in the Bureau’s use of the schedule to manage decennial censuses. Finally, the Bureau’s use of a master schedule in 2010 that is, according to the Bureau, more highly integrated into the management of the decennial census provides an opportunity to draw many potential lessons for 2020. The Bureau learned lessons from its use of the 2000 master schedule, as documented in a 2003 Bureau management evaluation, in particular with its adoption of the formal change control process implemented for 2010. Yet as noted in the evaluation, there were questions about the quality of the data maintained in the schedule. Without a reliable change control process, the schedule did not provide a reliable baseline, making evaluation of schedule and activity duration estimates difficult, if not impossible. The Bureau is generating a large amount of data—and experience—with its efforts in developing, maintaining, and using the 2010 master schedule. Unless the Bureau prioritizes the need for documenting lessons learned from the current experience—as it did for the 2000 Census—and formally puts in place an effort to capture and analyze schedule data, changes to baselines, and variances between estimated and actual durations, it risks missing another opportunity to capture lessons some of its staff may already be learning. The automated control system that the Bureau plans to use to help manage the data collection operations of the decennial census still faces significant development and testing milestones, some of which are scheduled to be completed just before the system must be deployed for the respective field operations.
As a result, should the Bureau encounter any significant problems during final testing, there will be little time to make changes before systems are needed to support field operations. PBOCS will help manage both paper and people, and it needs to exchange data successfully with several other Bureau systems, such as one used for processing payroll. The Bureau plans to complete development and testing of PBOCS in three major releases, grouping the releases of parts of PBOCS together loosely by the timing of the field operations those parts are needed to support. The Bureau already completed a preliminary release of PBOCS with limited functionality in June 2009 to support some initial testing. Figure 1 shows the development, testing, and operation periods for the three remaining releases and the operations that PBOCS supports. According to the baseline of the Bureau’s master schedule, PBOCS should be deployed and operational anywhere from 1 to 6 weeks before each operation begins, for operations leading up to and including NRFU. According to the Bureau, the system should ideally be ready for use during training periods so that managers can familiarize themselves with the system they will have to use and can begin using the system to assign work to new staff. This also requires that PBOCS be ready in time for production data to be loaded into the system, along with information about the employees to whom work will be assigned. For example, PBOCS for NRFU is to finish its final testing in March 2010, about 9 weeks before NRFU is scheduled to begin on May 1, 2010, and deployment is scheduled to take place about 6 weeks before NRFU starts, leaving 3 weeks of contingency time in the event that unexpected problems arise during PBOCS development. This means that if any significant problems are identified during the testing phases of PBOCS, there is generally little time to resolve them before the system needs to be deployed.
In addition, it will be more difficult for the Bureau to integrate into PBOCS user training any late changes in the PBOCS software. While the Bureau relies on last-minute additions to training and procedures documents to communicate late changes to workers, Bureau officials agreed that it can be difficult to incorporate such last-minute additions into training sessions and for users to learn them, and doing so should be avoided if possible. The Bureau also faces the significant challenge of developing the detailed specifications for the software to be developed. As of early September 2009, the Bureau had established high-level requirements for PBOCS and reported completing development of release 1 of PBOCS. The Bureau reports that as of late October its requirements development, system development, and system testing for release 1 are largely complete. However, the Bureau has not yet finalized the detailed requirements for this release or for later releases. High-level requirements describe in general terms what functions the system will accomplish, such as producing specific management reports on the progress of specific paper-based operations. Detailed requirements describe more specifically what needs to be done in order to accomplish such functions. For example, a detailed requirement would specify the data that should be pulled from a particular data set to produce particular columns of a given report. While high-level requirements provide software programmers with general guidelines on, for example, what types of reports should be produced, without a clear understanding of the detailed requirements, the programmers cannot be sure that they are identifying the correct source of information for producing such reports, and reports can thus be inaccurate.
According to Bureau officials, previous contract programmers with little decennial census experience and no involvement with current development efforts made erroneous assumptions about which data to use when preparing some quality control reports, which became problematic in the dress rehearsal. Without detailed requirements, the Bureau also cannot be sure how frequently such reports should be updated or which staff should have access to which reports. Further, software developers may not have the required information to meet the Bureau’s needs. Also, as we have previously reported, detailed operational requirements determine system development, and without well-defined requirements, systems are at risk of cost increases, schedule delays, or performance shortfalls. As we have reported and testified numerous times, the Bureau experienced this with an earlier contract to automate the support of its field data collection activity, which included the failed handheld computing device. The Bureau’s PBOCS development managers have told us that they are working closely with stakeholders in an iterative process of short development cycles to help mitigate PBOCS development risks caused by not having detailed requirements written in advance. Embedding subject matter experts within the software development process can help mitigate the risk inherent in the short time frame the Bureau has remaining to develop and test PBOCS. Yet the absence of well-documented and prioritized detailed requirements for PBOCS, which still need to be developed and tested, remains among the most significant risks to getting PBOCS ready on time. Furthermore, the Bureau lacks reliable development progress measures that would permit estimating which requirements may not get addressed and that are important to ensuring Bureau leadership’s visibility into the development program.
Aggressive monitoring of system development and testing progress and of the effort remaining will help ensure that program officials who will rely on these systems can anticipate what risks they face and what mitigation activities they may need for shortfalls in the final systems. In recognition of the serious implications that a failed PBOCS would have for the conduct of the 2010 Census, and to see whether there were additional steps that could be taken to mitigate the outstanding risks to successful PBOCS development and testing, in June 2009 the Bureau chartered an assessment of PBOCS. The assessment team, chaired by the Bureau’s Chief Information Officer (CIO), reported initially in late July 2009 and provided an update report in late August 2009. According to the August update and our discussion of it with the CIO, the team increased its risk rating in two areas of PBOCS that it is monitoring, in part because of the absence of fully documented requirements, testing plans, progress measures, and deployment plans. In its comments on the draft of this report, the Department of Commerce provided information describing several steps the Bureau was taking to monitor the progress of PBOCS development. According to Commerce, the Bureau already holds the following: daily Project Management Standup meetings, which cover action item management, calendar review, activity sequencing, and any threats or action-blocking issues; daily Architecture Review Board and team leads meetings; weekly Program Management Review Board meetings and thrice-weekly Product Architecture Review Board meetings; at least weekly reviews of progress by the PBOCS Internal Assessment Team, chaired by the CIO, which briefs the Bureau Director at least monthly; and monthly Quality Assurance Board meetings and twice-monthly Risk Review Board meetings.
At the end of our review, the PBOCS development team demonstrated two software tools it said it was using to help manage its iterative process of short development cycles. We did not fully assess their use of the tools. The progress measures they demonstrated with one of the tools predicted that the completion dates would be missed. The demonstration also showed that the development team was underestimating the development effort required to achieve its iterative development goals. When we noted this, the presenters told us that the information in their management system was not current. Until the Bureau completes and prioritizes the detailed requirements for PBOCS, and until its development monitoring relies on current and reliable progress measures, such as those the development team attempted to demonstrate to us for estimating the effort needed to complete remaining development, it will not be able to fully gauge PBOCS development progress and have reasonable assurance that PBOCS will meet the program’s needs. The Bureau is continuing to examine how improvements will be made. The Bureau has experienced delays in the development and testing of software that will play a key role along with PBOCS in controlling and managing field data collection activity for the quality assurance programs of NRFU and Update/Enumerate. Census MaRCS will help manage the process of identifying systematic or regular violations of the door-to-door data collection procedures. In particular, Census MaRCS will be a tool to help target additional households needing reinterview as part of the quality assurance program for these two major census data collection operations in the field. Therefore, fully developing and testing Census MaRCS will be important to the successful conduct of the census. Like other systems at the Bureau, Census MaRCS had to undergo design changes when the Bureau made the April 2008 decision to switch to paper-based operations.
Detailed performance requirements—such as the information to be included on reports and their sources, and performance metrics, such as the number of users the system is designed to handle—are documented and were baselined in May 2009. However, specifications have been added or clarified as software development has progressed and improvements have been suggested. Software development has at times been slower than expected, leading to delays in some testing. Test plans for Census MaRCS software and interfaces are in place, having been documented in May 2009. According to those plans, the Bureau is in the second of three stages of testing Census MaRCS and is scheduled to complete its final stage in December 2009, almost 2 months before its first deployment for the Update/Enumerate operation. Slower-than-expected development has delayed some parts of the second phase of testing, which the Bureau says will thus finish late, but Bureau officials have indicated that they believe the delay can be absorbed into the schedule and that the system will be delivered as scheduled in February 2010. The compressed testing schedule leaves little time for additional delays in writing software or conducting tests. The Bureau is working on additional plans to test the interfaces between systems like PBOCS and Census MaRCS to ensure that they work together, but those test plans have not yet been finalized. Since Census MaRCS was not used in the dress rehearsal, and a full end-to-end system test—that is, a test of whether all interrelated systems collectively work together as intended in an operational environment—is not planned in the time remaining before the system is required to be deployed, successful testing of the interfaces with other systems is critical to the system’s readiness.
If development or testing delays persist, it will be more important than ever that system requirements be prioritized so that effort is spent on the “must haves” necessary for system operation. Our review of the Bureau’s master schedule for conducting the 2010 Census and the processes the Bureau uses to manage it suggests that it is doing a commendable job conducting such a large and complex undertaking consistent with leading scheduling practices. Furthermore, the Bureau’s systematic effort to correct errors that we have identified in the schedule will further improve the ability of the master schedule to support senior management oversight and decision making as 2010 approaches. Other improvements, such as embedding estimates of resource needs into the schedule, may take more time to implement. Yet, the Bureau’s generally well-defined and integrated schedule provides an essential road map for the systematic execution of the census and the means by which to gauge progress, identify and address potential problems, and promote accountability. Leveraging the Bureau’s experience with scheduling for 2010 by documenting it should provide lessons learned for similar efforts in 2020 as well. Moreover, since we testified on the status of the 2010 Census in March 2009, the Bureau has made progress on a number of key elements needed to manage the work flow in field operations. In particular, the Bureau has made progress in developing and testing systems to support paper-based operations in the wake of the Secretary of Commerce’s April 2008 decision to switch to paper-based operations for most field data collection activity. That said, some delays are also occurring, and since so much still remains to be done in the months leading up to Census Day, the Bureau has limited time to fix any potential problems that arise in systems that are not thoroughly tested. 
While the Bureau has made significant progress in developing test schedules for key systems, careful monitoring of the progress in addressing, and setting priorities among, the remaining detailed requirements for the control system supporting paper-based operations is critical for the Bureau to anticipate what risks it faces and what mitigations it may need for shortfalls in the final system. Given the challenges faced in the earlier program implementing handheld computing devices, the Bureau has already experienced the ill effects of having to change its plans when a system does not fully meet planned program needs. With limited time before implementation, it is uncertain whether the Bureau will be able to complete development and fully test all key aspects of its systems, like PBOCS and Census MaRCS, which are still under development. Continued aggressive monitoring by the Bureau, and improvements in the progress measures on system development and testing, on effort remaining, and on the risks relating to these efforts, are needed. Such effort will help ensure that Bureau leadership, as well as Bureau program officials who will rely on these systems, have early warning on what, if any, desired system features will be unavailable in the final systems, maximizing the time available to implement mitigation strategies as needed. We recommend that the Secretary of Commerce require the Director of the U.S. Census Bureau to take the following three actions: To improve the Bureau’s use of its master schedule to manage the 2020 decennial census: Include estimates of the resources, such as labor, materials, and overhead costs, in the 2020 integrated schedule for each activity as the schedule is built, and prepare to carry out other steps as necessary to conduct systematic schedule risk analyses on the 2020 schedule.
Take steps necessary to evaluate the accuracy of the Bureau’s baselined schedule and determine what improvements to the Bureau’s schedule development and management processes can be made for 2020. To improve the Bureau’s ability to manage paper-based field operations in the 2010 Decennial Census, finalize and prioritize detailed requirements and implement reliable progress reporting on the development of the paper-based operations control system, including estimates of effort needed to complete remaining development. The Secretary of Commerce provided written comments on a draft of this report on November 3, 2009. The comments are reprinted in appendix II. Commerce did not comment on the first two recommendations, but provided additional information on steps it had already been taking to monitor progress of PBOCS development related to the third recommendation. Commerce commented on how we characterized the status of system development and testing, and provided additional statements about the status. Commerce also made some suggestions where additional context or clarification was needed, and where appropriate we made those changes. With respect to our third recommendation to finalize and prioritize detailed requirements and implement reliable progress reporting on PBOCS, including estimates of effort needed to complete remaining development, Commerce described numerous regular meetings that the development team holds, as well as efforts by others in the Bureau to monitor and report on PBOCS development progress. We added references to these monitoring efforts within the report. We agree that the monitoring efforts the department describes can help assess the progress being made; however, the monitoring efforts can only assess the progress being reported to them. First, a complete set of requirements is needed to understand the work that has been accomplished and the work remaining. As we have noted, the Bureau has not yet developed a full set of requirements. 
Second, the management tool’s ability to measure whether development activity has successfully addressed the requirements depends directly on the effectiveness of the test cases developed for those requirements. However, as we have previously noted, if a requirement has not been adequately defined, it is unlikely that a test will discover a defect. Accordingly, until the Bureau completes the detailed requirements for PBOCS and prioritizes them, it cannot use these tools to fully gauge its progress toward meeting the project’s overall system development goals and objectives. We clarified our discussion of this in the report and reworded the recommendation to better focus on the need for reliable information. Commerce maintained that our draft report implied that no PBOCS development had begun and that no testing would be completed until March 2010. The draft report described development and testing dates that clearly illustrate that both development and testing have been occurring over the several months leading up to their conclusion. We have included additional language in the text to clarify that development and testing take place over a period of time. We have also included a statement from Commerce that much of this activity has largely been completed for the first of three phases. Commerce commented on the accuracy of the dates used in the figures in the draft report. We verified that the dates we used were the correct dates that the Bureau had provided to us earlier. We made minor adjustments to some of the gridlines in the graphic for presentation purposes. The Bureau also provided additional information on the prior testing of matching software, on the context for NRFU being dropped from the dress rehearsal, and on how PBOCS errors could potentially introduce errors into census files, and clarified that the contract programmers we reported as being involved in the dress rehearsal PBOCS are not the same ones now helping the Bureau with PBOCS development. 
We revised the report as appropriate in response. Commerce commented on our discussion of the unreliability of information that the Bureau PBOCS development team provided us during a demonstration of two tools that it uses to help manage its iterative process of short development cycles. Commerce described the use of one of the two tools, but not the one whose progress-tracking measures were demonstrated to us and for which the team told us that data were not current. We have revised the text to state more clearly that more than one tool was used to demonstrate how development and testing were being managed by the PBOCS development team, and we have added language describing the progress information that should have been current but was not. Finally, as we noted in the draft report, the Bureau has already begun taking action to address errors we identified in its master schedule. Since we sent a draft of this report to Commerce, the Bureau has provided additional information on the status of that effort, and we have updated this report accordingly. We are sending copies of this report to the Secretary of Commerce, the Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We reviewed the U.S. Census Bureau (Bureau) program’s schedule estimates and compared them with relevant best practices in the GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs to determine the extent to which they reflect key practices that are fundamental to having a reliable schedule. 
These practices address whether the schedule is comprehensive, with logically sequenced activities spanning the scope of work to be performed so that the full picture is available to managers; current, with progress on ongoing activities updated regularly so that managers can readily know the status of the project; and controlled, with a documented process for changes to the schedule so that the integrity of the schedule is ensured. In doing so, we independently assessed a copy of the program’s integrated master schedule and its underlying schedules against our best practices. We also interviewed knowledgeable program officials to discuss their use of best practices in creating the program’s current schedule, and we attended a schedule walk-through to better understand how the schedule was constructed and maintained. We tested the Bureau’s schedule data for reliability by running a schedule check report in Pertmaster, a scheduling analysis software tool that identifies missing logic, constraints, and other problems; using the schedule information from Pertmaster, copying the schedule data into Excel, and checking for specific problems that could hinder the schedule’s ability to dynamically respond to changes; examining whether there were any open-ended activities (i.e., activities with no predecessors, successors, or both); searching for activities with poor logic; identifying whether there were any lags or leads, which should be used only to show how two tasks interact and not to represent work; determining whether activities were resource loaded, which helps to cost out the schedule and to examine whether resources are overstretched or unavailable when needed; examining whether the schedule was baselined, when its status was updated, and what deviations there were from the plan; and examining whether there were any actual start or finish dates recorded in the future and whether there was any broken logic. 
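Several of these reliability checks are mechanical enough to automate. As a rough sketch of two of them, flagging open-ended activities (those missing predecessors, successors, or both) and flagging links whose lags may be standing in for real work, the snippet below runs against a small hypothetical task network; it illustrates the kind of check involved and is not a substitute for a tool like Pertmaster.

```python
# Illustration of two schedule health checks: open-ended activities and
# lags that may hide real work. The task network here is hypothetical.

tasks = {
    "Start":   {"predecessors": [],          "successors": ["Print"]},
    "Print":   {"predecessors": ["Start"],   "successors": ["Mailout"]},
    "Train":   {"predecessors": [],          "successors": []},  # open-ended
    "Mailout": {"predecessors": ["Print"],   "successors": []},
}

# Links as (from, to, lag_in_days); long lags often stand in for undefined work.
links = [("Start", "Print", 0), ("Print", "Mailout", 10)]

def open_ended(tasks):
    """Return activities with no predecessors, successors, or both.

    A scheduler would exempt the intended start and finish milestones;
    everything else flagged here deserves review.
    """
    return sorted(name for name, t in tasks.items()
                  if not t["predecessors"] or not t["successors"])

def suspicious_lags(links, threshold_days=5):
    """Return links whose lag exceeds the threshold (possible hidden work)."""
    return [(a, b, lag) for a, b, lag in links if lag > threshold_days]

print(open_ended(tasks))
print(suspicious_lags(links))
```

Checks such as resource loading and baseline status would follow the same pattern, iterating over activity attributes exported from the scheduling tool.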
In addition to the contact named above, Ty Mitchell, Assistant Director; Virginia Chanley; Vijay D’Souza; Jason Lee; Andrea Levine; Donna Miller; Crystal Robinson; Jessica Thomsen; Jonathon Ticehurst; and Katherine Wulff made key contributions to this report. 2010 Census: Census Bureau Continues to Make Progress in Mitigating Risks to a Successful Enumeration, but Still Faces Various Challenges. GAO-10-132T. Washington, D.C.: October 7, 2009. 2010 Census: Fundamental Building Blocks of a Successful Enumeration Face Challenges. GAO-09-430T. Washington, D.C.: March 5, 2009. Information Technology: Census Bureau Testing of 2010 Decennial Systems Can Be Strengthened. GAO-09-262. Washington, D.C.: March 5, 2009. Census 2010: Census Bureau’s Decision to Continue with Handheld Computers for Address Canvassing Makes Planning and Testing Critical. GAO-08-936. Washington, D.C.: July 31, 2008. Census 2010: Census at Critical Juncture for Implementing Risk Reduction Strategies. GAO-08-659T. Washington, D.C.: April 9, 2008. Information Technology: Census Bureau Needs to Improve Its Risk Management of Decennial Systems. GAO-08-79. Washington, D.C.: October 5, 2007. 2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-9. Washington, D.C.: January 12, 2005.
To carry out the decennial census, the U.S. Census Bureau (Bureau) conducts a sequence of thousands of activities and numerous operations. As requested, the Government Accountability Office (GAO) examined (1) the Bureau's use of scheduling tools to maintain and monitor progress and (2) the status of two systems key to field data collection: the control system the Bureau will use to manage the work flow for paper-based operations, including nonresponse follow-up, and the system used to manage quality control of two major field operations. GAO applied schedule analysis tools; reviewed Bureau evaluations, planning documents, and other documents on work flow management; and interviewed Bureau officials. The Bureau's master schedule provides a useful tool to gauge progress, identify and address potential problems, and promote accountability as the Bureau carries out the census. GAO found that the Bureau's use of its master schedule generally follows leading scheduling practices that enable such high-level oversight. However, errors GAO found in the Bureau's schedule hinder the Bureau's ability to identify the effects of activity delays and to plan for the unexpected. The Bureau has recently begun taking systematic steps to identify and correct remaining errors. However, within its schedule, the Bureau does not identify the resources needed to complete activities, making it difficult for the Bureau to evaluate the costs of schedule changes or the resource constraints that may occur at peak levels of activity. Leveraging the 2010 scheduling experience and including resource needs in the 2020 schedule should facilitate planning for the 2020 Census, which is already underway. The automated control system that the Bureau plans to use to help manage major field data collection operations has significant development and testing milestones remaining, with some scheduled to finish shortly before the system needs to be deployed. 
This aggressive schedule leaves little time for resolving problems that may arise, and without prioritized and final software specifications and reliable progress measures, the Bureau may not get what it needs from the system to conduct the operations. Additionally, development of quality control software for two major field operations faces delays, although detailed specifications and test plans are final.
Congress established the Smithsonian in 1846 to administer a large bequest left to the United States by James Smithson, an English scientist, for the purpose of establishing, in Washington, D.C., an institution “for the increase and diffusion of knowledge among men.” In accepting Smithson’s bequest on behalf of the nation, Congress pledged the “faith of the United States” to carry out the purpose of the trust. To that end, the act establishing the Smithsonian provided for the administration of the trust, independent of the government itself, by a Board of Regents and a Secretary, who were given broad discretion in the use of the trust funds. The Board of Regents currently consists of nine private citizens as well as members of all three branches of the federal government, including the Chief Justice of the United States, the Vice President, and six congressional members: three from the Senate and three from the House of Representatives. Over the last 160 years, the Smithsonian’s facilities inventory has expanded to include 19 museums and galleries, 9 research centers, a zoo, and other facilities—most located in or near Washington, D.C. The major buildings owned by the Smithsonian range in age from about 160 years old to less than 1 year old, with most of the facilities’ growth occurring since the 1960s. (See figure 1.) The Smithsonian’s growth will continue with the construction of an aircraft restoration area—phase 2 of the National Air and Space Museum Steven F. Udvar-Hazy Center—and the design and construction of a National Museum of African American History and Culture, authorized by Congress in 2003. Beyond this, there has been congressional interest in developing a National Museum of the American Latino. Although the Smithsonian is a trust instrumentality with a private endowment, it is largely funded by federal appropriations. 
In fiscal year 2006, the Smithsonian’s operating revenues were about $947 million, of which about 65 percent came from federal appropriations. The facilities capital appropriation, which was about $98.5 million in fiscal year 2006, provides funds for construction and revitalization projects. The salaries and expenses appropriation, which was about $516.6 million in fiscal year 2006, includes funding for the program activities of each museum and research center; rents; utilities; and facilities’ operations, maintenance, and security costs. The remaining operating revenues come from the Smithsonian’s private trust funds. These are of two types: Restricted trust funds—which made up 29 percent of the Smithsonian’s operating revenue in fiscal year 2006—include such items as gifts from individuals and corporations that specify the purpose of the funds. Restricted funds have been provided for some facilities’ construction projects and enhancements related to revitalization projects. Unrestricted trust funds—which made up 6 percent of the Smithsonian’s operating revenue in fiscal year 2006—include income from investment earnings and net proceeds from business activities, and can be used to support any Smithsonian activity. The Smithsonian typically has used unrestricted trust funds for fundraising, some salary costs, and central administration costs. Although the Smithsonian can use unrestricted trust funds for any purpose consistent with the Smithson Trust and therefore could use them for facilities revitalization and maintenance, it has not done so. Smithsonian officials stated that the unrestricted trust fund budget is small and that if these salary and central administration costs were not paid for with unrestricted trust funds, they would have to use federal funds or eliminate positions or programs to cover these expenses. 
With regard to real property management, the Smithsonian has made a number of facilities improvements since our 2005 report, but the continued deterioration of many facilities has caused access restrictions and threatened the collections, and the Smithsonian’s cost estimate for facilities projects has increased. The Smithsonian follows many key security practices to protect its assets but faces communication and funding challenges. The Smithsonian has taken steps to improve its real property portfolio management but faces challenges related to funding constraints and its capital plan. The Smithsonian improved the condition of a number of facilities since our 2005 report. For example, the Smithsonian completed its revitalization of the Donald W. Reynolds Center for American Art and Portraiture, which houses the Smithsonian American Art Museum and the National Portrait Gallery. The Smithsonian also completed the construction of Pod 5, a fire-code-compliant space, to store alcohol-preserved specimens of the National Museum of Natural History. Many of these specimens are currently stored within the museum building on the National Mall in Washington, D.C., in spaces that do not meet fire-code standards. Collections are scheduled to be moved to Pod 5 over the next 2 years. At the same time, problems with the Smithsonian’s facilities have resulted in additional access restrictions and damage and have continued to threaten collections and cause other problems, according to museum and facility directors: At the National Air and Space Museum, power capacity issues caused by inadequate electrical systems have forced the museum to occasionally close galleries to visitors. A lack of temperature and humidity control at storage facilities belonging to the National Air and Space Museum has caused corrosion to historic airplanes and increased the cost of restoring these items for exhibit. 
Chronic leaks in the roof of the Cultural Resources Center at Suitland, Maryland, which was completed in 1998 and opened in 1999 to hold collections of the National Museum of the American Indian, have forced staff to place plastic over several shelving units used to store collections, such as a set of wooden boats that includes an Eskimo kayak from Greenland and a rare Yahgan dugout canoe from Tierra del Fuego, according to officials at this facility (see fig. 2). The plastic sheeting limits visitors’ visual access to the boats during open houses, which provide Native Americans and other groups with access to the collections. Since 2005, leaks in a skylight have at times forced the National Museum of African Art to cover the skylight with plastic to protect the building and its collections (see fig. 3). Leaks in the National Zoological Park’s sea lion and seal pools as of July 2007 were causing an average daily water loss of 110,000 gallons, with a water replacement cost of $297,000 annually (see fig. 4). According to Smithsonian officials, repairs for some of these problems are scheduled to take place over the next several years. The Smithsonian’s cost estimate for facilities projects from fiscal year 2005 through fiscal year 2013 has increased since April 2005 from about $2.3 billion to about $2.5 billion. According to Smithsonian officials, this estimate includes only costs for which the Smithsonian expects to receive federal funds, and it could increase further. According to Smithsonian officials, the increase in this cost estimate was due to several factors. For example, Smithsonian officials said that major increases had occurred in projects for the National Zoological Park and the National Museum of American History because the two facilities had recently developed master plans that identified additional requirements. 
In addition, according to Smithsonian officials, estimates for antiterrorism projects had increased due to adjustments for higher costs for security-related projects at the National Air and Space Museum, and the increase in the cost estimate also reflects the effect of delaying corrective work in terms of additional damage and escalation in construction costs. The Smithsonian follows key security practices to protect its assets, but it faces two key challenges, one related to ensuring that museum and facility directors are aware of important security information and the other related to funding constraints. The Smithsonian follows key security practices we have identified in prior work, such as allocating resources to manage risk by contracting for a risk assessment report. This report, which includes individual assessments for over 30 Smithsonian facilities, was completed in 2005. The Smithsonian performs risk assessments for its facilities every 3 to 5 years to determine the need for security enhancements. Despite these efforts, we found that nine museum and facility directors we spoke with were unaware of the contents of the Smithsonian’s risk assessment report. The Smithsonian’s Office of Protection Services (OPS) is responsible for operating programs for security management at Smithsonian facilities. However, some museum and facility directors’ lack of awareness of the risk assessment report limits their ability to work with OPS to identify, monitor, and respond to changes in the security of their facilities. Furthermore, some museum and facility directors cited an insufficient number of security officers to protect assets due to funding constraints. We found that the overall number of security officers had decreased since 2003, at a time when the Smithsonian’s square footage had increased. Some of the Smithsonian’s museum and facility directors said that in the absence of more security officers, some cases of vandalism and theft have occurred. 
In addition, two museum directors stated that it has become more difficult for them to acquire collections on loan because lenders have expressed concern with the lack of protection. In our September 2007 report, we recommended that the Smithsonian increase awareness of security issues. The Smithsonian concurred with this recommendation. Faced with deteriorating facilities and an increased cost estimate for facilities projects, the Smithsonian has taken steps to improve the management of its real property portfolio but faces challenges related to funding constraints and its capital plan. The Smithsonian’s centralized office for real property management, known as the Office of Facilities Engineering and Operations (OFEO), has made significant strides in several areas related to real property portfolio management, including improving real property data, developing performance metrics, and refining its capital planning process. At the same time, however, funding constraints have presented considerable challenges to OFEO’s efforts. For example, while a majority of museum and facility directors stated that OFEO does a good job of prioritizing and addressing problems with the amount of funds available, several museum and facility directors expressed frustration that projects at their facilities had been delayed. In addition, OFEO officials stated that a lack of sufficient funds for maintenance has limited their ability to optimally maintain equipment, leading to more expensive failures later on. The Smithsonian has omitted privately funded projects from its capital plan and its estimate of $2.5 billion for facilities projects through 2013, making it challenging for the Smithsonian and other stakeholders to comprehensively assess the funding and scope of facilities projects. 
In recent years, private funds have played an important role in funding some of the Smithsonian’s highest-priority construction and revitalization projects, making up 39 percent of the Smithsonian’s capital funds for facilities projects for fiscal years 2002 through 2007. Smithsonian officials noted that the majority of these private funds were donated for the construction of new facilities—namely, the National Museum of the American Indian and the National Air and Space Museum Steven F. Udvar-Hazy Center—and said there is no assurance that private funds would make up a similar percentage of the Smithsonian’s funds for capital projects in future years. However, other organizations we visited during our review include both private and public investments in their capital plans to inform their stakeholders about the scope of projects and the extent of such partnerships used to fund capital needs. As a result, our September 2007 report recommended that the Smithsonian include privately funded projects in its capital plan. The Smithsonian concurred with this recommendation. Funding constraints are clearly a common denominator with regard to the Smithsonian’s security and real property management, but while the Board of Regents has taken some steps to address our 2005 recommendation to develop a funding plan to address its facilities revitalization, construction, and maintenance needs, its evaluation of funding options has been limited. In September 2005, an ad hoc Committee on Facilities Revitalization established by the Board of Regents reviewed nine funding options that had been prepared by Smithsonian management for addressing this estimated funding need. The nine options are briefly described in Table 1. 
After reviewing materials on these nine options prepared by Smithsonian management, the ad hoc committee decided to request an additional $100 million annually in federal funds for facilities over the Smithsonian’s current appropriation for 10 years, starting in 2008, for a total of an additional $1 billion. To implement this recommendation, in September 2006, several members of the Board of Regents and the Secretary of the Smithsonian met with the President to discuss the issue of increased federal funding for the Smithsonian’s facilities. According to two members of the Board of Regents, this option was selected because the board believed that the revitalization, construction, and maintenance of Smithsonian facilities are federal responsibilities. According to Smithsonian officials, it is the position of the Smithsonian, based on a historical understanding, that the maintenance and revitalization of facilities are a federal responsibility. Smithsonian officials pointed out that since as early as the 1850s, the federal government has provided appropriations to the Smithsonian for the care and presentation of objects belonging to the United States. The President’s fiscal year 2008 budget proposal included an increase of about $44 million over the Smithsonian’s fiscal year 2007 appropriation, far short of what the Smithsonian requested, and it is not clear how much of this proposed increase would be used to support facilities. Our analysis of the Smithsonian’s evaluations of the eight other funding options, including the potential benefits and drawbacks of each, showed that the evaluations were limited in that they did not always include a complete analysis, fully explain specific assumptions, or benchmark with other organizations—items crucial to determining each option’s potential viability. For example, the Smithsonian’s analysis of a general admission fee option included an adjustment of annual net gains to account for losses in revenue at restaurants and stores. 
However, the Smithsonian’s materials did not discuss whether other museums had experienced such losses after establishing admission fees. We spoke with officials at six other museums and a zoological park, who stated that instituting or increasing admission fees did not decrease the amount of money visitors spent in restaurants and stores. In addition, although several of the nine options were dismissed because, individually, they would not generate the amount of revenue required to address the Smithsonian’s facilities projects, the evaluation did not consider the potential of combining options to generate more revenue. In our September 2007 report, we concluded that if the Smithsonian does not develop a viable strategy to address its growing cost estimate for facilities projects, its facilities and collections face increased risk, and the ability of the Smithsonian to meet its mission will likely decline. We therefore concluded that the Board of Regents’ stewardship role obligates it to consider providing more private funds to meet the funding requirements of its overall mission. We recommended that the Smithsonian Board of Regents perform a more comprehensive analysis of alternative funding strategies beyond principally using federal funds to support facilities and submit a report to Congress and the Office of Management and Budget (OMB) describing a funding strategy for current and future facilities needs. The Smithsonian concurred with this recommendation. Recently, the Smithsonian Board of Regents has taken some additional steps towards developing a funding plan for facilities projects. According to a Smithsonian official, at the Board of Regents’ November 19, 2007, meeting, the Chair of the Committee on Facilities Revitalization, which became a standing committee in June 2007, reported to the board on the committee’s activities. 
These activities included several meetings and conversations, including some with Smithsonian management, and the consideration of some new papers on funding options. The papers contained information on some previously identified options as well as on some new options. A Smithsonian official acknowledged, however, that these papers did not provide comprehensive analysis and that many were not significantly different from the previous materials. According to a Smithsonian official, the Smithsonian determined that it did not wish to spend resources further analyzing all options but instead will analyze those the board has decided to pursue. According to a Smithsonian official, at this November 19 meeting of the Board of Regents, the Regents concurred with a prioritized list of funding options that was presented by the committee. This list includes establishing a national campaign to raise private sector funds for Smithsonian programs and facilities, a request that Congress match funds raised in the national campaign with additional appropriations, and several other options. According to preliminary results of ongoing work, as of November 2007, the Board of Regents had largely implemented 12 of the Governance Committee’s 25 recommendations. The board had taken steps towards implementing the other 13 recommendations, including, among other things, arranging for the implementation of some recommendations to be studied further and establishing target dates for implementation that range from December 2007 to mid 2008. The 12 recommendations implemented by the board include, for example, more clearly defining the roles and responsibilities of Regents and regent committees, improving access between the board and key members of senior management, and strengthening some policies regarding conflicts of interest and executive expenses. 
The board is also conducting studies on whether changes to the size and composition of the board would improve governance, how to effectively engage the Smithsonian’s advisory boards, and executive compensation. Governance experts and others we interviewed stated that in general, the board appears to have taken some positive steps toward governance reform. However, according to the literature we reviewed and governance experts we interviewed, success will depend in part on how Regents embrace their new responsibilities and on their level of engagement, as good governance results from a board that consists of active and deeply engaged members. The board reports that it has largely implemented 12 of the 25 recommendations of the Governance Committee. Appendices II and III provide summaries of the implementation status of the Governance Committee and IRC recommendations. The following are descriptions of some of the key recommendations that have been implemented. Duties and responsibilities of Regents and regent committees have been clarified. Previously, the roles and responsibilities of Regents and regent committees were not clearly and explicitly stated. The Governance Committee found that without a formal job description, the role of a regent was subject to individual interpretation, and it determined that adopting a clear statement of regent duties and responsibilities would reaffirm that the board is the Smithsonian’s ultimate governing authority. Accordingly, the board has taken several actions to clarify these responsibilities, including 1) adopting specific written responsibilities and expectations for all Regents, including that all Regents should participate in committees; 2) clarifying the duties of the Chancellor (who by tradition is the Chief Justice) and creating a new board Chair position to play a leadership role in guiding the board in the exercise of its oversight functions; and 3) appointing new leadership for all committees. 
These changes are now being put into practice, and it is therefore too soon to evaluate whether they will be effective in improving governance at the Smithsonian. Key managers’ access to the board and the information available to the board have been improved. Several of the recent governance problems reported at the Smithsonian have been attributed to the isolation of certain members of senior management from the board and to the tight control that the former Secretary’s office exercised over the information available to the board. The board has taken a number of steps to address these issues, including 1) amending its bylaws to require the attendance of the General Counsel and Chief Financial Officer, or their designees, at all meetings of the board and relevant board committees, 2) strengthening the relationship between the Inspector General’s office and the board, and 3) establishing an independent Office of the Regents that is responsible for, among other things, setting the agenda for the board in concert with the Secretary and through consultation with Smithsonian museum directors and others. While we have not independently validated these changes to assess whether they will be effective in improving oversight, both senior management and the Board of Regents’ staff told us that communication between the Regents and senior management has improved. For example, the General Counsel and Chief Financial Officer both told us that they now report directly to the Regents and are available at board meetings to discuss details and answer questions about information they bring to the Regents. Management policies have been strengthened. The Governance Committee found that previous policies regarding expense reimbursement (including travel) and conflicts of interest were not well defined, which contributed to the lack of oversight of certain practices of the former Secretary as well as the failure to actively manage apparent conflicts of interest at the Smithsonian. 
The board has clarified management policies on travel and expense reimbursement, and created new ones, such as prohibiting senior executives from serving on the boards of for-profit companies. We have not independently validated these changes to assess whether they will be effective in improving oversight, and it is not yet clear how these policies will improve the governance of the institution in practice.

A compensation range to guide the search for a new secretary has been established. In response to concerns about the compensation of the former Secretary (which included, among other things, a housing allowance that some Regents were unaware of), the board contracted for a study to identify a compensation range to guide the search for a new secretary, with the goals of making the secretary’s compensation transparent and balancing the Smithsonian’s public trust status with the need to attract the best leader. According to a Board of Regents’ staff member, the study included benchmarking with about 30 comparable organizations. In October, the Board of Regents approved the range recommended by the study to be used in the recruitment process. We have not independently validated this recommended range.

Several recommendations have not yet been fully implemented but are actively being considered and debated. For example, at the direction of the Regents, the Smithsonian is examining the extent to which Smithsonian Business Ventures (SBV), a centralized business entity responsible for the Smithsonian’s various business activities, should follow Smithsonian-wide policies for areas such as contracting and travel. Previously, SBV adopted its own policies and was not subject to all Smithsonian policies. The efforts underway are preliminary and final actions have not yet been taken, but the Regents report them as being on track toward implementation.
In addition, in August 2007, the Acting Secretary established a task force to review the entirety of SBV and make recommendations on its governance, structure, and the role of revenue-generating activities within the Smithsonian. Those recommendations will be presented to the Acting Secretary by the end of the year and to the Regents at their January 2008 meeting. The Board of Regents is continuing to study other important issues related to improving governance at the Smithsonian. In particular, some observers have suggested that the size and composition of the board contributed to the lack of oversight of management practices, and in response, the Board of Regents, with the assistance of outside consultants, is evaluating potential structural changes to the board. The report is due in January 2008. Any changes to the size and composition of the board would require legislative action. Currently, the Board of Regents consists of 17 members—9 citizen Regents, 6 Congressional Regents, and 2 ex-officio Regents (the Vice President and the Chief Justice)—which is the average size for boards of nonprofit organizations. However, much of the work of the Board of Regents is conducted at the committee level, and in the past, not all Regents have served on committees, suggesting that in practice, the “working” size of the board has been somewhat smaller than 17. Nonetheless, based on our review of common nonprofit governance practices, and according to governance experts and others we consulted, there is no “right” number of board members. A board that is small will have fewer members to serve on committees, whereas having too many board members can lead to increased difficulty in making decisions and can stifle the effectiveness of the board.
Determining the appropriate size for a board entails balancing the need for a board that is of a manageable size with such things as ensuring the board has the expertise necessary to achieve its mission and achieving an appropriate diversity of values and perspectives among board members. Beyond the size and structure of the board, several governance experts we interviewed stressed that having board members who actively participate and are engaged is central to good governance, and some nonprofit organizations we met with stated that they focused on changing the governance culture at their organization. For example, representatives from one nonprofit organization we spoke with—which recently had similar issues related to executive compensation and expenses—stated that they have focused on creating a culture of accountability and transparency in the board’s activities. They told us that they did not change the size or structure of the board, but rather clarified roles and responsibilities, improved communication throughout the various divisions of the organization, and took other actions aimed at improving the accountability and transparency of the board. In order to address other governance recommendations, the Board of Regents has planned another longer term study, due in May 2008, aimed at establishing a stronger link between the board and the Smithsonian’s 30 advisory boards. These boards include the Smithsonian National Board as well as advisory boards that focus on individual museums or research centers. According to the Governance Committee, the advisory boards provide a key link between the Regents and the public and a direct connection to the museums. Based on preliminary findings of our ongoing work, the Regents generally have had limited interaction with the advisory boards, although the advisory boards serve important functions in the operations of the individual museums and other facilities across the Smithsonian. 
Preliminary results from our ongoing work indicate that several museum directors are concerned about the Regents’ level of interaction with advisory boards and most museum directors see additional value from having a more direct relationship between the Board of Regents and the various museums, research facilities, and other institutions within the Smithsonian. Governance experts and others we spoke with said that, in general, the board appears to have taken some positive steps toward governance reform. However, according to the literature we reviewed and governance experts we interviewed, success will depend in part on how Regents embrace their new responsibilities and on their level of engagement, as good governance results from a board that consists of active and deeply engaged members. In our ongoing work, we will continue to assess the Board of Regents’ governance changes and how the board is addressing long-term governance challenges facing the Smithsonian. We expect to report on these issues in 2008.

Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time.

For further information on this testimony, please contact Mark L. Goldstein at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony include Brandon Haller, Carol Henn, Jennifer Kim, Margaret McDavid, Susan Michal-Smith, Amanda Miller, Sara Ann Moessbauer, Dave Sausville, Stanley Stenerson, Andrew Von Ah, and Alwynne Wilbur.

We conducted our work for this testimony from October to December 2007 in accordance with generally accepted government auditing standards. Our testimony regarding the Smithsonian’s real property management is based on our past report on the Smithsonian’s facilities, including their condition, security, management, and funding, and information provided by Smithsonian officials on steps taken to develop a funding plan for facilities projects.
Our testimony regarding preliminary results of our ongoing work on the Smithsonian’s governance changes is based on our review of Smithsonian and other documents, and interviews with Smithsonian Regents and officials and others. Specifically, we reviewed laws relating to the Smithsonian, the Independent Review Committee report, and the Smithsonian’s Governance Committee Report, and spoke to Smithsonian Regents and officials on their progress towards implementing governance recommendations. We also interviewed all Smithsonian museum directors. We conducted a literature search to help identify governance experts and organizations that had recently undergone governance reforms. We identified and interviewed ten specialists on nonprofit or museum governance, including academics and representatives of associations dedicated to nonprofit governance. These included four governance or museum experts who had advised or consulted with the Smithsonian during its governance review, as well as six whom we identified through a literature search or who were referred to us by other experts in the field. We also reviewed literature on nonprofit governance to identify common nonprofit governance practices, including literature from organizations such as the American Association of Museums, BoardSource, Council on Foundations, and Independent Sector. In addition, we met with several organizations that had characteristics similar to those of the Smithsonian and that had recently undergone governance reforms. We focused on organizations that had had similar governance problems, conducted a governance review, and changed their practice or structure; organizations that had a structure that consisted of a central or national governing body with multiple programming units; and organizations with similar missions and stewardship challenges.
As of December 5, 2007, we had met with officials from American University, American National Red Cross, Getty Trust, National Trust for Historic Preservation, and United Way of America. In ongoing work, we are continuing to evaluate the Smithsonian Board of Regents’ governance reforms. Our objectives for this ongoing work include assessing (1) how governance changes being made by the Board of Regents address recent governance problems and how changes will be implemented and evaluated, and (2) how the Board of Regents is addressing other long-term governance challenges facing the Smithsonian, such as funding, strategic planning, facilities, collections and museum management, and what, if any, additional oversight activities would be beneficial to the board in achieving its mission. We are also continuing to interview recognized experts in nonprofit governance selected through the process described above to obtain their independent views on the Smithsonian’s governance problems and whether recent governance changes will address those problems; and we are conducting interviews and reviewing documents from organizations selected through the process described above that have recently changed their governance structure and practice. We expect to report on these issues in 2008. According to Smithsonian Institution (Smithsonian) officials, the Smithsonian Board of Regents has largely implemented 12 of the 25 recommendations made by the Board of Regents’ Governance Committee based on its internal study, and has taken steps towards implementing the other recommendations. The implementation of some of these recommendations is under further study by the board. Figure 5 provides a summary of the board’s efforts towards implementing these recommendations, as described by Smithsonian officials. 
The Smithsonian Institution (Smithsonian) Board of Regents has stated that the Governance Committee’s 25 recommendations generally encompass the recommendations made by the Independent Review Committee (IRC) as part of its study to address governance problems at the Smithsonian. As such, the Smithsonian is not tracking its implementation of these recommendations individually, except that it notes which IRC recommendations are relevant to each Governance Committee recommendation. Based on information provided to us by Smithsonian officials, the board has implemented 3 of the IRC’s 12 recommendations, and has taken steps towards implementing the others, with the exception of one recommendation (IRC recommendation number 12) that was not issued directly to the board. Several of these recommendations are being considered as part of ongoing studies undertaken by the board to address the Governance Committee recommendations. Figure 6 provides a summary of the board’s efforts towards implementing the IRC recommendations, as described by Smithsonian officials.
The Smithsonian Institution (Smithsonian) is the world's largest museum complex. Its funding comes from its own private trust fund assets and federal appropriations, with the majority of funds for facilities coming from federal appropriations. In 2005, GAO reported that the Smithsonian's current funding would not be sufficient to cover its estimated $2.3 billion in facilities projects through 2013 and recommended that the Smithsonian Board of Regents, its governing body, develop and implement a funding plan. Recently, problems related to a lack of adequate oversight of executive compensation and other issues have raised concerns about governance at the Smithsonian. This testimony discusses GAO's recently issued work on the Smithsonian's real property management efforts and its efforts to develop and implement strategies to fund its facilities projects. In addition, it describes preliminary results of GAO's ongoing work on the Smithsonian's governance challenges. The work for this testimony is based on GAO's September 2007 report, Smithsonian Institution: Funding Challenges Affect Facilities' Conditions and Security, Endangering Collections, which included recommendations. For ongoing governance work, GAO reviewed Smithsonian documents and interviewed Smithsonian officials, academics, and representatives of nonprofit associations. While the Smithsonian has made some improvements to its real property management, the continued deterioration of many Smithsonian facilities has caused problems, and the Smithsonian's real property management efforts face challenges. The deterioration of facilities has caused access restrictions and threatened collections. In addition, the Smithsonian's estimate for facilities projects increased to $2.5 billion. While the Smithsonian follows key security practices, communication of security information and funding constraints pose challenges. The Smithsonian has made significant strides in improving its real property portfolio management. 
However, the Smithsonian omitted privately funded projects from its capital plan, making it challenging to assess the total funding and scope of projects. GAO's September 2007 report recommended that the Smithsonian increase awareness of security issues and include privately funded projects in its capital plan. The Smithsonian concurred. To address GAO's 2005 recommendation that the Smithsonian develop a funding plan for facilities projects, the Board of Regents created an ad-hoc committee that reviewed nine options and chose to request increased federal funding. Some of the Smithsonian's evaluations of the nine funding options were limited in that they did not always provide complete analysis, fully explain assumptions, benchmark with other organizations, or consider combining options to increase revenue. GAO's September 2007 report recommended that the Smithsonian more comprehensively analyze funding options and report to Congress and the Office of Management and Budget on a funding strategy. The Smithsonian concurred. The Board of Regents recently established a prioritized list of funding options. Preliminary results of GAO's ongoing work on broader governance issues indicate that the Board of Regents has made some changes to strengthen governance, such as more clearly defining the Regents' oversight responsibilities and improving access between the board and key members of senior management. The board is also studying whether changes to its size and composition would strengthen governance. GAO's preliminary work suggests that the Board appears to have taken some positive steps toward governance reform, but that success will depend in part on how Regents embrace their new responsibilities and on their level of engagement.
The Robert T. Stafford Disaster Relief and Emergency Assistance Act (the Stafford Act) grants the President the principal authority to provide assistance in mitigating, responding to, and preparing for disasters and emergencies, such as earthquakes, hurricanes, floods, tornadoes, and terrorist acts. FEMA administers the Stafford Act and provides direct housing assistance (e.g., travel trailers and manufactured homes) under its Individuals and Households Program. FEMA provides these units at no charge to disaster victims who cannot use financial assistance to rent alternate housing because such housing is not available. The Stafford Act limits direct housing assistance to an 18-month period, after which FEMA may charge rents at the fair market rent levels established by the Department of Housing and Urban Development (HUD), but the President can also extend the initial 18-month period because of extraordinary circumstances. According to FEMA guidance, manufactured housing and recreational vehicles are the two most common forms of temporary housing units (see fig. 1). Manufactured housing is factory-built housing designed for long-term residential use. The term “mobile home” is sometimes used to refer to manufactured homes. In addition, this type of housing must be located on sites that are not in a designated floodplain area. Recreational vehicles, which include park model and travel trailers, are designed for short-term use when no other options are available. Following a disaster, the units may be a short-term housing option for households wanting to remain on an existing property or nearby while permanent housing is being restored, but where the terrain or lot size prevents deployment of manufactured housing. A park model, which is generally larger than a travel trailer, is built on a single chassis, mounted on wheels, and has 400 square feet or less of living space. FEMA can place temporary housing units on a private site or in a group site configuration.
Private site: Temporary housing unit is placed on an individual’s private property if the site is feasible and the local authorities approve. The unit can also be placed on individual private property that is not owned by the applicant, if the owner allows FEMA to place the unit at no cost to the agency (see fig. 2).

Group site: Temporary housing unit is placed at a site that FEMA has built to house multiple households. FEMA built these sites in open space locations, including parks, playgrounds, ball fields, and parking lots following Hurricane Katrina (see fig. 3).

FEMA can also place units at a commercial manufactured housing or recreational vehicle park that already has utilities (water, electric, and sewer/septic) for existing lots. The park management must be willing to lease the lots to FEMA at a fair and reasonable cost for the area. According to FEMA, the agency’s policy is to use existing commercial parks whenever possible, rather than to build sites. FEMA placed temporary housing units on private sites for about 115,400 (80 percent) of the households that received direct housing assistance following Hurricanes Katrina and Rita. FEMA placed about 25,000 households that received such assistance in temporary housing units at group sites located across Alabama, Louisiana, Mississippi, and Texas. Figure 4 illustrates the geographic dispersion of these sites. Most of the households that FEMA placed in group sites following Hurricanes Katrina and Rita reported being predisaster renters. Figure 5 shows that about 72 percent of group site households in Louisiana and an even higher percentage of group site households in Mississippi (about 84 percent) reported being predisaster renters. In comparison, renters made up less than one-third of all households in both states prior to the hurricanes. Households living in FEMA group sites encountered a variety of challenges in transitioning to permanent housing.
According to officials we contacted and reports we reviewed, many of the households that lived in group sites following Hurricanes Katrina and Rita had low incomes, were elderly, or had a disability. As a result, these households were likely to experience difficulties in finding and transitioning to permanent housing. FEMA expects disaster victims who receive housing assistance to take an active role in finding housing and rebuilding their lives. Specifically, FEMA requires households receiving this type of assistance to develop within a reasonable amount of time a plan for moving into permanent housing that is similar to their predisaster housing. However, according to some officials we contacted, households living in group sites were not able to plan their recovery and were likely to face difficulties in accessing aid from federal programs—a problem that was exacerbated by the disaster—because these households were the hardest to serve. According to these officials, these households generally required additional services or assistance to support their transition into permanent housing. Specifically, our prior work found that although the majority of heads of households reported being employed when they applied for FEMA assistance, approximately 65 percent reported earning less than $20,000. About one-fifth reported no income and some of these individuals were retired or had disabilities. As shown in figure 6, the reported average income of households on group sites in Louisiana and Mississippi was about $24,000 and $30,000, respectively, or less than one-half of the Louisiana state average and less than two-thirds of the Mississippi state average. According to FEMA, these limited means led to concerns among some households about moving out of the sites and finding housing that they could afford. Furthermore, some of these households could not afford either security deposits for a rental unit or furniture. 
FEMA also said that households facing these challenges may be more reluctant to find and pay for permanent housing. While FEMA does not update demographic data on households on group sites to reflect current employment status or income levels, agency officials stated that those who remained in the sites the longest were the hardest-to-serve people, including the unemployed, elderly, or persons with disabilities. In the following sections, we describe other challenges that households living in group sites likely faced in transitioning to permanent housing. Although these other challenges are not unique to group site households and affected disaster victims in the Gulf Coast region, many of these challenges would likely have a more acute impact on households living in group sites. According to several federal and state officials we contacted and reports we reviewed, one commonly cited challenge displaced households faced was finding affordable rental housing, since rents increased significantly following the storms in certain Gulf Coast metropolitan areas. For example, HUD’s fair market rent for a two-bedroom unit in the New Orleans-Metairie-Kenner metropolitan area increased from $676 to $1,030, or about 52 percent, between fiscal years 2005 and 2009 (see fig. 7). In addition, HUD’s fair market rent for a two-bedroom unit in the Gulfport-Biloxi metropolitan area increased from $592 to $844, or about 43 percent, over the same time period. Figure 7 also shows that the Beaumont-Port Arthur and Mobile metropolitan areas experienced relatively smaller increases in fair market rent between fiscal years 2005 and 2009 (about 22 and 20 percent, respectively).
Rents did not increase as much in Beaumont-Port Arthur as they did in New Orleans-Metairie-Kenner or Gulfport-Biloxi, because relatively high vacancy rates prior to fiscal year 2005 likely softened the effect of the permanent loss of rental units and temporary removal of rental units from the market following Hurricane Rita. In comparison, average rents in cities nationwide increased by about 12 percent from fiscal years 2005 through 2008 (the last year for which data are available), according to the Consumer Price Index. Two key factors that contributed to these higher rents were a decreased supply of affordable rental units and an increased demand for undamaged rental units. Specifically, according to estimates by FEMA, Hurricanes Katrina and Rita caused major or severe damage to 112,000 rental units across the Gulf Coast region. According to HUD, 75 percent of the damaged rental units were occupied by low-income households. An increased demand for rental units also contributed to rent increases. According to The Urban Institute, this demand was driven by construction workers who moved to the area to accelerate recovery and by displaced renters and homeowners who needed temporary rental units in the area while their homes were being repaired. FEMA staff working to assist households living in group sites cited additional difficulties that group site households faced in finding permanent housing following Hurricanes Katrina and Rita. For example, some households reported to FEMA that there was a lack of available affordable rental housing in areas where they wanted to remain, particularly in some small towns. Other households reported to FEMA that while they were able to find rental housing, the units were either not habitable or located in unstable or abandoned neighborhoods.
Also affecting the limited supply of rental housing were the following two factors: the slow pace of rental housing construction under key federal programs and the decision by states to focus the majority of federal funds on repairing homeowner units, rather than rental units. The Low-Income Housing Tax Credit (LIHTC) program provides an incentive for the development of rental housing that is affordable to low-income households and has been a major source of such housing. State housing finance agencies (HFA) must award credits to developers of qualified projects, and developers either use the credits or sell them to investors to raise capital (i.e., equity). The equity raised by the tax credits reduces the need for debt financing, and, as a result, these properties can offer lower, more affordable rents. After the 2005 hurricanes, Congress passed the Gulf Opportunity Zone Act of 2005 (GO Zone), which temporarily increased the amount of allocated tax credits for the five states along the Gulf Coast by a total of about $330 million. We reported in July 2008 that although the Gulf Coast states had awarded nearly all of their GO Zone LIHTCs, few of the units funded by these credits were in service as of April 2008. Since that time, Louisiana and Mississippi, which received the largest amounts of GO Zone authority, have each placed additional units in service. However, neither state had placed more than 35 percent of planned units in service as of December 2008. While LIHTC-funded units are generally required to be placed in service within 2 years of credit allocation, Congress extended this requirement for units funded with GO Zone LIHTCs, which must be placed in service before January 1, 2011. According to HFA officials, the declining market value of tax credits has reduced the amount of equity developers receive from investors for each dollar in tax credit awarded. 
As a result, developers must seek additional funding sources to make up for the equity shortfall, contributing to significant delays in closings, according to state officials. Other issues that have impeded the timely development of LIHTC units include the need to address environmental issues and increases in the total costs to develop projects because of the high costs of labor, materials, insurance, and land. Much of the disaster assistance provided through HUD’s Community Development Block Grant (CDBG) program, which provides flexible relief and recovery grants to devastated communities, was targeted to homeowners, with a small percentage of program funds set aside for owners of rental properties. Between December 2005 and November 2007, Congress appropriated a total of $19.7 billion in disaster CDBG funds to states affected by the 2005 hurricanes, of which not less than $1 billion was designated to repair or replace the affordable rental housing stock, including public and HUD-assisted housing. Local and state officials exercise a great deal of discretion in determining the use of the funds under this program. Three states (Louisiana, Mississippi, and Texas) used most of the CDBG funds to implement homeowner assistance grant programs to help homeowners cover the gap between their available financial resources and the cost to repair and replace their damaged dwellings. For example, as of January 2009, Louisiana had targeted $10.5 billion in CDBG funds (out of the total $13.4 billion) to housing assistance programs, and, of this amount, the state targeted about $8.6 billion, or about 82 percent, to the Road Home Program (the state’s Homeowner Assistance Program). In contrast, the state set aside about $1.3 billion, or 13 percent, of its housing allocation for programs that targeted rental housing.
Furthermore, while about 7 percent of the Homeowner Assistance Program funds remained unexpended as of the beginning of 2009, 80 percent of the funds set aside for rental housing had not been spent. Public housing agencies have faced considerable challenges in obtaining funding for the recovery of public housing units. Public housing is an important source of affordable housing for low-income households in the Gulf Coast region. The Gulf Coast states experienced a decline in the number of available units as a result of the storms, especially in the New Orleans area. Prior to Hurricane Katrina, the Housing Authority of New Orleans managed over 7,000 units of public housing in 10 different developments. Hurricane Katrina damaged about 80 percent of these units (approximately 5,600 units). In the aftermath, HUD officials stated that the department did not have sufficient program funds to repair and rebuild these units, and that the public housing agencies did not have sufficient insurance to cover the costs. A large portion of households that were displaced by the Gulf Coast hurricanes were renters, and given the challenges faced in developing affordable rental housing with federal subsidies, concerns have been raised about differences in the treatment of homeowners and rental property owners. GAO is conducting a separate review to (1) identify the federal assistance for permanent housing that was provided to rental property owners and to homeowners affected by the Gulf Coast hurricanes, (2) examine the extent to which federally funded programs responded to the needs of rental property owners and homeowners, and (3) describe the differences in the challenges faced in utilizing federal assistance for permanent housing and the options to mitigate these challenges. 
According to many officials we contacted, another significant obstacle to building affordable rental housing was opposition to the development of such housing by local communities—a problem typically referred to as “not in my backyard” or “NIMBY.” Opposition by local residents and public officials to specific types of housing in their neighborhood or communities is a long-standing issue in the development of affordable housing. Communities typically resist the development of affordable rental housing because of concerns about potential adverse impact on property values and community characteristics. Such opposition can manifest itself in restrictive land-use and development regulations that add to the cost of housing or discourage the development of affordable housing altogether. During the period after the Gulf Coast hurricanes, some officials we contacted and reports we reviewed explained that local opposition had slowed and, in some instances, stopped the development of affordable rental housing. For example, a nonprofit organization had planned to use LIHTCs to build an apartment complex for low-income elderly households in New Orleans to replace a complex destroyed by the hurricanes. However, according to an official from a New Orleans nonprofit organization, the local government passed a resolution that prohibited LIHTC developments and also engaged in a land-use study at the site of the proposed development that appeared to be timed to terminate the project. A report on the status of Mississippi’s housing recovery efforts since the Gulf Coast hurricanes cited NIMBY as one of the key barriers to addressing the state’s projected shortfall in the number of affordable rental housing units that it had planned to restore under the LIHTC program. On the basis of our discussions with officials and review of reports, we found that disaster victims encountered other challenges in returning to permanent housing, including households living in group sites. 
First, several sources indicated that disaster victims who owned homes faced significant challenges in financing repairs. For example, according to a Department of Homeland Security (DHS) Office of Inspector General (OIG) report, a December 2007 survey of FEMA field staff in Louisiana indicated that homeowners faced financial obstacles, including insufficient insurance coverage and limited Road Home Program funding, in repairing their homes. Similarly, a 2008 study of the post-Katrina housing recovery in Louisiana found that nearly three-fourths of Road Home applicants would still face a gap between their rebuilding resources and the cost to rebuild, leaving them short of the resources needed to repair their dwellings. The DHS OIG report also found that high construction costs, competition for available contractors, and new disaster mitigation requirements compounded these financial problems. According to some sources, the longer time frames and increased construction costs to repair damaged dwellings also impacted landlords, which in turn increased housing costs for renters. A second commonly cited challenge that disaster victims faced in returning to permanent housing was significantly higher insurance premiums. According to a report from the Louisiana Housing Finance Agency, premiums for homeowners insurance escalated to as much as four times their pre-Katrina level for certain areas in Louisiana that were severely impacted by the storm, putting insurance out of reach for most low- and moderate-income households. According to some officials we contacted, some landlords passed the escalating costs of insurance to rental households through increased rents. In addition, some insurance companies suspended sales of new homeowner policies in all or parts of the Gulf Coast region following Hurricanes Katrina and Rita, making it increasingly difficult for households to obtain insurance coverage in these areas. 
Finally, many households faced challenges in finding full-time employment to support a return to permanent housing. Following Hurricane Katrina in late August 2005 and Hurricane Rita in September 2005, unemployment rates increased significantly across the Gulf Coast region. For example, the unemployment rate in the New Orleans-Metairie-Kenner metropolitan area increased from 4.9 percent in August 2005 to 15.2 percent in September 2005, and the unemployment rate remained above pre-Katrina levels until March 2006 (see fig. 8). In the Gulfport-Biloxi metropolitan area, the increase following the storm was even greater: the rate rose from 5.8 percent in August 2005 to 23.2 percent in September 2005. Moreover, the unemployment rate remained above pre-Katrina levels for 1 year following the storm. In 2008, we reported that approximately 21 percent of those households living in group sites reported no source of employment, and that some of those households reported having a disability or being retired. While FEMA did not update data on group site residents to reflect current employment status, some state and FEMA officials we contacted said that those who remained in the sites the longest were those with limited income and limited options for finding stable employment, including the elderly and persons with disabilities. Similarly, according to an April 2007 survey of FEMA group sites in Louisiana, more than two-thirds of the respondents were unemployed, and most of these respondents were not looking for employment. Most of those respondents not looking for employment said they were disabled or had major health limitations. FEMA's overall effectiveness in measuring its performance in closing group sites and transitioning households into permanent housing was limited.
While FEMA made some efforts to measure its progress, its measures did not provide the information on program results that was needed to assess the agency's performance in achieving its goal of "helping individuals and communities affected by federally declared disasters return to normal functioning quickly and efficiently." Under the provisions of the Government Performance and Results Act of 1993 (GPRA), federal agencies are required to measure and report the performance of their programs. GPRA was designed to inform congressional and executive decision making by providing objective information on the relative efficiency and effectiveness of federal programs and spending. We have previously reported that for performance measures to be useful, they should be linked or aligned with program goals, cover the activities that an entity is expected to perform to support the program's purpose, and have a measurable target. These measures can capture several aspects of performance, including activities, outputs, outcomes, and impact (see fig. 9). Our past work has shown that federal agencies face challenges in identifying program goals and performance measures that go beyond summarizing program activities (e.g., the number of clients served) to distinguishing desired outcomes or results (e.g., improving economic self-sufficiency among clients served). As figure 9 shows, having measures that describe outcomes and impact helps describe the extent to which the program is effective in achieving its policy objectives. In the past, we have found that performance measures are an important results-oriented management tool that can enable managers to determine the extent to which desired outcomes are being achieved. Results-oriented measures further ensure that it is not the task itself being evaluated, but progress in achieving the intended outcome. FEMA's performance measures for group sites are output measures that focus on the core program activity of closing group sites.
But the measures do not provide the information on program results that is needed to assess the agency's performance in achieving its goal of "helping individuals and communities affected by federally declared disasters return to normal functioning quickly and efficiently." The Post-Katrina Act required that FEMA develop performance measures to help ensure that it provided timely and efficient housing assistance to individuals and households displaced by Hurricanes Katrina and Rita. In September 2007, FEMA began publicly reporting data on a weekly basis to provide information on the housing assistance that the agency provided, including at group sites. Specifically, FEMA reported aggregate data on the number of households that moved out of travel trailers, park models, or mobile homes and into other types of FEMA housing assistance or that were no longer in FEMA's program. However, these data do not provide information on whether households moved to permanent housing and are not reported by the specific type of site (e.g., group site). FEMA also reported data specific to group sites showing, for example, that FEMA at one point provided temporary housing to 24,960 households at these sites. These measures indicated that as of April 9, 2009, 577 households continued to live in group sites located in Louisiana and Mississippi. These measures describe program outputs—that is, information on the number of sites established, current number of sites, number of households that lived in group sites, and current number of households—but do not provide information on results, such as successfully moving households to permanent housing, or on qualitative factors, such as the timeliness or efficiency of the assistance FEMA provided at group sites.
The difficulties experienced in closing group sites and transitioning households to permanent housing—as we have previously discussed—underscore the need to develop measures that describe how efficiently and effectively the program is addressing its goal of "helping individuals and communities affected by federally declared disasters return to normal functioning quickly and efficiently." For example, one potential measure could capture information on the amount of time households live in group sites before returning to permanent housing, and FEMA could establish a numerical target that facilitates the future assessment of whether its overall goal and objective were achieved. Having such information can help identify potential problems in meeting program goals and could be used to make management decisions about resources needed and steps to be taken. In its annual performance plans, FEMA also reports the percentage of customers that are satisfied with its disaster assistance programs. Although this measure may be a useful overall metric for assessing agency efforts on the quality of assistance provided to program beneficiaries, it is of limited use in assessing the agency's performance in operating group sites because it is not reported separately for assistance provided through group sites. In the absence of other performance indicators that measure efficiency or effectiveness, along with numeric targets, it is not possible to determine whether the disaster assistance programs are achieving the program goal of "helping individuals and communities affected by federally declared disasters return to normal functioning quickly and efficiently." According to FEMA officials, the agency has not developed results-oriented performance measures, in part, because of the unique and unpredictable circumstances of each disaster.
We recognize that the circumstances can vary significantly from one disaster to another, and that FEMA generally provides housing assistance in group sites as a last resort and following catastrophic disasters, such as Hurricanes Katrina and Rita. Nevertheless, FEMA could leverage its experiences and lessons learned from its responses to past major disasters to identify potential measures of the agency's performance in closing group sites. Such measures could be modified as needed to reflect actual conditions and types of assistance deployed. In fact, FEMA has designed performance measures for other types of assistance that may vary from one disaster to another. Specifically, according to FEMA officials, the agency has developed some potential outcome measures for other activities (such as case management services). For example, FEMA reports on the number of households that have achieved their recovery plans and, therefore, no longer need case management. FEMA officials also told us that they recognized the importance of results-based measures and would like to develop them for measuring housing assistance provided at group sites. Furthermore, the National Disaster Housing Strategy recognizes that it is important to develop performance measures to achieve the agency's national goals, and that feedback on performance will enable those involved in the national effort to assess progress, adopt best practices, and make course corrections. Nonetheless, FEMA has yet to specify whether and when it will develop outcome measures for group site assistance. Without performance measures that reflect program results and that are clearly linked to the agency's goals, FEMA cannot demonstrate program results and progress in achieving intended policy objectives. Although not all disasters may require the use of group sites, future major disasters that involve protracted recovery efforts may have to rely on such sites to provide temporary housing.
As the experience from the 2005 hurricanes shows, there will be a strong demand for results-oriented measures on the part of Congress in fulfilling its oversight responsibilities and holding FEMA accountable for its performance. The Post-Katrina Act was enacted to address various shortcomings identified in the preparation for and response to Hurricane Katrina. Among other things, the Post-Katrina Act required the FEMA Administrator, in coordination with specified federal and nonfederal government agencies—including the American Red Cross, HUD, the National Advisory Council, and the National Council on Disability—to develop, coordinate, and maintain a national disaster housing strategy to help plan and protect the nation against future catastrophes. Specifically, FEMA was to outline the most efficient and cost-effective federal programs that will best meet the short- and long-term housing needs of individuals and households affected by a major disaster and describe plans for the operation of group sites provided to individuals and households. FEMA was to provide the strategy to Congress by July 1, 2007. On July 21, 2008, FEMA released a draft strategy, with a 60-day comment period. However, the draft strategy did not include seven annexes that were to describe, among other things, the agency's plans for operating group sites. Instead, the draft included seven blank pages of annexes marked "Under Development." On January 16, 2009, FEMA released the final version of the National Disaster Housing Strategy, with annexes attached containing the information that had been omitted from the draft strategy. The strategy states that it serves two purposes—to describe how the nation currently provides housing to those affected by disasters and, more importantly, to chart a new direction that disaster housing efforts must take to better meet the emerging needs of disaster victims and communities.
The strategy includes a discussion of key principles, roles and responsibilities, current practices, and future directions for the three phases of disaster housing (sheltering, interim housing, and permanent housing). As we have previously mentioned, the Post-Katrina Act mandated that FEMA develop a disaster housing strategy, including plans for operating group sites. In earlier work, we identified certain key characteristics of effective national strategies and plans. For example, in 2007, we assessed the federal government's preparedness to lead a response to an influenza pandemic and reported that effective national strategies and plans should contain certain key characteristics. Among these are the agencies responsible for implementing the strategy or plan, the roles of the lead and supporting agencies, and mechanisms for coordination among the agencies; the types of resources required—funding, staffing, and training—to effectively implement the strategy or plan and the means of acquiring these resources; and the constraints and challenges involved in implementing the strategy or plan. The Disaster Housing Community Site Operations Annex, which is one of seven attachments of the National Disaster Housing Strategy, states that FEMA is responsible for closing group sites and assisting households in transitioning to permanent housing, but it does not fully address these key characteristics of an effective national strategy. We previously reported that a national strategy should address which organizations would implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts. The strategy should answer the fundamental questions about who is in charge, not only during times of crisis, but also during all phases of emergency management, and identify the organizations that will provide the overall framework for accountability and oversight.
This characteristic entails identifying the specific federal agencies and offices involved and, where appropriate, the different sectors, such as state, local, and private. The National Disaster Housing Strategy’s Disaster Housing Community Site Operations Annex, which discusses the issue of closing group sites, partially addresses this characteristic. The annex contains information on FEMA’s roles and responsibilities for closing group sites and assisting households in transitioning to permanent housing. Specifically, it states that FEMA will assist with finding and matching rental resources to households living at these sites that were renting homes or apartments before the event and track the progress of repairs to damaged or destroyed homes owned by affected households. The annex also states that FEMA will provide access to local, state, and federal agencies that could help affected households with their unmet needs. However, the annex does not explain how other federal or state agencies will be involved in completing the tasks associated with transitioning a group site household to permanent housing and what mechanisms will be used to coordinate with these agencies in ensuring that victims can find a permanent housing unit. Furthermore, the annex does not reflect some of the experience that FEMA gained in responding to Hurricanes Katrina and Rita regarding coordinating with other agencies. For example, in response to widespread concerns about both the long periods that displaced households were living in group sites and the health issues associated with the trailers on those sites, FEMA developed the 2007 FEMA Gulf Coast Recovery Office Housing Action Plan, which states that the agency would work with HUD to identify households that were receiving HUD assistance prior to the 2005 hurricanes. 
The plan also states that FEMA would transition the remaining households living in group sites into HUD’s Disaster Housing Assistance Program (DHAP), which is a pilot federal housing assistance grant program that provides temporary rental assistance through local public housing agencies that are experienced in administering other federal housing assistance. According to the National Disaster Housing Strategy, HUD’s and FEMA’s experience with DHAP demonstrates that rental assistance administered through HUD’s existing network of public housing agencies is an effective way to meet the long-term housing needs of displaced families following a disaster. Nonetheless, the National Disaster Housing Strategy does not specify HUD’s role in transitioning households out of group sites and into permanent housing. An effective national strategy should identify and describe the sources and types of resources required, such as funding, staff, and training, to effectively implement the strategy. Guidance on the costs and resources needed helps implementing parties allocate resources according to priorities, track cost, and shift resources, as appropriate, among other competing demands. Furthermore, the National Disaster Housing Strategy itself states that effective strategies identify the means or resources to achieve the strategies’ goals. However, we found that neither the strategy itself nor the Disaster Housing Community Site Operations Annex contained these elements. Specifically, the documents do not address the cost of helping households transition to permanent housing, the staffing resources that would be needed to complete this task, the type of training that should be provided to staff assigned to this task, and the sources (e.g., HUD; FEMA; or other federal, state, local, or private agencies) of the resources necessary to achieving FEMA’s goal of closing group sites and transitioning households into permanent housing. 
Again, the annex does not reflect some of the experience that FEMA gained in responding to Hurricanes Katrina and Rita. For example, in response to these hurricanes, FEMA's Mississippi and Louisiana Transitional Recovery Offices developed housing plans that discussed some of the resources needed to assist households with transitioning out of group sites and into permanent housing. The staffing strategy in the Louisiana Transitional Recovery Office's housing plan was designed to create a more effective labor force and labor mix to meet the specific needs of the disasters, including mobilizing more experienced individuals with targeted functional skill sets. Similarly, the Mississippi Transitional Recovery Office's housing plan provides information on the number of staff available to help households transition to permanent housing and states that no additional staff will be needed to complete this task. Furthermore, both of these plans emphasize the importance of providing training to their staffs to successfully assist affected households in transitioning to permanent housing. In contrast, the National Disaster Housing Strategy does not identify and describe the resources needed, including staffing and training, to effectively transition group site households into permanent housing. Finally, an effective strategy should reflect a clear description and understanding of the problems to be addressed, their causes, and the operating environment. A disaster housing strategy should discuss the constraints and challenges involved in closing group sites in the aftermath of a catastrophic incident, such as potential shortages in available permanent housing, and anticipate solutions to these challenges. However, the National Disaster Housing Strategy does not describe or anticipate challenges associated with helping people find permanent housing after a catastrophic event. In the past, FEMA has recognized the need to do so in order to help households move out of group sites.
For example, FEMA’s November 2007 Gulf Coast Recovery Office Housing Action Plan described the specific challenges involved in closing the sites that were established after Hurricanes Katrina and Rita and the mechanisms available to address these challenges. For example, the plan states that households that have been living in group sites would be reluctant to move to unfurnished rental units, and that FEMA was to work with voluntary or other governmental agencies to provide furniture to the households. According to FEMA officials, the annex and strategy did not include the characteristics that we have previously discussed because these documents were meant to provide an overarching framework of FEMA’s process. Furthermore, officials said that it was difficult to outline the specific resources needed and the particular challenges FEMA could face in closing group sites and assisting households with the transition into permanent housing, mainly because each disaster presents unique needs and challenges. We previously identified the need for documents supporting a key strategy or plan, such as an annex, to contain detailed and robust information on how these plans are going to be implemented. For example, in February 2006, we reported that although the National Response Plan—which was revised in March 2008 and is now known as the National Response Framework—envisions a proactive national response in the event of a catastrophe, the nation did not yet have the types of detailed plans needed to better delineate capabilities that might be required and how such assistance will be provided and coordinated. We agree that no national strategy can anticipate and specify the precise resources and responsibilities appropriate for every circumstance. Nonetheless, this does not preclude FEMA from identifying the range of resources and responses appropriate for most circumstances. 
FEMA could leverage its experiences and lessons learned from responses to past major disasters in order to anticipate the types of challenges that could arise and the resources needed to address them. In 2007, we reported that the resources of certain federal agencies were not fully addressed in the National Response Plan, and that this hampered the ability of FEMA to provide leadership in coordinating and integrating overall federal efforts associated with housing assistance. The absence of detailed information in the housing strategy and its Disaster Housing Community Site Operations Annex on the partnerships that FEMA needs to form, the resources it needs, and the mechanisms that FEMA is to use to address the challenges specific to a catastrophic disaster when closing group sites and transitioning households to permanent housing can lead to delays in helping disaster victims return to more stable and conventional living arrangements. Lack of such plans may have contributed to the fact that more than 3 years after Hurricanes Katrina and Rita, 348 households continued to live in group sites as of June 18, 2009. Although several temporary housing options could offer alternatives to travel trailers, FEMA’s National Disaster Housing Strategy does not identify alternatives to travel trailers or provide clear guidance on what other temporary housing options are available to states. In our discussions with officials and reports we reviewed, we identified various alternatives to travel trailers in group sites, many of which are already authorized under the emergency and temporary housing provisions of the Stafford Act that FEMA has used in recent disasters, including Hurricanes Katrina and Rita. 
FEMA’s National Disaster Housing Strategy does not assess alternatives to travel trailers because evaluations are ongoing, nor does it provide clear guidance on what other temporary housing options states should use instead of travel trailers while FEMA completes these assessments. Such assessments could be useful to states that are responsible for identifying and selecting temporary housing options after a major disaster. Alternatives to the use of travel trailers can be grouped into three broad categories of options, including (1) utilizing existing available housing, (2) repairing damaged rental housing, and (3) providing direct housing. Current FEMA programs utilize existing available housing through emergency and financial assistance under sections 403 and 408 of the Stafford Act. Under section 403, FEMA provides direct grants to state and local governments, which use the grants to provide emergency shelter to households displaced from their residences following major disasters. Emergency shelters can include hotels and apartment rentals. The Stafford Act does not impose specific time limits on section 403 assistance, and FEMA’s regulations generally restrict the amount of time to a maximum of 6 months. Although the purposes of emergency sheltering and temporary housing are different, according to several sources, when the availability of temporary housing options is limited, allowing households to remain in emergency shelters until they can move to more suitable temporary or permanent housing options may be preferable. Under section 408, FEMA has the authority to provide assistance for households to rent an apartment or other housing accommodations. Such assistance is also being provided through a pilot program modeled after HUD’s Housing Choice Voucher program, a rental subsidy program that serves more than 2 million low-income, elderly, and disabled households nationwide and is administered by local public housing agencies. 
In the summer of 2007, FEMA and HUD entered into an interagency agreement to pilot a federal housing assistance grant program, DHAP, to temporarily extend rental assistance for victims displaced by Hurricanes Katrina and Rita. The program is funded by FEMA, but is administered by selected public housing agencies that are currently administering a HUD-funded housing choice voucher program. In the fall of 2008, FEMA deployed a modified DHAP following Hurricanes Ike and Gustav. While DHAP is a pilot program, in the National Disaster Housing Strategy, FEMA recommended that Congress give HUD legislative authority to create a permanent DHAP-like program. According to the strategy, HUD’s and FEMA’s experience with the DHAP pilot demonstrated that rental assistance administered through HUD’s existing network of local public housing agencies is an effective way to meet the long-term housing needs of displaced families following a disaster. Citing HUD’s experience with rental assistance programs, some of the officials we contacted and reports we reviewed have found that temporary rental housing assistance should be modeled after HUD’s Housing Choice Voucher program. In particular, several of these sources noted HUD’s experience with its voucher program in responding to disaster victims displaced by the 1996 Northridge Earthquake in Los Angeles, California. Vouchers allowed households displaced by this disaster to live in existing rental apartments of their choice. One report cited that if this specific temporary housing option had been deployed after the 2005 Gulf Coast hurricanes, FEMA could have deployed fewer travel trailers. The choice and mobility that the housing voucher program has to offer to disaster victims and the help that the victims receive in locating rental housing were the reasons generally cited by the sources for using this type of program for providing temporary housing after a major disaster. 
However, this option is not currently authorized under the Stafford Act provisions. Because of the limited number of rental units available following a major disaster and the amount of time required to construct new rental housing, a vital component of quickly bringing disaster victims back to the area is to repair damaged rental properties. Helping rental property owners quickly make repairs to existing properties could increase the number of available rental units. In past disasters, FEMA has been reluctant to be directly involved in the rapid repair of damaged rental housing, partly because the agency does not view housing construction as part of its core mission. However, the extent of destruction to the housing stock following the Katrina and Rita disasters highlighted the need to increase the availability of rental housing. As a result, the Post-Katrina Act established a pilot program authorizing FEMA to repair rental housing located in areas covered by a major disaster. The rental pilot, known as the Individuals and Households Pilot Program, permits FEMA to enter into lease agreements with owners of multifamily rental properties and to repair damaged properties to meet federal housing quality standards. The repaired apartments are to be rented to displaced households for at least 18 months (or longer, if necessary). In response to the Midwest floods and Hurricane Ike, in September and December 2008, FEMA implemented pilots in Iowa and Texas, respectively. Specifically, FEMA selected an apartment property in Cedar Rapids, Iowa, where it funded the repair of seven two-bedroom units, and funded the repair of 32 units in Galveston, Texas. FEMA's authority for the pilot program expired at the end of 2008. In accordance with the act, FEMA was to evaluate the effectiveness of the program and report its findings to Congress at the end of March 2009, including any recommendations to continue the pilot program or to make the program a permanent housing option.
In May 2009, FEMA issued a report on the pilot program, which stated that additional analysis and recommendations on whether to make the program permanent would be provided at a later date. Some officials we contacted and reports we reviewed mentioned that the federal government needs to do more to rapidly repair existing rental housing damaged during a major disaster to increase the rental stock available to disaster victims in the immediate area. An official from a nonprofit organization we contacted viewed the rapid repair of damaged rental units as an effective way to help households transition back to permanent housing more quickly, potentially reducing the need for longer stays in temporary housing options, such as travel trailers in group sites, which are not meant to be a long-term option. When rental housing is unavailable, FEMA has traditionally provided direct housing assistance to households displaced by major disasters, as it did after Hurricanes Katrina and Rita. Such assistance has included trailers and manufactured housing units that can be placed on homeowners’ property or on group sites. Travel trailers had been an important means of providing temporary housing after major disasters because the magnitude of these events limits the effectiveness of other options. FEMA can provide such assistance under section 408 of the Stafford Act and may also provide housing units owned or subsidized by other federal agencies, such as HUD and the Department of Veterans Affairs (VA), through agreements with these agencies. Travel trailers as direct housing assistance have been a standard part of FEMA’s recovery operations in disasters prior to the 2005 hurricanes and were intended for short-term use, but safety concerns involving the travel trailers used after the 2005 disasters led FEMA to change its policy. 
The agency’s 2008 disaster housing plan and the National Disaster Housing Strategy indicate that FEMA will no longer use group sites for the placement of travel trailers. Under current policies, FEMA will authorize the use of travel trailers only upon the request of the affected state when no other form of temporary housing is available. FEMA will also impose other restrictions on travel trailers, including that they be used only on private sites for no longer than 6 months and only after the state has determined that the trailers meet acceptable formaldehyde levels. In 2008, FEMA developed new performance specification requirements for all future temporary housing units purchased, including travel trailers, to eliminate the use of materials that emit formaldehyde. Finally, FEMA will continue to authorize group sites as a last resort for the placement of manufactured housing units. Although FEMA’s policy restricts trailers on group sites, several sources agreed that FEMA should use travel trailers or trailers on group sites as a last resort and only for a short period of time. Lots where these sites are located should be small and close to the displaced victims’ communities, with access to needed services. Utilizing government-owned or subsidized housing following a major disaster is another possible alternative, but this form of assistance tends to play a supportive role to other temporary housing options, since the number of units that could be utilized in a disaster tends to be relatively small. Under the Stafford Act, FEMA will enter into an agreement with other federal agencies, such as the U.S. Department of Agriculture, HUD, and VA, that own or subsidize property that could be used to provide temporary housing to disaster victims. For example, in response to Hurricane Katrina, about 10,000 federally owned or subsidized units were used to house disaster victims, including 5,600 HUD-owned single-family properties. 
According to FEMA, it encountered difficulties verifying that housing units offered by support agencies after Hurricane Katrina were indeed available for disaster victims. The National Disaster Housing Strategy indicates that since Katrina, the federal government has made some progress in cataloging available housing inventory through a number of online databases, potentially making it easier for FEMA to identify available units following a disaster. Temporary housing options involve trade-offs that policymakers should consider in providing temporary housing assistance. The limitations involved in these trade-offs are magnified during a major disaster—for example, when much of the existing housing stock is severely damaged or destroyed and recovery efforts take years to complete. FEMA’s National Disaster Housing Strategy points to several key factors that should be considered when assessing the relative efficiency and effectiveness of temporary housing options, such as total cost and deployment time. We identified three key factors that we used to assess how trailers in group sites compared with possible alternative temporary housing options: cost, availability, and suitability. Cost involves the total cost to the government for purchasing, installing, maintaining, and (if applicable) deactivating the housing unit over the period of use. Based on information presented in a 2008 DHS OIG report, the average unit cost for trailers in group sites ranged from about $75,000 to $84,000, depending on whether FEMA purchases units that have to be manufactured or units that already exist. Based on reports we reviewed, utilizing existing rental housing is generally considered to be a cost- effective approach for providing housing assistance, and, according to FEMA, it is less costly when compared with trailers in group sites. 
The principal cost to the government of existing housing is the monthly rents, which, under the section 408 program, are based on the fair market rent levels established by HUD. According to several sources, when compared with trailers, repairing damaged housing could cost less, and furthermore the benefits of repairs would be realized over a longer period of time. In a May 2009 report, FEMA estimated that completing rapid repairs and making monthly operating payments to two sites in Iowa and Texas were substantially less expensive than deploying and operating manufactured units over a similar period of time. Determining whether temporary housing options are available after a disaster occurs is a key consideration in assessing the viability of the options. Although utilizing existing housing is generally FEMA’s preferred way of providing temporary housing after a major disaster, there may not be sufficient housing available in the affected area to house displaced victims. At the same time, although disaster victims could be relocated to areas outside of the disaster area, FEMA officials said that victims generally prefer to remain near the affected area. Another obstacle affecting the availability of existing housing is the willingness of landlords to participate in the program. No information is available on the time required to repair damaged housing, and the current pilot program is not permanently authorized and may not be available in future disasters. If authorized, rental repair programs could potentially be deployed quickly, provided that funding was available and property owners were willing to participate. As we have previously stated, FEMA will no longer place travel trailers on group sites following a major disaster. However, the extent to which FEMA will still use travel trailers in other sites and the availability of trailers is unclear. 
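The cost trade-off discussed above can be illustrated with rough arithmetic. The sketch below uses the trailer cost range from the 2008 DHS OIG report cited earlier; the monthly fair market rent figure is a hypothetical placeholder for illustration only, since actual FMRs are set by HUD and vary by metropolitan area and unit size.

```python
# Back-of-the-envelope comparison of per-household costs for two temporary
# housing options over an 18-month assistance period.
# Trailer figures: average total unit cost (purchase, installation,
# maintenance, deactivation) per the 2008 DHS OIG report cited above.
# The monthly FMR below is a hypothetical placeholder, not a HUD figure.

MONTHS_OF_ASSISTANCE = 18

trailer_cost_low = 75_000
trailer_cost_high = 84_000

assumed_monthly_fmr = 900  # hypothetical two-bedroom FMR

rental_cost = assumed_monthly_fmr * MONTHS_OF_ASSISTANCE

print(f"Rental assistance, 18 months: ${rental_cost:,}")
print(f"Trailer in group site:        ${trailer_cost_low:,}-${trailer_cost_high:,}")
print(f"Rental cost as share of low trailer estimate: "
      f"{rental_cost / trailer_cost_low:.0%}")
```

Under these assumed figures, 18 months of rental assistance costs a fraction of even the low end of the trailer estimate, consistent with FEMA's characterization of existing rental housing as the less costly option.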
Specifically, while the strategy and FEMA policy state that trailers will be used as a last resort when other temporary housing options are unavailable, a recent report by the Senate Ad Hoc Subcommittee on Disaster Recovery included an acknowledgment by FEMA officials that the agency will continue to use trailers in large numbers in responding to temporary housing needs following a catastrophic disaster. One FEMA official also acknowledged that the agency did not currently have sufficient housing resources to meet the demands of a large-scale event. Although FEMA awarded four contracts in April 2009 for the manufacture of low-emission travel trailers, the number of units contracted may not be sufficient to address housing needs after a major disaster, based on the number of units that were required in the Gulf Coast after the 2005 hurricanes. Temporary housing options must also meet the needs of affected households, including proximity to work and access to health and social services. Existing housing generally provides the households with a choice of housing units that meet their needs and generally allows for longer stays. Furthermore, as it does with the DHAP program, FEMA could use existing administrative networks (such as public housing agencies) to help find suitable housing. When sufficient existing housing is not available, rapid repair of damaged rental housing offers some of the same advantages of using existing housing, including the possibility of longer stays. In terms of suitability, trailers in group sites are the least-preferred option. Concerns about trailers in group sites after the 2005 hurricanes often focused on the long-term use of this option in sites that were isolated and lacked access to needed services. 
Although FEMA plans not to use trailers in group sites, several sources stated that these trailers are most suitable when they are used for a short period of time in proximity to the victims’ communities, allow for access to needed services, and do not pose health and safety risks to the occupants. While the temporary housing options discussed in this report can serve as possible alternatives to travel trailers in group sites, several of the officials we contacted and reports we reviewed agreed that no single alternative was best suited to providing temporary housing after a major disaster. According to some of these sources, officials should consider a mix of housing options that are determined to be most efficient, effective, and specific to the circumstances of the disaster. FEMA’s National Disaster Housing Strategy does not assess alternatives to trailers because evaluations are ongoing, nor does it provide clear guidance on what other temporary housing options states should use instead of trailers while FEMA completes these assessments. Such assessments could be useful to states that are responsible for identifying and selecting temporary housing options after a major disaster. In accordance with the Post-Katrina Act and as part of the strategy, FEMA was to identify the most efficient and cost-effective federal programs for meeting the short- and long-term housing needs of households affected by a major disaster. 
In describing these programs in the strategy, FEMA
identified currently available options for providing temporary housing after a major disaster under the housing assistance provision of FEMA’s section 408 program, such as rental assistance to disaster victims in existing privately owned rental properties and temporary housing units, such as mobile homes;
described a number of factors that were relevant in selecting and deploying temporary housing options, including relative costs, implementation time, and program funding levels; and
provided a broad framework of how states were to consider these factors in selecting specific temporary housing options—for example, FEMA characterized the section 408 rental assistance provision as more efficient as long as rental housing was available and the direct assistance provision as less efficient due to the time needed to activate units, such as mobile homes.
The strategy describes ongoing initiatives that FEMA has undertaken since Hurricanes Katrina and Rita to identify alternative forms of temporary housing. These initiatives include the Alternative Housing Pilot Program (AHPP), which was created in 2006 to identify, implement, and evaluate disaster housing alternatives to travel trailers. According to FEMA officials, the evaluation process will continue through 2011, at which time FEMA will issue a final report to Congress. FEMA also established in 2006 the Joint Housing Solutions Group (JHSG) to identify, among other things, viable alternatives to travel trailers and manufactured homes by working with manufacturers of these units. FEMA has not established an estimated completion date for this effort. The strategy is unclear regarding when travel trailers could be used following a major disaster or what other temporary housing options states should use instead of trailers while FEMA completes its assessments. 
Specifically, the strategy indicates that travel trailers will continue to be used as a last resort; however, it does not describe the specific conditions under which trailers would be a viable option or those situations in which trailers should not be used. In addition, the strategy does not recommend an option (or options) that would replace trailers and would be deployable on the scale needed to respond to a major disaster while it considers alternatives to trailers. In its March 2008 report, DHS OIG also raised concerns about how FEMA plans to temporarily house disaster victims for future catastrophic events. According to the OIG, FEMA needs to develop and test new and innovative catastrophic disaster housing plans to deal with the large-scale displacement of households for extended periods of time. In addition, in its February 2009 report on the federal government’s disaster housing response after Hurricanes Katrina and Rita, the Senate Ad Hoc Subcommittee on Disaster Recovery concluded that FEMA has not planned sufficiently to replace travel trailers. According to the report, FEMA does not offer a substitute for the mass use of trailers when other forms of temporary housing are unavailable, as can happen after major disasters. Not only did the January 2009 strategy not specify what other temporary housing options states should use instead of trailers, prior FEMA guidance also did not communicate clearly to states and others on the use of trailers in future disasters. Since Hurricanes Katrina and Rita, FEMA’s policies have been inconsistent regarding the use of travel trailers. For example, FEMA issued interim guidance in July 2007 that temporarily suspended the use of travel trailers while the agency worked with health and environmental experts to assess air quality and health-related concerns. On the basis of the preliminary results of this assessment, FEMA’s revised guidance in March 2008 stated that “it will not deploy travel trailers” as a temporary housing option. 
A month later, FEMA’s Administrator told Congress that the agency was never going to use travel trailers again, yet 2 months later FEMA changed its policy to allow limited use of travel trailers. According to that guidance, issued in June 2008, trailers would remain an option upon a state’s request in extraordinary disaster conditions when no other form of temporary housing is available. The guidance also indicated that FEMA would no longer enter into contracts for the manufacture of travel trailers. However, FEMA awarded four contracts in April 2009 for the manufacture of low-emission travel trailers. Given all of the changes in guidance on the use of trailers since the Gulf Coast hurricanes, FEMA did not ensure that the strategy clarified its policies and provided sufficient details so that states understand the extent to which trailers (as well as other options) are available and practicable for future disasters. Officials from Texas and Louisiana with whom we spoke also agreed that the strategy did not clearly describe the circumstances under which temporary housing options could be used in responding to the needs of disaster victims and did not identify alternatives for options that could not be used. Louisiana officials, for example, told us that the strategy provided a good overview of the categories of assistance available to states following a major disaster. However, these descriptions lacked information on the specific situation or circumstance that would “trigger” when a particular option could be used, according to the officials. Furthermore, the officials noted that the reference in the strategy to the options being currently available to meet the needs of disaster victims was misleading for some of the options described. 
In particular, the officials did not believe that the use of innovative forms of temporary housing should have been included as a current practice for housing disaster victims following major disasters because new options, including alternative housing units, had not been used in previous disasters.
FEMA began reporting basic performance measures about closing group sites in the Gulf Coast region after the 2005 hurricanes, but these measures did not provide information on the effectiveness of the program in meeting its goals. As we have previously reported, it is important for federal agencies to identify performance measures that go beyond summarizing program activities. We have found that performance measures focused on results are most effective in assessing the achievement of policy objectives. FEMA officials agree that developing measures that focus on results is critical, and, with the establishment of the National Disaster Housing Strategy, FEMA will have an opportunity to develop such measures consistent with the strategy in future disasters. We recognize that each disaster presents its own unique set of challenges, but FEMA can leverage its experiences and lessons learned from its responses to past major disasters to identify a range of potential measures of the agency’s performance in closing group sites and assisting households with transitioning to permanent housing. Furthermore, the agency can modify such measures as needed to reflect the realities of future disasters. Having results-oriented measures, such as the amount of time that households live in group sites before returning to permanent housing, and developing numerical targets can help identify potential problems in meeting program goals and could be used to make decisions about resources needed and actions to be taken. Without measures that reflect program results and clearly link to the agency’s goals, FEMA will not be able to demonstrate program results and progress in achieving its intended objectives. 
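A results-oriented measure of the kind described above could be computed along the following lines. This is a hedged sketch only: the move-in and move-out dates and the numerical target are hypothetical illustrations, not FEMA data or policy.

```python
# Sketch of a results-oriented measure: days each household spent in a
# group site before moving to permanent housing, summarized against a
# numerical target. All dates and the target are hypothetical.
from datetime import date
from statistics import median

TARGET_MEDIAN_DAYS = 365  # hypothetical program target

stays = [  # (move-in date, move-out date) per household
    (date(2005, 10, 1), date(2006, 9, 15)),
    (date(2005, 11, 20), date(2007, 3, 2)),
    (date(2006, 1, 5), date(2006, 12, 20)),
]

days_in_site = [(move_out - move_in).days for move_in, move_out in stays]
med = median(days_in_site)

print(f"Median days in group site: {med}")
print("Target met" if med <= TARGET_MEDIAN_DAYS else "Target missed")
```

Unlike a simple count of open sites, a duration measure of this kind ties directly to the program's goal of returning households to permanent housing quickly.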
The completion of the National Disaster Housing Strategy and the Disaster Housing Community Site Operations Annex is an important step in the agency’s efforts to more clearly describe its roles and responsibilities for closing group sites and assisting households with the transition into permanent housing. However, these documents lack several key characteristics of an effective strategy and plan. As a result, their usefulness as a management tool for ensuring that FEMA meets its goal of helping households find safe and suitable permanent housing after a disaster is limited. For example, because the strategy and the annex do not address the roles and responsibilities of other federal and state agencies in closing group sites and transitioning households into permanent housing, stakeholders and the public may not have a full understanding of their roles and responsibilities. Furthermore, because these documents did not address the resources needed to assist households living in group sites with transitioning into permanent housing, it is unclear what resources are needed to build capacity and whether they would be available. Finally, because these documents did not describe or anticipate challenges associated with helping people find permanent housing after a catastrophic event, delays could occur in helping disaster victims return to more stable and conventional living arrangements. Opportunities exist to improve the usefulness of these documents, especially the annex, because FEMA views them as evolving documents that are to be updated on a regular basis to reflect ongoing policy decisions. Historically, FEMA has relied on travel trailers to provide temporary housing to displaced households, especially after a major disaster when other temporary housing options (such as existing rental housing) are not sufficient. The use of these trailers received significant criticism after the 2005 hurricanes because of safety and health issues, as well as concerns about their suitability for long-term use. 
While FEMA has changed its policy, it has made little progress in issuing clear and consistent guidance on when travel trailers should be deployed following major disasters. Furthermore, while FEMA has initiated various assessments to identify potential temporary housing options that retain many of the conveniences of trailers but are safer and more suitable for occupants, the lack of specific information on interim alternatives to travel trailers will impede decision making by the states and place disaster victims at risk of not receiving temporary housing assistance as quickly as possible following a major disaster. To ensure that Congress and others have accurate information about the performance of the Federal Emergency Management Agency’s direct housing assistance in group sites, we are making three recommendations to the Secretary of the Department of Homeland Security to direct FEMA to develop performance measures and targets that the agency will use for reporting on the results of closing group sites and assisting households with transitioning to permanent housing, and to ensure that these measures are clearly linked with FEMA’s goals for disaster assistance. In addition, because of the multiple agencies with which FEMA must coordinate in delivering temporary housing assistance, we recommend that the Secretary of Homeland Security direct FEMA to take the following actions: Update its planning documents (e.g., the Disaster Housing Community Site Operations Annex of the National Disaster Housing Strategy) to describe how it will work with other agencies in closing group sites and transitioning households into permanent housing, what resources it needs to perform these activities, and how it will deal with specific challenges of a major disaster, such as potential shortages in available permanent housing. 
Describe clearly in its guidance to states how trailers or other options identified by the states can be deployed when other preferred housing options, such as existing rental housing, are not sufficient after a major disaster. We provided a draft of this report to the Department of Homeland Security’s Federal Emergency Management Agency for its review and comment. We received written comments from the Secretary of the Department of Homeland Security, which are reprinted in appendix II. The agency also provided a technical comment, which we incorporated into the report. FEMA generally agreed with our recommendations and is planning to take steps to address them. Specifically, FEMA intends to work through the National Disaster Housing Task Force to establish standard performance measures and reporting methods for all aspects of its direct assistance program, including group sites. FEMA also intends to work through the task force to address interagency operational issues. Although FEMA indicated that the strategy, including its annexes, will be updated as needed, it did not specifically discuss (1) whether these particular or other planning documents will describe how FEMA will work with other agencies in closing group sites and transitioning households into permanent housing; (2) what resources it needs to perform these activities; and (3) how it will deal with specific challenges of a major disaster, such as potential shortages in available permanent housing. We continue to believe that FEMA should update its planning documents to include these key characteristics of effective strategies and plans. Finally, FEMA said that the agency has been working to develop guidance for Joint Field Offices and the states on formally requesting and approving the use of temporary housing assistance programs following a disaster, including direct assistance. 
According to FEMA, the agency intends to clearly describe this process in the National Disaster Housing Concept of Operations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this report were to examine (1) challenges that households living in group sites faced in transitioning to permanent housing; (2) the extent to which the Federal Emergency Management Agency (FEMA) effectively measured its performance in closing group sites and assisting households with transitioning into permanent housing; (3) the National Disaster Housing Strategy’s effectiveness in defining FEMA’s roles and responsibilities for closing group sites and assisting households with transitioning to permanent housing; and (4) the alternatives to travel trailers in group sites when providing temporary housing after major disasters, how they compare with respect to identified policy factors, and how well FEMA’s National Disaster Housing Strategy assessed these alternatives. Our review focused on FEMA’s programs for temporary housing in Alabama, Louisiana, Mississippi, and Texas, including the use of group sites in the aftermath of Hurricanes Katrina and Rita. 
For the purposes of this report, the term “group sites” refers to both sites established by FEMA and commercial sites that already existed and were used to house hurricane victims. For all four objectives, we interviewed officials from FEMA’s Disaster Assistance Directorate, Individual Assistance Branch, Office of Policy and Program Analysis, Office of the Federal Coordinator for Gulf Coast Rebuilding, Gulf Coast Recovery Office (GCRO), and Recovery Division. We also interviewed state officials from the Louisiana Recovery Authority, the Mississippi Governor’s Office of Recovery and Renewal, and the Texas Department of Housing and Community Affairs. To identify challenges that households living in group sites faced in transitioning to permanent housing, we examined reports related to the federal government’s response to Hurricanes Katrina and Rita and its efforts to provide housing assistance in group sites. Specifically, we reviewed relevant reports, including reports from the Department of Homeland Security’s (DHS) Office of Inspector General (OIG), Louisiana Family Recovery Corps, The Brookings Institution, RAND Gulf States Policy Institute, PolicyLink, Congressional Research Service, and GAO. 
In addition to interviewing FEMA officials and officials from the state agencies that we have previously mentioned, we conducted site visits to Baton Rouge and New Orleans, Louisiana, where we met with officials from the following selected local housing agencies and not-for-profit organizations to obtain their perspectives on the challenges that households living in group sites faced:
Jefferson Parish Housing Authority
Housing Authority of East Baton Rouge
Housing Authority of New Orleans
Louisiana Housing Finance Agency
New Orleans Office of Recovery and Development Administration
Louisiana Family Recovery Corps
Catholic Charities
Greater New Orleans Fair Housing Action Center
Louisiana Justice Institute
We also visited three group sites, including Renaissance Village—the largest group site established. To corroborate some of the challenges mentioned during our interviews, we analyzed several data sources. Specifically, to determine the extent to which Hurricanes Katrina and Rita had an impact on rents in these areas, we analyzed data from the Department of Housing and Urban Development (HUD) on the fair market rents for two-bedroom units in the Beaumont-Port Arthur, Texas, metropolitan statistical area (MSA); Gulfport-Biloxi, Mississippi, MSA; Mobile, Alabama, MSA; and New Orleans-Metairie-Kenner, Louisiana, MSA, from fiscal years 2005 to 2009. Furthermore, to determine the change in unemployment rates in the selected MSAs following Hurricanes Katrina and Rita, we analyzed annual unemployment rate data from the Department of Labor’s Bureau of Labor Statistics from fiscal years 2004 to 2007. In addition, we collected and analyzed data from FEMA to determine the average reported income for households living in group sites in Louisiana and Mississippi. We focused on group sites in Louisiana and Mississippi for this analysis because FEMA established most sites in these states. 
Specifically, we obtained information from two of FEMA’s databases—the FEMA Response and Recovery Applicant Tracking System (FRRATS) and the National Emergency Management Information System (NEMIS). FRRATS data are collected through FEMA field offices. Information obtained from FRRATS included receipts for the purchase of travel trailers and data on the type of site and the state where the trailer or mobile home was located. NEMIS data are collected through the national FEMA office. Information obtained from NEMIS included date of birth, age, income of those receiving housing assistance, owner or renter status, and former and current addresses. Both FRRATS and NEMIS contain a unique registration ID that we used to match the data we collected from these databases. We tested the reliability of these data as part of a previous study and found the data to be reliable. We determined that the data provided were sufficiently reliable for the purposes of this report. However, it is important to note that the demographic data in NEMIS are largely self-reported by applicants, and FEMA does not independently verify all of the data it collects. For example, while some of FEMA’s assistance programs are based on income, the incomes reported in NEMIS are not verified. Our analysis was based on the highest income reported by an individual. Also, our analysis was limited to individuals who provided the information, and we did not determine whether nonrespondents were likely to differ from those who responded. 
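The matching described above amounts to a join on the shared registration ID, keeping the highest income reported per applicant. The sketch below illustrates the logic; the field names and records are illustrative assumptions, not FEMA's actual schema or data.

```python
# Sketch of the FRRATS/NEMIS matching described above: join the two
# datasets on the shared registration ID, keeping the highest income
# reported per applicant. Field names and values are hypothetical.

frrats = [  # one record per housing unit placement (FRRATS-style)
    {"reg_id": "A1", "site_type": "group", "state": "LA"},
    {"reg_id": "B2", "site_type": "private", "state": "MS"},
]

nemis = [  # applicant records (NEMIS-style); income is self-reported
    {"reg_id": "A1", "income": 14_000},
    {"reg_id": "A1", "income": 18_500},
    {"reg_id": "B2", "income": 22_000},
]

# Highest reported income per registration ID (per the methodology above).
max_income = {}
for rec in nemis:
    if rec["income"] is not None:
        rid = rec["reg_id"]
        max_income[rid] = max(max_income.get(rid, 0), rec["income"])

# Join: attach the income figure to each placement record.
matched = [dict(unit, income=max_income.get(unit["reg_id"])) for unit in frrats]

for row in matched:
    print(row)
```

Records with no reported income would carry `income=None` after the join, which matches the report's caveat that the analysis was limited to individuals who provided the information.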
To identify the measures that FEMA developed to track the number of group sites it used after Hurricanes Katrina and Rita and the number of households that lived in those sites, we examined FEMA’s GCRO Individual Assistance Global Report Executive Summary weekly reports. We determined that these reports were sufficiently reliable for the purposes of our report. Finally, we assessed FEMA’s measures against criteria for effective performance measures described in our prior work. To determine the National Disaster Housing Strategy’s effectiveness in defining FEMA’s roles and responsibilities for closing group sites and assisting households with transitioning to permanent housing, we reviewed the strategy and supporting annexes as well as federal emergency plans, including the National Response Framework and supporting annexes and the 2008 Disaster Housing Plan. Furthermore, we reviewed relevant sections of major statutes, regulations, and plans to better understand FEMA’s roles and responsibilities for closing group sites and assisting households with transitioning into permanent housing. Specifically, our review included the Robert T. Stafford Disaster Relief and Emergency Assistance Act of 1974 (Stafford Act)—as amended—and the Post-Katrina Emergency Management Reform Act (Post-Katrina Act). Additionally, we drew upon our extensive body of work on the federal government’s response to Hurricanes Katrina and Rita, as well as our prior work on pandemic influenza, to compare the relevant sections of the National Disaster Housing Strategy with the characteristics of an effective national strategy. Specifically, we assessed the extent to which the strategy and the Disaster Housing Community Site Operations Annex addressed certain desirable characteristics and the related elements of these characteristics developed in previous GAO work. 
Because we were not assessing the effectiveness of the entire National Disaster Housing Strategy and supporting annexes, we focused on three characteristics identified in previous work: organizational roles, responsibilities, and coordination; problem definition and risk assessment (i.e., challenges and constraints); and resources, investments, and risk management. Finally, we reviewed reports issued by Congress, DHS’s OIG, and the Congressional Research Service. To determine the alternatives to travel trailers in group sites and examine how they aligned with identified policy factors, we reviewed the Stafford Act, the Post-Katrina Act, and other related legislation. We also reviewed our previous reports and relevant literature, including reports from Congress, DHS’s OIG, and the Congressional Research Service, as well as academic reports. In addition, we interviewed officials from FEMA, state housing agencies in the Gulf Coast region, and selected nonprofit and housing research groups. We reviewed the National Disaster Housing Strategy to determine how well it assessed the capacity of available temporary housing options to respond to the housing needs of individuals displaced by a major disaster on the basis of certain factors, such as cost-effectiveness and efficiency. We also interviewed officials from the previously mentioned state agencies to obtain their perspective on the extent to which FEMA provided sufficient information on the factors that should be considered when selecting an interim housing approach in response to a disaster. We conducted this performance audit from January 2008 through August 2009 in Atlanta, Chicago, Louisiana, and Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Daniel Garcia-Diaz, Assistant Director; Emily Chalmers; Marshall Hamlett; John McGrail; Marc Molino; Josephine Perez; and Rose Schuville made key contributions to this report.
Concerns over the Department of Homeland Security's (DHS) Federal Emergency Management Agency's (FEMA) provision of temporary housing assistance, including travel trailers at group sites, after the 2005 hurricanes led to the development of the National Disaster Housing Strategy. GAO was asked to assess (1) the challenges households faced in transitioning to permanent housing, (2) the extent to which FEMA measured its performance in closing group sites and transitioning households, (3) the strategy's effectiveness in defining FEMA's roles and responsibilities for closing group sites and transitioning households, and (4) the alternatives to travel trailers in group sites and how well the strategy assessed them. GAO reviewed the strategy and interviewed officials from FEMA, state agencies, and selected nonprofit and housing research groups. Households living in FEMA group sites encountered various challenges in transitioning to permanent housing. A significant challenge cited by several reports and officials GAO contacted was the availability of affordable rental housing. Other challenges that were cited included insufficient financing to fund repairs of homes, significantly higher insurance premiums, and the availability of full-time employment to support disaster victims' return to permanent housing. FEMA's overall effectiveness in measuring its performance in closing group sites and transitioning households was limited because the agency's measures do not provide information on program results that would be helpful in gauging whether the program is achieving its goal. Previously, GAO reported that performance measures should be aligned with program goals and cover the activities that an entity is expected to perform to support the purpose of the program. 
However, FEMA's performance measures for Katrina and Rita group sites primarily describe program outputs and do not provide information on results, such as the timeliness or efficiency of closing group sites and transitioning households into permanent housing. Having such information could help identify potential problems in meeting goals and could be used to make decisions about resources needed and steps to be taken. The National Disaster Housing Strategy broadly defines FEMA's roles and responsibilities for closing group sites and assisting households with the transition into permanent housing. Although the strategy states that FEMA is responsible for closing group sites and assisting households in finding permanent housing, the strategy does not reflect the key characteristics of effective national strategies and plans that GAO identified in prior work. For example, the strategy does not explain how FEMA will work with other agencies in closing these sites and transitioning households into permanent housing. The lack of a detailed plan that includes information on the steps FEMA needs to take to assist households with transitioning into permanent housing could lead to delays in the future in helping disaster victims return to more stable and conventional living arrangements. Officials contacted and reports reviewed by GAO identified a number of housing options that could serve as alternatives to travel trailers in group sites--for example, providing rental assistance for existing housing and repairing damaged rental housing. However, FEMA's strategy does not assess alternatives, in part, because evaluations are ongoing. Also, it does not provide clear guidance on the specific temporary housing options that states can use instead of travel trailers while FEMA completes these evaluations. 
Without more specific information on what these temporary housing options are, including alternatives to travel trailers, state officials will not have the information needed to expedite the selection of temporary housing options. As a result, FEMA and the states may not be fully prepared to quickly respond to the temporary housing needs of those displaced by major disasters.
Following the 2000 national elections, we produced a comprehensive series of reports covering our nation’s election process that culminated with a capping report and framework for the Congress to use to enact reforms for election administration. Our reports were among the resources that the Congress drew on in enacting the Help America Vote Act (HAVA) of 2002, which provided guidance for fundamental election administration reform and created the Election Assistance Commission (EAC) to oversee the election administration reform process. HAVA also provided funding to replace older voting equipment, specifically punch card and mechanical lever voting equipment, and encouraged adoption of other technology. Subsequently, jurisdictions have increased their use of electronic voting methods, of which there are two commonly used types: optical scan and direct recording electronic (DRE). Enacted by Congress in October 2002, HAVA affects nearly every aspect of the voting process, from voting technology to provisional ballots and from voter registration to poll worker training. In particular, the act authorized $3.86 billion in funding over several fiscal years for programs to replace punch card and mechanical lever voting equipment, improve election administration and accessibility, train poll workers, and perform research and pilot studies. HAVA also established the EAC to assist in the administration of federal elections and provide assistance with the administration of certain federal election laws and programs. HAVA also established minimum election administration standards for the states and units of local government that are responsible for the administration of federal elections. 
The act specifically tasked the EAC to serve as a national clearinghouse and resource for compiling election information and reviewing election procedures; for example, it is to conduct periodic studies of election administration issues, including electronic voting system performance, to promote methods of voting and administration that are most convenient, accessible, and easy to use for all voters. Other examples of EAC responsibilities include developing and adopting voluntary voting system guidelines and maintaining information on the experiences of states in implementing the guidelines and operating voting systems; testing, certifying, decertifying, and recertifying voting system hardware and software through accredited laboratories; making payments to states to help them improve elections in the areas of voting systems standards, provisional voting and voting information requirements, and computerized statewide voter registration lists; and making grants for research on voting technology improvements. The act also established the Technical Guidelines Development Committee to support the EAC, making it responsible for recommending voluntary voting system guidelines to the EAC. The act assigned the National Institute of Standards and Technology (NIST) responsibility for providing technical support to the development committee and made the NIST Director the committee chair. The EAC began operations in January 2004, initially focusing on the distribution of funds to help states meet HAVA’s Title III requirements for uniform and nondiscriminatory election technology and administration, including the act’s requirements pertaining to voting system standards, provisional voting, voting information, a computerized statewide voter registration list, and identification for first-time voters who register to vote by mail. 
Actions EAC has taken since 2004 to improve voting systems include publishing the Best Practices Toolkit and specialized management guides to assist states and local jurisdictions with managing election-related activities and equipment; issuing voting system standards in 2005, referred to as the Voluntary Voting System Guidelines; establishing procedures for certifying voting systems; establishing a program for accreditation of independent testing laboratories, with support from NIST’s National Voluntary Laboratory Accreditation Program; disbursing to states approximately $2.3 billion in appropriations for the replacement of older voting equipment and election administration improvements under Title III of HAVA; and conducting national surveys of the 2004 general election, uniformed and overseas voters, and other studies. For fiscal year 2006, EAC’s appropriation totaled $14.1 million. EAC reported that this included $3.8 million (27 percent) for activities related to development and adoption of the voting system standards and the voting system certification program; $3.5 million (25 percent) for research and study and to establish the EAC as a national clearinghouse of election administration information; and $2.8 million (20 percent) to manage HAVA funds distributed to the states. The remaining funds went to various administrative expenses, including funding various advisory board meetings. EAC’s budget for fiscal year 2007 is $16.91 million, of which $4.95 million (29 percent) is to be transferred to NIST for its work on voting system standards and research performed under HAVA. EAC’s requested budget for fiscal year 2008 is $15.5 million, of which $3.25 million (21 percent) is to be transferred to NIST. In the United States today, most votes are cast and counted by one of two types of electronic voting systems: optical scan and direct recording electronic. 
For the November 2004 general election, optical scan was the predominant voting method for more than half of local jurisdictions nationwide. In contrast, DREs were used as the predominant voting method by an estimated 7 percent of jurisdictions, although they were the predominant voting method for large jurisdictions. Figure 1 shows the estimated use of predominant voting methods for small, medium, and large jurisdictions in the 2004 general election. Optical scan voting systems use electronic technology to tabulate paper ballots. For the 2004 general election, we estimated that about 51 percent of all local jurisdictions predominantly used optical scan voting equipment. An optical scan voting system is made up of computer-readable ballots, appropriate marking devices, privacy booths, and a computerized tabulation device. The ballot, which can be of various sizes, lists the names of the candidates and the issues. Voters record their choices using an appropriate writing instrument to fill in boxes or ovals or to complete an arrow next to the candidate’s name or the issue. The ballot includes a space for write-ins to be placed directly on the ballot. Optical scan ballots are tabulated by optical-mark-recognition equipment (see figure 2), which counts the ballots by sensing or reading the marks on the ballot. Ballots can be counted at the polling place—this is referred to as precinct-count optical scan—or at a central location. If ballots are counted at the polling place, voters or election officials put the ballots into the tabulation equipment, which tallies the votes; these tallies can be captured in removable storage media that are transported to a central tally location, or they can be electronically transmitted from the polling place to the central tally location. 
If ballots are centrally counted, voters drop ballots into sealed boxes and election officials transfer the sealed boxes to the central location after the polls close, where election officials run the ballots through the tabulation equipment. Software instructs the tabulation equipment to assign each vote (i.e., to assign valid marks on the ballot to the proper candidate or issue). In addition to identifying the particular contests and candidates, the software can be configured to capture, for example, straight party voting and vote-for-no-more-than-N contests. Precinct-based optical scanners can also be programmed to detect overvotes (where the voter votes for two candidates for one office, for example, invalidating the vote) and undervotes (where the voter does not vote for all contests or issues on the ballot) and to take some action in response (rejecting the ballot, for instance), so that voters can fix their mistakes before leaving the polling place. If ballots are tabulated centrally, voters do not have the opportunity to detect and correct mistakes that may have been made. In addition, optical scan systems often use vote-tally software to tally the vote totals from one or more vote tabulation devices. Optical scan systems were widely used as the predominant voting method for jurisdictions in the 2004 general election and we reported last year that jurisdictions planned to acquire more of these systems for the 2006 general election. We estimated that 30 percent of jurisdictions nationwide used precinct count optical scan voting equipment as their predominant voting method for the 2004 general election, while an estimated 21 percent used central count optical scan predominantly. Figure 1 shows the percentages of jurisdictions using optical scan equipment as the predominant voting method by jurisdiction size. 
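The overvote and undervote checks described above amount to comparing the number of marks detected in each contest against that contest's vote-for limit. A minimal sketch, assuming a simplified ballot representation (the function and data names here are illustrative, not drawn from any actual tabulation software):

```python
# Illustrative sketch of precinct-count ballot checking: a ballot maps
# each contest to the set of marks the scanner detected, and each
# contest carries a "vote for no more than N" limit.

def check_ballot(ballot, contest_limits):
    """Return the contests with overvotes or undervotes so a
    precinct-count scanner can reject the ballot and let the voter
    correct mistakes before leaving the polling place."""
    problems = {}
    for contest, limit in contest_limits.items():
        marks = len(ballot.get(contest, set()))
        if marks > limit:
            problems[contest] = "overvote"   # too many marks; contest invalidated
        elif marks < limit:
            problems[contest] = "undervote"  # fewer selections than allowed
    return problems

# A vote-for-one race with two marks (overvote) and a vote-for-two
# race with one mark (undervote).
ballot = {"Governor": {"Smith", "Jones"}, "School Board": {"Lee"}}
limits = {"Governor": 1, "School Board": 2}
print(check_ballot(ballot, limits))
# prints {'Governor': 'overvote', 'School Board': 'undervote'}
```

As the passage notes, only precinct-count equipment can act on these results in time for the voter to fix the ballot; a central-count system tabulates after the polls close, so an overvoted contest is simply not counted.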
While all sizes of jurisdictions had plans to acquire both precinct count and central count optical scan systems for the 2006 general election, small jurisdictions showed a strong preference for acquiring precinct count optical scan systems (estimated at 28 percent of small jurisdictions) compared with DREs (13 percent) and central count optical scan (4 percent). DREs capture votes electronically without the use of paper ballots. For the 2004 general election, we estimated that about 7 percent of all local jurisdictions used DREs predominantly, although 30 percent of all large jurisdictions used them as the predominant voting method. DREs come in two basic types: pushbutton or touchscreen, with pushbutton being the older technology. The two types vary considerably in appearance, as shown in figure 3. Pushbutton and touchscreen units differ significantly in the way they present ballots to the voter. With the pushbutton type, all ballot information is presented on a single “full-face” ballot. For example, a ballot may have 50 buttons on a 3 by 3 foot ballot, with a candidate or issue next to each button. In contrast, touchscreen DREs display ballot information on an electronic display screen. For both pushbutton and touchscreen types, the ballot information is programmed onto an electronic storage medium, which is then uploaded to the machine. For touchscreens, ballot information can be displayed in color and can incorporate pictures of the candidates. Because the ballot space on a touchscreen is much smaller than on a pushbutton machine, voters who use touchscreens must page through the ballot information. Both touchscreen and pushbutton DREs can accommodate multilingual ballots. Despite differences between pushbutton and touchscreen DREs, the two types have some similarities, such as how the voter interacts with the voting equipment. 
To make a ballot selection, voters press a button or the screen next to the candidate or issue, and the button or screen then lights up to indicate the selection. When voters are finished making their selections, they cast their votes by pressing a final “vote” button or screen. Until they hit this final button or screen, voters can change their selections. DREs are designed to not allow overvotes. Both types allow voters to write in candidates. While most DREs allow voters to type write-ins on a keyboard, some pushbutton types require voters to write the name on paper tape that is part of the device. In addition, different types of DREs offer a variety of options that jurisdictions may choose to purchase, such as printed receipts or audio interfaces for voters with disabilities. Although DREs do not receive paper ballots, they can retain permanent electronic images of all the ballots, which can be stored on various media, including internal hard disk drives, flash cards, or memory cartridges. According to vendors, these ballot images, which can be printed, can be used for auditing and recounts. Like optical scan devices, DREs require the use of software to program the various ballot styles and tally the votes, which is generally done through the use of memory cartridges or other media. Some of the newer DREs use smart card technology as a security feature. Smart cards are plastic devices—about the size of a credit card—that use integrated circuit chips to store and process data, much like a computer. Smart cards are generally used as a means to open polls and to authorize voter access to ballots. For instance, smart cards on some DREs store program data on the election and are used to help set up the equipment; during setup, election workers verify that the card received is for the proper election. Other DREs are programmed to automatically activate when the voter inserts a smart card; the card brings up the correct ballot onto the screen. 
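The interaction model described above (selections toggle on and off, nothing is recorded until the final "vote" control is pressed, and overvotes are prevented by design) can be sketched as a small state machine. The class and method names below are hypothetical; actual DRE firmware is proprietary and considerably more involved:

```python
# Hypothetical model of DRE ballot selection for a single contest.

class DreContest:
    def __init__(self, limit):
        self.limit = limit        # "vote for no more than N" limit
        self.selected = set()
        self.cast = False

    def press(self, choice):
        """Pressing a lit choice deselects it; pressing an unlit choice
        selects it unless that would exceed the limit, so an overvote
        cannot be entered."""
        if self.cast:
            raise RuntimeError("ballot already cast")
        if choice in self.selected:
            self.selected.discard(choice)   # voter changed their mind
        elif len(self.selected) < self.limit:
            self.selected.add(choice)
        # otherwise the press is ignored rather than recorded as an overvote

    def press_vote(self):
        """The final 'vote' control: selections become irrevocable."""
        self.cast = True
        return sorted(self.selected)

contest = DreContest(limit=1)
contest.press("Smith")
contest.press("Smith")    # deselects Smith
contest.press("Jones")
contest.press("Smith")    # ignored: would exceed the vote-for-one limit
print(contest.press_vote())
# prints ['Jones']
```

Note that undervotes remain possible on a DRE (a voter may cast with fewer selections than allowed); only overvotes are ruled out by the interface.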
DREs offer various configurations for tallying the votes. Some contain removable storage media that can be taken from the voting device and transported to a central location to be tallied. Others can be configured to electronically transmit the vote totals from the polling place to a central tally location. Vote tally software is often used to tally the vote totals from one or more units. DREs were chosen as the predominant voting method by a relatively small overall proportion of local jurisdictions for the 2004 general election (7 percent overall). However, as previously shown in figure 1, large and medium jurisdictions identified DREs as their predominant voting method (estimated at 30 percent and 20 percent of jurisdictions, respectively) more often than small jurisdictions (estimated at 1 percent). DREs were the leading choice among voting methods for both large and medium jurisdictions that planned to acquire voting systems before the 2006 general election (an estimated 34 percent of jurisdictions in both size groups). Voting systems are one facet of a multifaceted, continuous elections process that involves the interplay of people, processes, and technology. All levels of government—federal, state, and local—share responsibilities for aspects of elections and voting systems. Moreover, effective performance of these systems is a product of effective system life cycle management, which includes systems definition, development, acquisition, operations, testing, and management. Such performance can be viewed in terms of several characteristics, such as security, reliability, ease of use, and cost effectiveness. Voting systems represent one of many important components in the overall election process. This process involves all levels of government and is made up of several stages, with each stage consisting of the interplay of people, processes, and technology. 
At the federal level, Congress has authority under the Constitution to regulate the administration of presidential and congressional elections and to enforce prohibitions against specific discriminatory practices in all elections—federal, state, and local. It has passed legislation affecting the administration of state elections that addresses voter registration, absentee voting, accessibility provisions for the elderly and handicapped, and prohibitions against discriminatory practices. Congress does not have general constitutional authority over the administration of state and local elections. At the state level, the states are responsible for the administration of both their own elections and federal elections. States regulate the election process, including, for example, adoption of voting system standards, testing of voting systems, ballot access, registration procedures, absentee voting requirements, establishment of voting locations, provision of Election Day workers, and counting and certification of the vote. As we have reported, the U.S. election process can be seen as an assemblage of 51 somewhat distinct election systems—those of the 50 states and of the District of Columbia. Further, although election policy and procedures are legislated primarily at the state level, states typically have decentralized this process so that the details of administering elections are carried out at the city or county levels, and voting is done at the local level. This is important because local election jurisdictions number more than 10,000 and their size varies enormously—from a rural county with about 200 voters to a large urban county such as Los Angeles County, where the total number of registered voters for the 2000 elections exceeded the registered voter totals in 41 states. The size and demographics of a voting jurisdiction significantly affect the complexity of planning and conducting the election, as does the method used to cast and count votes. 
For example, jurisdictions using DRE systems may need to manage the electronic transmission of votes or vote counts, while jurisdictions using optical scan technology need to manage the transfer of the paper ballots this technology reads and tabulates. Jurisdictions using optical scan technology may also need to manage electronic transmissions if votes are counted at various locations and totals are electronically transmitted to a central tally point. No matter what technology is used, jurisdictions may need to provide ballot translations; however, the logistics of printing paper materials in a range of languages, as would be required for optical scan technology, is different from the logistics of programming translations into DRE units. Some states do have statewide election systems so that every voting jurisdiction uses similar processes and equipment, but others do not. For instance, we reported in 2001 that in Pennsylvania, local election officials told us that there were 67 counties and consequently 67 different ways of handling elections. In some states, such as Georgia, state law prescribes the use of common voting technology throughout the state while in other states, local election officials generally choose the voting technology to be used in their precincts, often from a list of state-certified options. Regardless of levels of government, however, election administration is a year-round activity, involving varying sets of people performing the activities of each stage of the election process. These stages generally consist of the following: Voter registration. Among other things, local election officials register eligible voters and maintain voter registration lists, including updates to registrants’ information and deletions of the names of registrants who are no longer eligible to vote. Absentee and early voting. This type of voting allows eligible persons to vote in person or by mail before Election Day. 
Election officials must design ballots and other systems to permit this type of voting and educate voters on how to vote by these methods. Election Day vote casting. Election administration includes preparation before Election Day, such as local election officials arranging for polling places, recruiting and training poll workers, designing ballots, and preparing and testing voting equipment for use in casting and tabulating votes, as well as Election Day activities, such as opening and closing polling places and assisting voters in casting their votes. Vote counting. At this stage, election officials tabulate the cast ballots, determine whether and how to count ballots that cannot be read by the vote counting equipment, certify the final vote counts, and perform recounts, if required. As shown in figure 4, each stage of an election involves people, processes, and technology. Electronic voting systems are primarily involved in the last three stages, during which votes are recorded, cast, and counted. However, the type of system that a jurisdiction uses may affect earlier stages. For example, in a jurisdiction that uses optical scan systems, paper ballots like those used on Election Day may be mailed in the absentee voting stage. On the other hand, a jurisdiction that uses DRE technology would have to make a different provision for absentee voting. The performance of any information technology system, including electronic voting systems, is heavily influenced by a number of factors, including how well the system is defined, developed, acquired, tested, and implemented. Like any information technology product, a voting system starts with the explicit definition of what the system is to do and how well it is to do it. These requirements are then translated into design specifications that are used to develop the system. 
Electronic voting systems are typically developed by vendors and then purchased as commercial off-the-shelf (COTS) products and implemented by state and local election administrators. During the development, acquisition, and implementation of the systems, a range of tests are performed and the process is managed to ensure performance expectations are met. Together, these activities form a voting system life cycle (see figure 5). Unless voting systems are properly managed throughout their life cycle, this one facet of the election process can significantly undermine the integrity of the whole. Standards. Voting system standards define the functional and performance requirements that must be met and thus provide the baseline against which systems can be developed and tested. They also specify how the systems should be implemented and operated. Voting system standards apply to system hardware, software, firmware, and documentation, and they span prevoting, voting, and postvoting activities. They address, for example, requirements relating to system security; system reliability (accuracy and availability); system auditability; system storage and maintenance; and data retention and transportation. In addition to national standards, some states and local jurisdictions have specified their own voting system requirements. Development. Product development is performed by the voting system vendor. Product development includes further establishing system requirements, designing the system architecture, developing software, integrating hardware and software components, and testing the system. Acquisition. Voting system acquisition activities are performed by state and local governments and include publishing a request for proposal, evaluating proposals, choosing a voting system method, choosing a vendor, writing and administering contracts, and testing the acquired system. Operations. Operation of voting systems is typically the responsibility of local jurisdictions. 
These activities include setting up systems before voting, vote capture and counting during elections, recounts and system audits after elections, and storage of systems between elections. Among other things, this phase includes activities associated with the physical environments in which the system operates. These include ensuring the physical security of the polling place and voting equipment and controlling the chain of custody for voting system components and supplies. The operations phase also includes monitoring of the election process by use of system audit logs and backups, and the collection, analysis, reporting, and resolution of election problems. Testing. As noted, testing is conducted by multiple entities throughout the life cycle of a voting system. Voting system vendors conduct testing during system development. National testing of systems is conducted by accredited independent testing authorities. Some states conduct testing before acquiring a system to determine how well it meets the specified performance parameters, or states may conduct certification testing to ensure that a system performs as specified by applicable laws and requirements. Once a voting system is delivered by the vendor, states and local jurisdictions may conduct acceptance testing to ensure that the system satisfies requirements. Finally, local jurisdictions typically conduct logic and accuracy tests prior to each election and sometimes subject portions of the system to parallel testing during each election. Management. Management processes ensure that each life cycle phase produces desirable outcomes and are conducted by the organizations responsible for each life cycle phase. Voting system vendors manage the development phase, while states and/or local jurisdictions manage the acquisition and operations phases. 
Typical management activities that span the system life cycle include planning, configuration management, system performance review and evaluation, problem tracking and correction, human capital management, and user training. Management responsibilities related to security and reliability include program planning, disaster recovery and contingency planning, definition of security roles and responsibilities, configuration management of voting system hardware and software, and poll worker security training. Although the debate concerning electronic voting systems is primarily focused on security, other performance attributes are also relevant, such as reliability, ease of use, and cost. Each of these attributes is described here. Security. Election officials are responsible for establishing and managing security and privacy controls to protect against threats to the integrity of elections. Threats to election results and voter confidentiality include potential modification or loss of electronic voting data; loss, theft, or modification of physical ballots; and unauthorized access to software and electronic equipment. Different types of controls can be used to counter these threats. Physical access controls are important for securing voting equipment, vote tabulation equipment, and ballots. Software access controls (such as passwords and firewalls) are important for limiting the number of people who can access and operate voting devices, election management software, and vote tabulation software. In addition, physical screens around voting stations and poll workers who prevent voters from being watched or coerced while voting are important for protecting the privacy and confidentiality of the vote. Reliability. Ensuring the reliability of votes being recorded and tallied is an essential attribute of any voting equipment and depends to a large degree on the accuracy and availability of voting systems. 
Without such assurance, both voter confidence in the election and the integrity and legitimacy of the outcome of the election are at risk. The importance of an accurate vote count increases with the closeness of the election. Both optical scan and DRE systems are claimed to be highly accurate. Although voting equipment may be designed and developed to count votes as recorded with 100 percent accuracy, how well the equipment counts votes as intended by voters is a function not only of equipment design, but also of how procedures are followed by election officials, technicians, and voters. It is also important to limit system down time so that polling places can handle the volume of voter traffic. Ease of Use. Ease of use (or user friendliness) depends largely on how voters interact with the voting system, physically and intellectually. This interaction, commonly referred to as the human/machine interface, is a function of the system design and how it has been implemented. Ease of use depends on how well jurisdictions design ballots and educate voters on the use of the equipment. A voting system’s ease of use affects accuracy (i.e., whether the voter’s intent is captured), and it can also affect the efficiency of the voting process (confused voters take longer to vote). Accessibility by diverse types of voters, including those with disabilities, is a further aspect of ease of use. Cost. For a given jurisdiction, the particular cost associated with an electronic voting system will depend on the requirements of the jurisdiction as well as the particular equipment chosen. Voting equipment costs vary among types of voting equipment and among different manufacturers and models of the same type of equipment. Some of these differences can be attributed to differences in what is included in the unit cost. 
Beyond the equipment unit cost, jurisdictions also incur the cost of the software that operates the equipment, prepares the ballots, and tallies the votes (and, in some cases, prepares the election results reports). Other factors affecting the acquisition cost of voting equipment are the number and types of peripherals required. Once jurisdictions acquire the voting equipment, they also incur the cost to operate and maintain it, which can vary considerably. Election officials, computer security experts, citizen advocacy groups, and others have raised significant security and reliability concerns with electronic voting systems, citing, for example, inadequacies in standards, system design and development, operation and management activities, and testing. In 2005, we examined the range of concerns raised by these groups and aligned them with their relevant life cycle phases. We also examined EAC’s efforts related to these concerns. Furthermore, we identified key practices that each level of government should implement throughout the voting system life cycle to improve security and reliability. The phases of the voting system life cycle are interdependent—that is, a problem experienced in one area of the life cycle will likely affect other areas. For example, a weakness in system standards could result in a poorly designed and developed system, which may not perform properly in the operational phase. State and local jurisdictions have documented instances when their electronic voting systems exhibited operational problems during elections. Such failures led to polling place delays, disruptions, and incorrect vote tabulations.
In reviewing the reported concerns, we explained that many of the security and reliability concerns involved vulnerabilities or problems with specific voting system makes and models or unique circumstances in a specific jurisdiction’s election, and that there is a lack of consensus among elections officials, computer security experts, and others on the pervasiveness of the concerns. We concluded in 2005 that these concerns had caused problems in recent elections, resulting in the loss and miscount of votes. In light of the demonstrated voting system problems; the differing views on how widespread these problems are; and the complexity of assuring the accuracy, integrity, confidentiality, and availability of voting systems throughout their life cycles, we stated that the security and reliability concerns merit the focused attention of federal, state, and local authorities responsible for election administration. Appropriately defined and implemented standards for system functions and testing processes are essential to ensuring the security and reliability of voting systems across all phases of the elections process. States and local jurisdictions face the challenge of adapting to and consistently applying appropriate standards and guidance to address vulnerabilities and risks in their specific election environments. The national standards are voluntary—meaning that states are free to adopt them in whole or in part or to reject them entirely. The Federal Election Commission (FEC) issued a set of voluntary voting system standards in 1990 and revised them in 2002. These standards identify requirements for electronic voting systems. Computer security experts and others criticized the 2002 voting system standards for not containing requirements sufficient to ensure secure and reliable voting systems.
Common concerns with the standards involved their vague and incomplete security provisions, inadequate provisions for some commercial products and networks, and inadequate documentation requirements. In December 2005, EAC issued the Voluntary Voting System Guidelines, which include additions and revisions for system functional requirements, performance characteristics, documentation requirements, and test evaluation criteria for the national certification of voting systems. These guidelines promote security measures that address gaps in prior standards and are applicable to more modern technologies, such as controls for software distribution and wireless operations. As we previously reported, the 2005 Voluntary Voting System Guidelines do not take effect until December 2007. Moreover, this version of the standards does not comprehensively address voting technology issues. For instance, it does not address COTS devices (such as card readers, printers, or personal computers) or software products (such as operating systems or database management systems) that are used in voting systems without modification. This is significant because computer security experts have raised concerns about a provision in the prior voting system standards that exempted unaltered COTS software from testing, and because the standards are not sufficient to address the weaknesses inherent in telecommunications and networking services. Specifically, vendors often use COTS software, including operating systems, in their electronic voting systems. Security experts note that COTS software could contain defects, vulnerabilities, and other weaknesses that could be carried over into electronic voting systems, thereby compromising their security.
Regarding telecommunications and networking services, selected computer security experts believe that relying on any use of telecommunications or networking services, including wireless communications, exposes electronic voting systems to risks that make it difficult to adequately ensure their security and reliability—even with safeguards such as encryption and digital signatures in place. As states and jurisdictions move to a more integrated suite of election systems, proactive efforts to establish standards in such areas will be essential to addressing emerging technical, security, and reliability interactions among systems and managing risks in this dynamic election environment. However, the 2005 guidelines do not address emerging trends in election systems, such as the integration of registration systems with voting systems. In light of this and other weaknesses in the standards, we reported in 2005 that EAC did not yet have detailed plans in place for addressing these deficiencies. Accordingly, we recommended that EAC collaborate with NIST and the Technical Guidelines Development Committee to define specific tasks, measurable outcomes, milestones, and resource needs required to improve the standards. To its credit, EAC agreed with our recommendation, recognizing that more work was needed to further develop the technical guidelines, and stated that it planned to work with NIST to plan and prioritize its standards work within its scarce resources. Multiple reports, including several state-commissioned technical reviews and security assessments, raised concerns about the design and development of secure and reliable electronic voting systems. Two major areas of concern were weak embedded security controls and audit trail design flaws. Weak system security controls. Some electronic voting systems reportedly have weak software and hardware security controls.
Regarding software controls, many security examinations reported flaws in how controls were implemented in some DRE systems to prevent unauthorized access. For example, one model failed to password-protect the supervisor functions controlling key system capabilities; another relied on an easily guessed password to access these functions. If exploited, these weaknesses could damage the integrity of ballots, votes, and voting system software by allowing unauthorized modifications. Regarding physical hardware controls, several recent reports found that certain DRE models contained weaknesses in controls designed to protect the system. For instance, reviewers were concerned that a particular model of DRE was set up in such a way that if one machine was accidentally or intentionally unplugged from the others, voting functions on the other machines in the network would be disrupted. In addition, reviewers found that the switches used to turn a DRE system on or off, as well as those used to close the polls on a particular DRE terminal, were not protected. Design flaws in developing voter-verified paper audit trails. Establishing a voter-verified paper audit trail involves adding a paper printout to a DRE system so that a voter can review and verify his or her ballot. Some citizen advocacy groups, security experts, and elections officials advocate these audit trails as a protection against potential DRE flaws. However, other election officials and researchers have raised concerns about potential reliability and security flaws in the design of systems using voter-verified paper audit trails. If voting system mechanisms for protecting the paper audit trail were inadequate, an insider could associate voters with their individual paper ballots and votes, particularly if the system stored voter-verified ballots sequentially on a continuous roll of paper. If not protected, such information could breach voter privacy and confidentiality. 
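One commonly proposed mitigation for the sequential-storage risk described above is to store cast ballot records in a random order, so that the physical or electronic storage order cannot be matched against the sequence in which voters used the machine. The sketch below is purely illustrative and assumes a hypothetical record format; it is not drawn from any vendor's system discussed in this report.

```python
import secrets

def store_ballots_shuffled(cast_ballots):
    """Return cast ballot records in a random order so that storage order
    cannot be used to link a ballot to the sequence in which voters used
    the machine. Hypothetical sketch; the record format is an assumption."""
    records = list(cast_ballots)
    # Fisher-Yates shuffle driven by a cryptographically secure source;
    # a predictable generator would let an insider reconstruct the order.
    for i in range(len(records) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        records[i], records[j] = records[j], records[i]
    return records
```

Note that shuffling alone does not help if each record carries a timestamp or serial number; such fields would also have to be omitted or protected to preserve voter privacy.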
Several reports raised concerns about the operational practices of local jurisdictions and the actual performance of their respective electronic voting systems during elections. These concerns include incorrect system configurations, inadequate security management programs, and poor implementation of security procedures. Incorrect system configuration. Some state and local election reviews have documented cases in which local governments did not properly configure their voting systems. These improper configurations resulted in voters being unable to vote in certain races or in their votes not being captured correctly by the voting system. Poor version control of system software. Security experts and some election officials expressed concern that the voting system software installed at the local level may not be the same as what was qualified and certified at the national or state levels. These groups raised the possibility that, intentionally or by accident, voting system software could be altered or substituted, or that vendors or local officials might install untested or uncertified versions of voting systems, knowingly or unknowingly. As a result, potentially unreliable or malicious software might be used in elections. Inadequate security management programs. Several technical reviews found that states did not have effective information security management plans in place to oversee their electronic voting systems. The reports noted that key managerial functions were not in place, including (1) providing appropriate security training, (2) ensuring that employees and contractors had proper certifications, (3) ensuring that security roles were well defined and staffed, and (4) ensuring that pertinent officials correctly set up their voting system audit logs and required that they be reviewed. Poor implementation of security procedures. Several reports indicated that state and local officials did not always follow security procedures.
For example, reports found that a regional vote tabulation computer was connected to the Internet and that local officials had not updated it with several security patches, thus needlessly exposing the system to security threats. In addition, several reports indicated that some state and local jurisdictions did not always have procedures in place to detect problems with their electronic voting systems, such as ensuring that the number of votes cast matched the number of signatures on precinct sign-in sheets. Security experts and some election officials have expressed concerns that the tests performed by independent testing authorities and state and local election officials do not adequately assess electronic voting systems’ security and reliability. These concerns are intensified by what some perceive as a lack of transparency in the testing process. Inadequate security testing. Many computer security experts expressed concerns with weak or insufficient system testing, source code reviews, and penetration testing. To illustrate their concerns, they pointed to the fact that most of the systems that exhibited the weak security controls previously cited had been nationally certified after testing by an independent testing authority. Security experts and others point to this as an indication that both the standards and the testing program are not rigorous enough with respect to security. Lack of transparency in the testing process. Security experts and some elections officials have raised concerns about a lack of transparency in the testing process. They note that the test plans used by the independent testing authorities, along with the test results, are treated as protected trade secrets and thus cannot be released to the public. Critics say that this lack of transparency hinders oversight and auditing of the testing process, which in turn makes it harder to determine the actual capabilities, potential vulnerabilities, and performance problems of a given system.
Despite assertions by election officials and vendors that disclosing too much information about an electronic voting system could pose a security risk, one security expert noted that a system should be secure enough to resist even a knowledgeable attacker. In 2006, we reported on state and local government experiences in conducting the 2004 national election. Regarding voting systems, states’ and jurisdictions’ responses to our surveys showed that differing versions of the national voting system standards were used (not always the most current version); voting system life cycle management practices were not consistently implemented; and certain types of system testing were not widely performed. Moreover, jurisdictions reported that they did not consistently monitor the performance of their systems, which is important for determining whether election needs, requirements, and expectations are met and for taking corrective actions when they are not. States and jurisdictions reported that they applied a variety of voting system standards, some of which were no longer current. Specifically, 44 states and the District of Columbia reported that they required voting systems being used for the first time in the November 2006 general election to comply with the national voting system standards. However, these states were not all using the same version of the standards. This is troubling because the later versions of the standards are more stringent than the earlier versions in various areas, including security. More specifically, 28 of the 44 states and the District of Columbia reported that voting systems to be used for the first time in the 2006 election must comply with the 2002 voting system standards.
Nine of these 28 states would also require their jurisdictions to apply the 1990 federal standards to new voting systems, and 4 of the 28 would also require jurisdictions to use the 2005 voting system standards, which were in draft form at the time of our survey. (One other state also expected to apply the 2005 voting system standards.) Ten of the 44 reporting states said that they expected to use hybrid standards based on one or more versions of the national standards, without specifying the composition of their hybrid, and 4 states planned to use the national standards in 2006 but did not specify a version. (Five states responded that they did not require their voting systems to comply with any version of the national standards or had not yet made a decision on compliance with the standards for 2006. One state did not respond.) Local jurisdictions varied widely in the nature and extent of their voting system security efforts and activities during the 2004 election. Our research on recommended security practices shows that effective system security management involves having, among other things, (1) defined policies governing such system controls as authorized functions and access, and documented procedures for secure normal operations and incident management; (2) documented plans for implementing policies and procedures; (3) clearly assigned roles and responsibilities for system security; and (4) verified use of technical and procedural controls designed to reduce the risk of disruption, destruction, or unauthorized modification of systems and their information. Jurisdictions’ efforts in each of these areas for the November 2004 general election are discussed here. Policies and procedures. Many jurisdictions reported having written policies and procedures for certain aspects of security related to their voting systems, but others did not.
Written security policies were more prevalent among large jurisdictions (an estimated 65 percent) than small jurisdictions (an estimated 41 percent). An estimated one-fifth of jurisdictions reported that they did not have written policies and procedures in place for transporting ballots or electronic memory, storing ballots, or electronically transmitting voted ballots to ensure ballot security. In addition, some jurisdictions that we visited had published detailed voting system security policies and procedures that included such topics as network security policies for election tabulation, procedures for securing and protecting election equipment and software, testing voting equipment to ensure accurate recording of votes, and disaster recovery plans, while others omitted these topics. Some jurisdictions also took additional steps to ensure that election workers had access to, and were trained in, the contents of the policies and procedures for securing ballots and voting equipment. Implementation plans. Election officials in only 8 of the 28 jurisdictions that we visited told us that they had written plans for implementing security aspects of their voting systems and processes. Moreover, the contents of the plans we obtained from local jurisdictions varied widely. One jurisdiction’s security plan covered most aspects of the voting process, from ballot preparation through recount, while another plan was limited to the security of its vote-tallying system in a stand-alone environment. Of the 5 plans we reviewed, 2 covered almost all of the 8 security topics in our review, which included risk assessment, physical controls, awareness training, and incident response, while the others covered just one or two topics. Roles and responsibilities. In addition, security management roles and responsibilities for the 2004 general election were not consistently assigned.
According to survey responses, security responsibilities primarily fell to local election officials (estimated at 67 percent) for the 2004 general election, although state officials (estimated at 14 percent) and other entities (e.g., independent consultants and vendors, estimated at 24 percent) were also assigned these responsibilities. Local officials were typically responsible for implementing security controls, while state officials were usually involved with developing security policy and guidance and monitoring local jurisdictions’ implementation of security. Some jurisdictions reported that other entities performed such tasks as securing voting equipment during transport or storage and training election personnel for security awareness. Similarly, 26 states reported that security monitoring and evaluation was performed by two or more entities. In 22 states and the District of Columbia, responsibility for security monitoring and evaluation was shared between state and local election officials. States also reported cases where other entities (e.g., independent consultants or vendors) were involved in monitoring and evaluating controls. The entities that were assigned tasks and responsibilities at the local jurisdictions we visited are described in table 1. Use of security controls. For the November 2004 general election, jurisdictions employed certain security controls to varying degrees in operating their voting systems. Based on survey responses, we estimated that 59 percent of jurisdictions used power or battery backup, 67 percent used system access controls, 91 percent used hardware locks and seals, and 52 percent used backup electronic storage for votes. We further estimated that 95 percent of jurisdictions used at least one of these controls and that hardware locks and seals were the controls most consistently used for electronic voting systems.
Furthermore, we estimated that a lower percentage of small jurisdictions used power or battery backup and electronic backup storage of votes for their voting systems than large or medium jurisdictions, and these differences are statistically significant in most cases. Figure 7 presents the use of various security controls by jurisdiction size. Among the jurisdictions that we visited, election officials reported that various security measures were in use during the 2004 general election to safeguard voting equipment, ballots, and votes before, during, and after the election. However, the measures were not uniformly reported by officials in these jurisdictions, and officials in most jurisdictions reported that they did not have a security plan to govern the scope, nature, and implementation of these measures or other aspects of their security program. The security controls most frequently cited by officials for the jurisdictions that we visited were locked storage of voting equipment and ballots and monitoring of voting equipment. Other security measures mentioned during our visits included testing voting equipment before, during, or after the election to ensure that the equipment was accurately tallying votes; planning and conducting training on security issues and procedures for elections personnel; and video surveillance of stored ballots and voting equipment. Table 2 summarizes the types and frequency of security measures reported by election officials in the jurisdictions we visited. Voting systems that can be remotely accessed introduce additional security challenges. Based on survey responses, we estimated that a small percentage of local jurisdictions (10 percent) provided remote access to their voting systems for one or more categories of personnel—local election officials, state election officials, vendors, or other parties. 
Some of the jurisdictions that provided this access described a variety of protections to mitigate the risk of unauthorized remote access, including locally controlled passwords, passwords that change for each access, and local control of communications connections. However, the percentage of jurisdictions with remote access may actually be higher because 7 to 8 percent of jurisdictions did not know if remote access was available for their systems. To ensure that voting systems perform as intended, the systems must be effectively tested. Voting system test and evaluation can be grouped into various types, or stages: certification testing (national level), certification testing (state level), acceptance testing, readiness testing, parallel testing, and postelection voting system audits. Each of these tests has a specific purpose and is conducted at the national, state, or local level at a particular time in the election cycle. Table 3 summarizes these types of tests. For the November 2004 general election, voting system testing was conducted for almost all voting systems, but the types and content of the testing performed varied considerably. According to survey responses, most states and local jurisdictions employed national and state certification testing and readiness testing to some extent, but the criteria used in this testing were highly dependent on the state or jurisdiction. Also, many, but not all, states and jurisdictions conducted acceptance testing of both newly acquired systems and those undergoing changes or upgrades. In contrast, relatively few states and jurisdictions conducted parallel testing during elections or audits of voting systems following elections. State and local responses to our surveys are summarized here relative to each type of testing. National certification. Most states continued to require that voting systems be nationally tested and certified. 
For voting systems being used for the first time in the 2004 general election, national certification testing was almost uniformly required. In particular, 26 of the 27 states using DREs for the first time in this election, as well as the District of Columbia, required their systems to be nationally certified, while 9 of the 10 states using punch card equipment for the first time, and 30 of the 35 states (plus the District of Columbia) using optical scan equipment for the first time, reported such requirements. However, for the 2004 general election, we estimated that 68 percent of jurisdictions did not know whether their respective systems were nationally certified. This uncertainty surrounding the certification status of a specific voting system version at the local level underscores our concern that even though voting system software may have been qualified and certified at the national or state levels, software changes and upgrades performed at the local level may not be. State certification. For the November 2004 general election, 42 states and the District of Columbia reported that they required state certification of voting systems. Seven of these states purchased voting systems at the state level for local jurisdictions. Officials for the remaining states and the District of Columbia reported that responsibility for purchasing a state-certified voting system rested with the local jurisdiction. While state certification requirements often included national testing as well as confirmation of functionality for particular ballot conditions, some states also required additional features such as construction quality, transportation safety, and documentation. Among the remaining 8 states that did not require state certification, officials described other mechanisms to address the compliance of voting equipment with state-specific requirements, such as a state approval process or acceptance of voting equipment based on federal certification.
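The certification-status concern noted above, that locally installed software may differ from the build that was qualified and certified, can in principle be checked by comparing a cryptographic digest of the installed software against a digest published for the certified version. The helper below is a hypothetical sketch; the file layout and the source of the reference digest are assumptions, not part of any certification program described in this report.

```python
import hashlib

def matches_certified_build(software_path, certified_sha256):
    """Return True if the SHA-256 digest of the installed software image
    equals the digest published for the certified build. A mismatch means
    the installed software is not the version that was tested and
    certified. Hypothetical sketch; path and digest source are assumed."""
    digest = hashlib.sha256()
    with open(software_path, "rb") as f:
        # Read in chunks so large system images need not fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == certified_sha256.lower()
```

A check like this addresses accidental version drift; it does not by itself defend against an insider who can alter both the software and the reference digest, which is why the digest would need to come from an independent source.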
For the 2006 general election, 44 states reported that they would have requirements for certification of voting systems, 2 more states than for the 2004 general election. Of the 44, all but 1 expected to conduct the certification themselves; the remaining state reported that it would rely on results from a national independent testing authority to make its certification decision. In addition, 18 of the 43 states planned to involve a national testing laboratory in their certification process. Acceptance testing. With regard to acceptance testing of new voting systems, 26 states and the District of Columbia reported that responsibility for such testing was assigned to either the state or local level for the 2004 general election. Specifically, 8 states and the District of Columbia reported that they had responsibility for performing acceptance testing, 15 states required local jurisdictions to perform such testing, and 3 states reported that requirements for acceptance testing existed at both the state and local levels. Twenty-two states either did not require such testing or did not believe that such testing was applicable to them. (Two states did not know their acceptance testing requirements for the 2004 election.) In addition, more states required that acceptance testing be performed for changes and upgrades to existing systems than they did for new systems—30 states in all and the District of Columbia. Similarly, election officials at a majority of the local jurisdictions that we visited told us that they conducted some type of acceptance testing for newly acquired voting equipment, although they described a variety of approaches to performing acceptance testing. For example, the data used for testing could be vendor-supplied, developed by election officials, or both, and could include system initialization, logic and accuracy, and tamper resistance.
Other steps, such as diagnostic tests, physical inspection of hardware, and software configuration checks, were also mentioned as testing activities by local election officials. Further, election officials from 3 jurisdictions that we visited said that vendors were heavily involved in designing and executing the acceptance tests, while officials from another jurisdiction that we visited said that vendors contributed to a portion of their testing. In 2 jurisdictions, officials said that acceptance tests were conducted at a university center for elections systems. Readiness testing. Almost all states (49) and the District of Columbia reported that they performed readiness testing of voting systems at the state level, the local level, or both (one state did not require readiness testing). Most states (37) required local jurisdictions to perform readiness testing, and 7 states reported that they performed their own readiness testing for the 2004 general election in addition to local testing. Five states and the District of Columbia reported that they had no requirements for local jurisdictions to perform readiness testing but conducted this testing themselves. State laws or regulations in effect for the 2004 election typically had specific requirements for when readiness testing should be conducted and who was responsible for testing, sometimes including public demonstrations of voting system operations. We found that most jurisdictions conducted readiness testing, also known as logic and accuracy testing, for both the 2000 and 2004 general elections. Election officials in all of the local jurisdictions we visited following the 2004 election reported that they conducted readiness testing on their voting equipment using one or more approaches, such as diagnostic tests, integration tests, mock elections, and sets of test votes. Security testing.
Security testing was reportedly performed by 17 states and the District of Columbia for the voting systems used in the 2004 general election, and 7 other states reported that they required local jurisdictions to conduct such testing. The remaining 22 states said that they did not conduct or require system security testing. (Three states reported that security testing was not applicable for their voting systems.) Moreover, we estimated that at least 19 percent of local jurisdictions nationwide (excluding jurisdictions that reported that they used paper ballots) did not conduct security testing for the systems they used in the November 2004 election. Although jurisdiction size was not a factor in whether security testing was performed, the percentage of jurisdictions performing security testing was notably higher where the predominant voting method was DRE (63 percent) and lower for jurisdictions where the predominant method was precinct count optical scan (45 percent). Parallel testing. Parallel testing was not widely performed by local jurisdictions in the 2004 general election, although 7 states reported that they performed parallel testing of voting systems on Election Day, and another 6 states reported that they required local jurisdictions to perform this testing. We estimated that 2 percent of jurisdictions using electronic systems for at least some of their voting conducted parallel testing for the 2004 general election. Large and medium jurisdictions primarily performed this type of testing (7 percent and 4 percent of jurisdictions, respectively); the percentage of small jurisdictions performing this type of testing was negligible (0 percent). Election officials in 2 of the 28 jurisdictions that we visited told us that they performed parallel testing, either at the state level or at the local jurisdiction. In both cases, the tests were conducted on voting equipment for which security concerns had been raised in another state’s voting equipment test report.
Local officials who told us that parallel testing was not performed on their voting systems attributed this to the absence of parallel testing requirements, a lack of sufficient voting equipment to perform these tests, or the unnecessary nature of parallel testing because of the stand-alone operation of their systems. Post-election audits. Less than one-half of the states (22) and the District of Columbia reported that they performed postelection voting system audits for the 2004 general election. Specifically, 4 states and the District of Columbia reported that they conducted postelection audits of voting systems, 16 states required that audits of voting systems be conducted by local jurisdictions, and 2 states reported that audits of voting systems were performed at both the state and local levels. Moreover, state laws or regulations in effect for the 2004 general election varied in when and how these audits were to be conducted. We estimated that 43 percent of jurisdictions that used voting systems for at least some of their voting conducted postelection voting system audits. This practice was much more prevalent at large and medium jurisdictions (62 percent and 55 percent, respectively) than small jurisdictions (34 percent). We further estimated that these voting system audits were conducted more frequently in jurisdictions with central count optical scan voting methods (54 percent) than they were in jurisdictions with precinct count optical scan voting methods (35 percent). It is important that performance be measured during system operation. As we reported in 2001 and 2006, measuring how well voting systems perform during a given election allows local officials to better position themselves for ensuring that elections are conducted properly. 
Such measurement also provides the basis for knowing where performance needs, requirements, and expectations are not being met so that timely corrective action can be taken to ensure the security and reliability of the voting system. Jurisdictions without supporting measures for security and reliability may lack sufficient insight into their system operations. Overall, responses to our local jurisdiction survey show that large jurisdictions were most likely to record voting system performance and small jurisdictions were least likely. We estimated that 42 percent of jurisdictions overall monitored the accuracy of voting equipment in the 2004 general election. Other measures recorded were spoiled ballots (estimated at 50 percent of jurisdictions), undervotes (50 percent of jurisdictions), and overvotes (49 percent of jurisdictions). During our visits to local jurisdictions, election officials in several jurisdictions told us that measuring overvotes was not a relevant performance indicator for jurisdictions using DREs because they do not permit overvoting, and that undervotes were not a meaningful metric because most voters focused on a limited range of issues or candidates and thus frequently chose not to vote on all contests. Figure 8 shows the percentages of small, medium, and large jurisdictions that collected information on various measures of accuracy. We estimated that 15 percent of jurisdictions measured voting system failure rates and 11 percent measured system downtime. A higher percentage of large and medium jurisdictions collected these performance data than small jurisdictions. Collection of these data was also related to the predominant voting method used by a jurisdiction, with jurisdictions that predominantly used DREs more likely to collect system data than those that used precinct count or central count optical scan voting methods (an estimated 45 percent of jurisdictions versus 23 percent or 10 percent, respectively). 
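The accuracy measures described above (overvotes and undervotes in particular) can be computed mechanically from ballot records. The following sketch is purely illustrative: the ballot representation, field names, and function are hypothetical constructions for this example, not any jurisdiction's actual tabulation logic. It simply shows why a DRE-style interface that enforces the selection limit cannot produce overvotes, while undervotes remain possible whenever a voter skips a contest.

```python
# Illustrative sketch: counting overvotes and undervotes for one contest.
# The ballot representation below is hypothetical, not any real system's format.

def tally_anomalies(ballots, contest, max_selections):
    """Count overvotes and undervotes for a single contest.

    ballots: list of dicts mapping contest name -> list of selections
    contest: name of the contest to examine
    max_selections: number of selections the contest allows (e.g., 1)
    """
    overvotes = undervotes = 0
    for ballot in ballots:
        chosen = len(ballot.get(contest, []))
        if chosen > max_selections:
            overvotes += 1        # more selections than permitted
        elif chosen < max_selections:
            undervotes += 1       # fewer selections than permitted
    return overvotes, undervotes

# Example: a vote-for-one contest across four hypothetical ballots.
ballots = [
    {"mayor": ["A"]},          # valid
    {"mayor": ["A", "B"]},     # overvote (a DRE would block this at the interface)
    {"mayor": []},             # undervote (voter skipped the contest)
    {"mayor": ["B"]},          # valid
]
over, under = tally_anomalies(ballots, "mayor", 1)
print(over, under)  # prints: 1 1
```

Dividing such counts by the number of ballots cast yields the kind of rate that jurisdictions could track across elections in place of subjective impressions.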
Figure 9 shows the percentages of small, medium, and large jurisdictions that collected information on voting equipment failures and downtime. Figure 10 shows the percentages by predominant voting method of all jurisdictions that collected data on equipment failures. Further, an estimated 55 percent of all jurisdictions kept a written record of issues and problems that occurred on Election Day, which could be a potential source of performance data. Large jurisdictions were more likely to keep a written record of issues or problems that occurred on Election Day. Specifically, we estimated that 79 percent of large jurisdictions kept such records, compared with 59 percent of medium jurisdictions and 52 percent of small jurisdictions. The responsibilities for monitoring or reporting voting system performance most often rested with local jurisdictions. We estimated that 83 percent of local jurisdictions had local officials responsible for performance monitoring or reporting, while states or other organizations (such as independent consultants or vendors) held such responsibilities in 11 percent and 13 percent of jurisdictions, respectively. The challenges in ensuring that voting systems perform securely and reliably are not unlike those faced by any technology user—application of well-defined standards for system capabilities and performance; effective integration of the people, processes, and technology throughout the voting system life cycle; rigorous and disciplined performance of security and testing activities; objective measurement to determine whether the systems are performing as intended; and an analytical and economically justified basis for making informed decisions about voting system investment options. 
These challenges are complicated by other conditions common to both the national elections community and other information technology environments: the distribution of responsibilities among various organizations, technology changes, funding opportunities and constraints, changing requirements and standards, and public attention. Although responsibility for voting system performance falls largely on local governmental units, state and federal governments have important roles to play as well. Therefore, all levels of government need to work together to address these challenges, under the leadership of the EAC. To assist the EAC in executing its leadership role, we previously made recommendations to the commission aimed at better planning its ongoing and future activities relative to, for example, system standards and information sharing. While the EAC agreed with the recommendations, it told us that its ability to effectively execute its role is resource constrained. The extent to which states and local jurisdictions adopt and consistently apply up-to-date voting systems standards directly affects the security and reliability of voting systems during elections. For the 2006 general election, a substantial proportion of states and jurisdictions had yet to adopt the most current federal voting system standards or related performance measures, meaning that the systems they employ may not perform as securely and reliably as desired. Beyond this, decisions by states and local jurisdictions to apply these latest standards for the 2008 election present additional challenges such as (1) whether the systems can be tested and certified in time for the election and (2) whether to adopt standards that are now undergoing revision rather than continue to use earlier standards or later adopt even newer standards. 
EAC plays an important role in ensuring the timely testing and certification of voting systems against the latest standards and in informing state and local decisions on whether to adopt these standards for the 2008 election. Accordingly, we have recommended that EAC define tasks and time frames for achieving the full operational capability of the national voting system certification program. These management elements would need to take into account estimating testing capacity and expected volume for the testing laboratory accreditation program, establishing protocols and time frames for reviewing certification packages, and setting norms for timely consideration and decision making regarding system certifications. Sharing this information with state and local election officials would help them to plan for system upgrades, testing, and state certification to meet their upcoming election cycles. States and local jurisdictions must also consider the timely adoption of standards in light of the additional work that is currently under way and planned to address known weaknesses in the national standards. For example, in addition to establishing minimum functional and performance requirements for voting systems, standards can also be used to govern integration of election systems, such as the accuracy, reliability, privacy, and security of components and interfaces. Accordingly, we have recommended that the EAC collaborate with NIST and the Technical Guidelines Development Committee to define the specific tasks, measurable outcomes, milestones, and resource needs required to improve the voting system standards. Identifying the incremental improvements to standards for several future election cycles and coordinating these with states and local jurisdictions would help those officials plan for these cycles and prepare the public for expected changes in voting technologies, security and reliability features, and compensating controls. 
Maximizing the performance of the voting systems that jurisdictions currently have and those they plan to use in the next general election means taking proactive steps between now and November 2008 to ensure that these systems perform as intended. These steps include activities aimed at securing, testing, and preparing these systems for operation. Although the vast majority of jurisdictions performed security, testing, and operational activities in one form or another for the 2004 general election, the extent and nature of these activities varied among jurisdictions and depended on the availability of resources (financial and human capital) committed to them. The challenge facing all voting jurisdictions will be to ensure that these activities are fully and properly performed, particularly in light of the security and reliability concerns that have been reported with electronic voting systems. Security, testing, and operational activities are to a large degree responsive to—and limited by—formal state and local directives. For 2004, election officials for some states identified various state and local directives for managing the security and reliability of their voting systems, including security plans, security testing, system acceptance testing, and voting equipment auditing. When appropriately defined and implemented, such directives can promote the effective execution of security and testing practices across all phases of the election process. As voting technologies and requirements evolve, states and local jurisdictions face the challenge of adapting and implementing the directives to meet the needs of their specific election environments. As previously stated, jurisdictions need to manage the triad of people, processes, and technology as interrelated and interdependent parts of the total voting process. 
Given the amount of time that remains between now and the November 2008 elections, jurisdictions’ voting system performance is more likely to be influenced by improvements in poll worker system operation training, voter education about system use, and vote casting and counting procedures than by changes to the physical systems. The challenge for voting jurisdictions is thus to ensure that these people and process issues are dealt with effectively. In this regard, the election management decisions and practices of states and local jurisdictions can benefit from the experiences and results of those with comparable election environments. In 2004 and again in 2006, EAC compiled such information into guidance documents for widespread use by election officials. However, as the election environment and voting systems continue to evolve, additional lessons and topics will undoubtedly surface. Accordingly, we have recommended that the EAC establish a process and schedule for periodically compiling and disseminating recommended practices for security and reliability across the system life cycle and that the practices be informed by information it collects on the problems and vulnerabilities of these systems. Incorporating the feedback obtained through actual voting system development, acquisition, preparation, and operations into practical guidance will help the election community operate more robustly and efficiently. Reliable measures and objective data are needed for jurisdictions to know whether the technology they use is meeting the needs of the user communities (both the voters and the officials who administer the elections). While the vast majority of jurisdictions reported that they were satisfied with the performance of their respective technologies in the November 2004 elections, this satisfaction was based mostly on the subjective impressions of election officials rather than on objective data that measured voting system performance. 
Although these impressions should not be discounted, informed decision making on voting system operations and technology investment requires more objective data. The immediate challenge for jurisdictions is to define measures and begin collecting data so that they can definitively know how their systems are performing. States and local jurisdictions can benefit from sharing performance data on voting systems, including information on problems and vulnerabilities. However, the diffused and decentralized nature of our election system impedes timely and accurate collection and dissemination of this type of information for particular voting system models. Accordingly, we have recommended that the EAC develop a process and associated time frames for sharing information on voting system problems and vulnerabilities across the election community. The national voting system certification process established in January 2007 provides a mechanism for election officials to report problems and vulnerabilities with their systems to the EAC. Not yet defined are the mechanisms to collect and disseminate information on problems and vulnerabilities that are identified by voting system vendors and independent groups outside of the national certification process. In addition, foreseeable changes in technology require jurisdictions to determine whether a particular technology will provide benefits over its useful life that are commensurate with life-cycle costs (acquisition as well as operation and maintenance) and to assess whether these collective costs are affordable and sustainable. Thus, the long-term challenge for jurisdictions is to view and treat voting systems as capital investments and to manage them as such, including basing decisions on technology investments on clearly defined requirements and reliable analyses of quantitative and qualitative return on investment. 
In closing, I would like to say again that electronic voting systems are an undeniably critical link in the overall election chain. While this link alone cannot make an election, it can break one. The problems that some jurisdictions have experienced and the serious concerns that have surfaced highlight the potential for continuing difficulties in upcoming national elections if these challenges are not effectively addressed. The EAC plays a vital role related to ensuring that election officials and voters are educated and well informed about the proper implementation and use of electronic voting systems and ensuring that jurisdictions take the appropriate steps—related to people, process, and technology—that are needed regarding security, testing, and operations. More strategically, the EAC needs to move swiftly to strengthen the voting system standards and the testing associated with enforcing them. However, the EAC alone cannot ensure that electronic voting system challenges are effectively addressed. State and local governments must also do their parts. Moreover, critical to the commission’s ability to do its part will be the adequacy of resources at its disposal and the degree of cooperation it receives from entities at all levels of government. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information, please contact Randolph C. Hite at (202) 512-3439 or by e-mail at [email protected]. Other key contributors to this testimony were Neil Doherty, Nancy Glover, Paula Moore, and Kim Zelonis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the 2000 national elections, concerns have been raised by various groups regarding the election process, including voting technologies. Beginning in 2001, GAO published a series of reports examining virtually every aspect of the elections process. GAO's complement of reports was used by Congress in framing the Help America Vote Act of 2002, which, among other things, provided for replacement of older voting equipment with more modern electronic voting systems and established the Election Assistance Commission (EAC) to lead the nation's election reform efforts. GAO's later reports have raised concerns about the security and reliability of these electronic voting systems, examined the EAC's efforts to address these concerns, and surveyed state and local officials about practices used during the 2004 election, as well as plans for their systems for the 2006 election. Using its published work on electronic voting systems, GAO was asked to testify on (1) the contextual role and characteristics of electronic voting systems, (2) the range of security and reliability concerns that have been reported about these systems, (3) the experiences and management practices of states and local jurisdictions regarding these systems, and (4) the longstanding and emerging challenges facing all levels of government in using these systems. Voting systems are one facet of a multifaceted, year-round elections process that involves the interplay of people, processes, and technology, and includes all levels of government. How well these systems play their role in an election depends in large part on how well they are managed throughout their life cycle, which begins with defining system standards; includes system design, development, and testing; and concludes with system operations. Important attributes of the systems' performance are security, reliability, ease of use, and cost effectiveness. 
A range of parties knowledgeable about elections or voting systems have expressed concerns about the security and reliability of electronic voting systems; these concerns can be associated with stages in the system life cycle. Examples of concerns include vague or incomplete voting system standards, system design flaws, poorly developed security controls, incorrect system configurations, inadequate testing, and poor overall security management. For the 2004 national elections, states' and local governments' responses to our surveys showed that they did not always ensure that important life cycle and security management practices were employed for their respective electronic voting systems. In particular, responses indicated that the most current standards were not always adopted and applied, security management practices and controls were employed to varying degrees, and certain types of system testing were not commonly performed. Moreover, jurisdictions' responses showed that they did not consistently monitor the performance of their systems. In GAO's view, the challenges faced in acquiring and operating electronic voting systems are not unlike those faced by any technology user--adoption and application of well-defined system standards; effective integration of the technology with the people who operate it and the processes that govern this operation; rigorous and disciplined performance of system security and testing activities; reliable measurement of system performance; and the analytical basis for making informed, economically justified decisions about voting system investment options. These challenges are complicated by other conditions such as the distribution of responsibilities among various organizations and funding opportunities and constraints. 
Given the diffused and decentralized allocation of voting system roles and responsibilities across all levels of government, addressing these challenges will require the combined efforts of all levels of government, under the leadership of the EAC. To assist the EAC in executing its leadership role, GAO has previously made recommendations to the commission aimed at better planning its ongoing and future activities relative to, for example, system standards and information sharing. While the EAC agreed with the recommendations, it stated that its ability to effectively execute its role is resource constrained.
The FAA’s mission is to provide a safe and efficient national aerospace system. FAA’s key aviation functions include regulating compliance with civil aviation safety standards and air commerce, operating the national air traffic management system, and assisting in the development of airports. The achievement of FAA’s mission is dependent in large part on the skills and expertise of its workforce. FAA consists of nearly 50,000 people, organized into 5 lines of business and several staff offices. Its workforce provides aviation services including air traffic control, maintenance of air traffic control equipment, and certification of aircraft, airline operations and pilots. FAA’s human resource management office is responsible for managing agencywide implementation of personnel reform and providing policy and guidance to regional human resource management divisions that manage the implementation of personnel reform within their areas of responsibility. In September 1993, the National Performance Review concluded that federal budget, procurement, and personnel rules prevented FAA from reacting quickly to the needs of the air traffic control system for new and more efficient equipment and flexibilities for attracting and hiring staff. In May 1994, building on these concerns, Congress directed the Secretary of Transportation to undertake a study of management, regulatory, and legislative reforms that would enable FAA to provide better air traffic control services without changing FAA’s basic organizational structure. The resulting FAA report to Congress, issued in August 1995, concluded that the most effective internal reform would be to exempt FAA from most federal personnel rules and procedures. In reporting on FAA’s request for these exemptions in October 1995, we concluded that, if the Congress decided to provide FAA with new personnel authority, the agency could be used to test changes before they were applied governmentwide. 
At that time, we emphasized the importance of establishing goals prior to the application of the new authority, noting that an evaluation of FAA’s efforts after some experience had been obtained would be important for determining the success of the effort and its governmentwide applicability. On November 15, 1995, Congress, in making appropriations for the Department of Transportation, directed the FAA Administrator to develop and implement a new personnel management system. The law exempted FAA from most provisions of title 5 of the United States Code and other federal personnel laws. The law required that FAA’s new personnel management system address the unique demands of the agency’s workforce, and, at a minimum, provide greater flexibility in the compensation, hiring, training, and location of personnel. Subsequent legislation reinstated title 5 requirements related to labor-management relations, and the Federal Aviation Reauthorization Act of 1996 placed additional requirements on FAA by requiring that any changes made to FAA’s personnel management system be negotiated with the agency’s unions. Accordingly, compensation levels became subject to negotiations with employee unions. On April 1, 1996, FAA introduced its new personnel management system. In January 2001, we designated strategic human capital management as a governmentwide high-risk area. As our January 2001 High-Risk Series and Performance and Accountability Series reports make clear, serious human capital shortfalls are eroding the ability of many agencies, and threatening the ability of others, to economically, efficiently, and effectively perform their missions. In 2002, our studies of human capital management in the federal government identified a variety of elements—critical success factors and practices for effective implementation of flexibilities—that are important for consideration of federal human capital management efforts. 
For example, systems to gather and analyze data, performance goals and measures, linkage between human capital management goals and program goals of the organization, and accountability are among the elements that we have identified as essential for effective strategic human capital management. Appendix III provides an overview of our March 2002 model for strategic human capital management and key practices for federal agencies’ effective use of human capital flexibilities we identified in December 2002. Many of these elements relate directly to weaknesses we have identified in our recent reviews of FAA. For example, in July 2001, we reported that a lack of performance measurement, evaluation, and rewards hindered the effectiveness of rulemaking reforms. In October 2001, we reported that the overall effectiveness of FAA’s training for air traffic controllers was uncertain and that FAA had not measured productivity gains from changes in controllers’ duties. We reported in June 2002 on FAA’s difficulties in acquiring and developing staff to meet agency needs through air traffic control workforce planning. Most recently, we reported in October 2002 on the inability of air traffic control management to determine the impact of new relocation policies because of a lack of baseline data. Once exempted from most provisions of title 5, FAA initiated a broad set of personnel changes. For the purposes of this report, we grouped them into the areas of compensation and performance management, workforce management, and labor and employee relations. Figure 2 shows some of the major initiatives in each area, as well as whether they required exemptions from title 5 personnel rules. FAA required exemption from title 5 rules in order to implement its new, broadbanded pay structure. Before obtaining that exemption, FAA paid its employees according to the General Schedule (GS) pay system mandated by title 5. 
In its 1995 report to Congress, FAA stated that the GS pay system—which rewarded employees for their length of service, rather than for their competencies, skills, or accomplishments—resulted in multiple levels of supervisors at the same grade level and pay range, an inability to grant pay increases until statutorily mandated time or experience requirements were satisfied, and the administrative burden of administering about 35 special GS pay rates that were exceptions to regular pay ranges. The Office of Personnel Management (OPM) echoed these concerns in an April 2002 report. OPM concluded that the GS system’s narrow pay ranges, time-based pay progression rules, and across-the-board delivery of annual increases were not effective in promoting performance-based pay. Once exempted from these provisions of title 5, FAA replaced the traditional grade and step pay system with a broadbanded pay structure that provides for a wider range of pay and greater managerial flexibility to attract, retain, and reward employees. The new pay band system includes plans tailored to specific employee segments: a core compensation plan for the majority of nonunion employees and negotiated versions of the core compensation pay plan for employees represented by unions; a unique pay plan for air traffic controllers and air traffic managers; and an executive pay plan for nonpolitical executives, managers, and some senior professionals. To illustrate the pay band system, under core compensation, the GS 15-grade pay schedule and step pay increases were replaced with a system in which employees are placed in a pay band under one of nine job categories, including a specialized category that comprises eight specialized occupations. Each career category contains two to five pay bands. Each pay band represents a minimum and a maximum range of pay. For example, the base pay for a band “D” clerical support employee is at least $23,600 but no more than $35,400. 
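A broadbanded structure of this kind amounts to a simple range lookup. The sketch below is a hypothetical illustration of that idea, not FAA's actual pay-setting logic; the only figure taken from the report is the band "D" clerical range of $23,600 to $35,400, and everything else (the table layout, names, and validation function) is invented for the example.

```python
# Hypothetical model of a broadbanded pay structure.
# Only the band "D" clerical range ($23,600-$35,400) comes from FAA's
# published example; the table layout and function are illustrative.

PAY_BANDS = {
    # (career category, band) -> (minimum base pay, maximum base pay)
    ("clerical", "D"): (23_600, 35_400),   # from FAA's core compensation example
    # ... additional (category, band) entries would go here
}

def base_pay_in_band(category, band, proposed_pay):
    """Return True if a proposed base pay falls within the band's range."""
    low, high = PAY_BANDS[(category, band)]
    return low <= proposed_pay <= high

print(base_pay_in_band("clerical", "D", 30_000))  # True: within the band
print(base_pay_in_band("clerical", "D", 40_000))  # False: above the band maximum
```

The contrast with the GS system is that, within a band, managers set pay anywhere in the range rather than stepping through statutorily fixed grade-and-step increments.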
Figure 3 shows the distribution of pay bands for career level job categories under core compensation. (For a more detailed comparison of the GS system and core compensation plan, see app. IV.) In its 1995 report to Congress, FAA reported that the federal performance management system under title 5 limited the ability of agency managers to reward their best employees. After being exempted from this system, FAA incorporated performance management elements into the new compensation system to encourage results-oriented behavior and to recognize and reward performing employees via permanent annual salary increases. For example, under its core compensation plan, all employees are eligible for a permanent pay increase, called an organizational success increase, based on the Administrator’s assessment of the extent to which the entire agency has achieved its annual agency goals. In addition, notably high-performing individuals may receive an additional permanent pay increase, called a superior contribution increase, based on supervisory recommendation. FAA has criteria for awarding superior contribution increases. These criteria include collaboration, customer service and impact on organizational success. Additional criteria may be used by some lines of business and staff offices because of their unique needs. FAA is not required to grant cost of living allowances or locality pay increases but elected to continue providing these pay adjustments, which are generally applicable to the federal pay system. FAA’s 1995 report to Congress also stated that the federal performance management system limited the ability of agency managers to deal with unacceptable performance. FAA’s legislative exemption from title 5 enabled the agency to establish its new performance management system. 
According to human resource management officials, this system focuses on human capital development by helping to make individual employees aware of their roles and responsibilities in helping the agency achieve its program goals and provides ongoing feedback and written evaluations to improve individual employee performance. The new performance management system incorporates a variety of feedback approaches in addition to traditional supervisor-to-employee feedback, including performance plans that discuss managers’ and employees’ agreements regarding job expectations and feedback from the employee to the supervisor. At the end of the performance evaluation cycle, employees receive a narrative performance summary instead of a year-end rating that defines employees’ performance in specific categories. The performance summary reflects an assessment of achievements based on outcomes and expectations, while professional competencies such as collaboration and customer service are elements of the new compensation system. As a result, the performance management system is not directly linked to pay for performance elements of FAA’s new compensation system. While FAA’s program documentation described union involvement and the use of employee focus groups in the development of the system, FAA did not systematically validate the final version of the performance management system with all employees before beginning implementation in 2002. Human resource officials said they planned to validate the new system by obtaining employee input through an employee attitude survey in 2003 and through continuing negotiations with employee unions and that these would allow for continuing refinements. Some of FAA’s workforce management reform initiatives required exemption from title 5 while others did not. For example, FAA’s workforce planning initiative did not require an exemption from title 5. 
On the other hand, changes in procedures governing hiring and locating staff, as well as some training initiatives, such as fee-for-service training programs, did require exemptions from title 5. In requesting exemption from title 5 requirements governing hiring and locating staff in 1995, FAA cited the inefficiencies of working through OPM to hire and geographically place qualified staff at key facilities or to reassign employees in response to changing needs. According to an FAA staffing task force, the agency had lost highly qualified candidates because managers could not fill jobs in a timely manner. FAA estimated that it took an average of 6 to 8 months to bring a new hire on board from outside the federal service using OPM as a hiring source and that it took, on average, 60 days to permanently fill a position internally. FAA also considered OPM allocations for executive positions excessively rigid, as any increases to the allocation provided had to be supported by the Department of Transportation and approved by OPM. Moreover, FAA stated that the temporary internal movement process (from one FAA location to another), also governed by OPM regulation, was equally inflexible because it limited the duration of temporary assignments and imposed onerous processing requirements. The movement process required paperwork to be processed every 120 days and could require up to seven separate personnel actions for a 2-year temporary assignment. FAA’s 1995 request for flexibilities in the area of training was based on perceived redundancies and inefficiencies in its training programs. According to an FAA personnel reform training task force report in 1996, centralized agency training programs required by title 5 provided standard training that did not always address specific business needs. FAA also requested exemption from title 5 to have the flexibility to provide unfunded or partially funded moves of employees to locations where they and their skills are most needed. 
According to FAA Air Traffic Services and human resource management officials, FAA historically interpreted title 5 rules as a requirement to fully reimburse all Permanent Change of Station (PCS) moves since the agency considered all such moves to be in the interest of the federal government. After Congress provided FAA with its new flexibilities, FAA developed a new framework for workforce planning to guide executive, occupational, and managerial/supervisory workforce planning. This did not require an exemption from title 5. With regard to hiring, FAA used its exemption from title 5 to establish hiring policies that allow FAA to hire applicants directly from outside the government and from other federal agencies without going through OPM. To do so, FAA established three hiring approaches: (1) using centralized registers, (2) announcing vacancies, and (3) authorizing on-the-spot hiring. According to FAA human resource management officials, the agency also used its exemption from title 5 to streamline staffing by decreasing the number of appointment types from 14 to 2 (temporary and permanent) and hiring authorities from approximately 500 to 1. FAA also established a flexible system for adjusting the number of executive positions in response to shifting agency priorities. This new system allows the Administrator to establish new executive positions and reassign and select the top management team. In the area of training, FAA (1) delegated responsibility for managing training funds and programs to its lines of business, (2) allowed users to select training from multiple providers, (3) created fee-for-service training programs, and (4) provided broader authority to fund degree programs for employees. The latter two initiatives required exemptions from title 5. Another area of workforce management for which FAA used its exemption from title 5 requirements was relocating employees. 
As part of its reform, FAA delegated the authority to determine eligibility for and amount of benefits to each line of business and provided three PCS funding options: (1) full PCS reimbursement, (2) fixed relocation payments, and (3) unfunded moves. As before reform, if the move is in the interest of the government, FAA will fully reimburse the individual for costs associated with the move. Under the new PCS rules, if FAA determines that it will derive some benefit from a move, even though the move is not in the interest of the government, the agency may offer a fixed relocation payment of up to $25,000. If a move is not in the interest of the government and FAA does not determine that it will derive some benefit from the move, there is no basis for offering PCS funding. However, as a result of FAA’s personnel reform, employees may choose to make unfunded moves at their own expense for personal reasons, to gain experience needed for professional advancement, or for promotion. FAA was ultimately not exempted from title 5 requirements governing labor-management relations. As part of its overall reform effort, it undertook several initiatives in the area of labor relations that did not require exemption from title 5. For example, FAA and its unions established a new forum—the National Labor Management Partnership Council—for union representatives and senior management to exchange information and ideas. To improve overall employee relations, FAA also established a new forum for nonunion employees to facilitate communications between employees and FAA management that also did not require an exemption from title 5. Similarly, in consultation with union and nonunion employee groups, FAA developed a new policy promoting a Model Work Environment to create and maintain an effective working environment for its employees by managing diversity and practicing equal employment opportunity and affirmative action. 
In addition, on July 1, 1998, FAA established an Accountability Board to standardize procedures to ensure management’s uniform and effective handling of sexual harassment allegations and related misconduct of a sexual nature. In July 2000, the scope of the Board was expanded to include harassment and other misconduct that creates or may create an intimidating, hostile, or offensive work environment based on race, color, religion, gender, sexual orientation, national origin, age, and disability. The establishment of the Board did not require exemption from title 5. FAA required exemption from title 5 to establish the Guaranteed Fair Treatment Program, an alternative dispute resolution method in which a three-person review panel adjudicates employee grievances. FAA intended the new program to be the only method by which employees not covered by a union agreement could seek administrative reviews of grievances and to replace the traditional approach under title 5 rules involving the Merit Systems Protection Board. (As discussed later, Congress subsequently required FAA to reinstate the traditional title 5 process, and FAA now offers employees the choice of the two processes for resolving disputes.) While FAA has completed many of the initiatives that required changes to policy and procedures, it has not yet completed implementation of some of the more complex elements of the personnel reform it began in 1996, specifically compensation and performance management systems and workforce planning initiatives (see fig. 4). FAA officials said that the diversity of skills and duties of FAA’s workforce, as well as negotiations with unions representing a large number of employees, has somewhat slowed the pace and extent of implementation of compensation and performance management initiatives. 
According to the Assistant Administrator for Human Resource Management, FAA’s implementation strategy was to establish a broad policy framework and then focus incrementally on individual elements of reform to eventually achieve full implementation. Between April 1996 and October 1998, for certain workforce management and labor and employee relations initiatives, FAA defined the new flexibilities available through agencywide “corporate” policies and then empowered the individual lines of business to adapt and make use of the new tools as appropriate. According to human resource management officials, these initiatives helped FAA “jump-start” its reform effort, while other reform initiatives, such as compensation, required varying incremental degrees of development because of the diverse characteristics of FAA’s workforce. Human resource management officials said other initiatives, such as workforce planning, were considered to be of a lower priority in terms of implementation. As of September 30, 2002, FAA had fully implemented its broadbanded compensation plans, including the performance incentive increases, for about three-quarters of the agency’s workforce. About 8,000 nonunion employees are paid under the core compensation plan, all senior executives (about 180) are paid under the executive compensation plan, and about another 9,000 employees represented by three of FAA’s nine unions are paid under negotiated versions of the core compensation plan. Because the performance incentive elements of the new system were not incorporated until late 2001, fiscal year 2002 will be the first year in which all employees under core compensation experience a full cycle with all the elements of its reformed compensation system fully in place. In addition, more than 19,000 air traffic controllers are paid according to a specialized, negotiated pay plan that includes pay banding and superior contribution increases. 
The remainder of FAA’s workforce (about 13,000), most notably those union employees whose union has not reached a new agreement with FAA, continues to be compensated under the traditional GS grade and step system under title 5 rules, as shown in figure 5. The implementation of FAA’s new performance management system has not yet been completed for most FAA employees. In 1995, prior to its reform effort and in response to new performance management regulations issued by OPM, FAA decided to establish a separate way of managing performance. At this time, it uncoupled its performance management system from its compensation system, based performance appraisals on a two-tiered evaluation (“meets expectations” or “does not meet expectations”) of employees’ performance against performance standards, provided for year-end summary ratings, and established supplemental criteria (such as making a significant contribution to the efficiency, economy, or improvement of government operations) to use as a basis for merit pay. In 1999, as part of its reform effort, FAA began development of a new performance management system. This new system consists of a narrative evaluation of employees’ performance against performance standards combined with feedback and coaching. The new performance management system does not provide for a year-end summary rating or a basis for merit pay. Instead, the new compensation system includes criteria that are separate and distinct from the performance management system (such as collaboration, customer service, and impact on organizational success) for awarding merit-based pay raises, which are called superior contribution increases. FAA implemented this new performance management system on October 1, 2001, for the Office of Human Resource Management, the Office of Regions and Center Operations, and the Regulation and Certification line of business. 
Since October 2001, additional staff in a variety of FAA organizations have been placed under the new performance management system, bringing the total under the system to about 20 percent of FAA’s total workforce. As with the compensation system, the new performance management system must be included in the negotiated agreements with FAA’s employee unions. FAA implemented most workforce management initiatives in 1996 by defining the flexibilities available through agencywide “corporate” policies and empowering the individual lines of business to adapt and make use of the new tools as appropriate for their staff. The individual lines of business adapted agencywide policies detailing the flexibilities available for hiring, training, and relocating employees by issuing parallel policies to guide their respective workforces and address any applications unique to their staff. While FAA established similar agencywide corporate policies and guidance for developing workforce plans for three staff levels—executive, managerial and supervisory, and occupational—this initiative is still under way. FAA began its executive workforce planning in November 2000. Development of Individual Development Plans for executives—the final element of the executive workforce planning effort—was originally scheduled to be finalized in August 2001 but was still under way at the time of our review. FAA has not yet initiated its managerial and supervisory workforce planning effort, which is set to begin in fiscal year 2003. FAA’s occupational workforce planning, which was originally scheduled to be completed in September 2001, was still under way at the time of our review. 
Human resource management officials said that four of the five lines of business—Airports, Air Traffic Services, Regulation and Certification, and Research and Acquisitions—had completed their occupational workforce plans, and the remaining line of business— Commercial Space Transportation—was still developing a plan. FAA announced a series of agencywide policies governing labor and employee relations in 1996 that established the National Labor Management Partnership Council, the National Employees Forum, the Guaranteed Fair Treatment Program, and a policy promoting a Model Work Environment. FAA required less time to develop and implement these changes because comparable labor and employee representative groups were already in place prior to the reform effort and FAA had existing appeal processes and workplace improvement policies that served as a basis for the Guaranteed Fair Treatment Program and Model Work Environment. The variety of skills and areas of technical expertise represented in FAA’s workforce has affected the implementation of the agency’s new compensation plan. For example, the agency has a unique pay plan for air traffic controllers in the field based on the complexity of the facility, while FAA’s new core compensation plan is based on the duties and responsibilities of 16 different types of positions (ranging from students to pilots to physicians). The schedule for implementing changes in compensation and performance management has been dictated, in part, by the timing of negotiations with employee unions and the ability of FAA and its unions to reach agreement on the new systems. For example, because FAA’s contract with the National Air Traffic Controllers Association (NATCA), the organization representing FAA’s largest group of unionized employees, had expired, management had to negotiate a new agreement in 1998 before it had completed development of its new core compensation pay plan. 
While the air traffic pay plan, like the core compensation plan, is intended to include annual pay increases based on individuals’ performance, these performance-based increases have not been implemented as intended due to an unresolved dispute between NATCA and FAA management over the details of implementation. As a result, the air traffic pay plan distributed annual performance-based incentive pay equally among all union members for fiscal years 1999, 2000, and 2001, unlike the core compensation plan developed for the rest of the agency, in which only higher-performing individuals may receive performance-based incentive pay. At the time of our review, FAA and the air traffic controllers union had not yet determined how fiscal year 2002 and future years’ incentive pay increases would be allotted. According to human resource management officials, the new core compensation plan has not been negotiated for union employees who make up about 30 percent of FAA’s total workforce, and the need to negotiate the incorporation of compensation and performance management initiatives into union contracts has increased the length of time needed to negotiate some contracts. For example, before 1996, FAA and the Professional Airways System Specialists union took from 3 to 14 months to negotiate an agreement, but the negotiation time more than doubled to 29 months for the latest agreement. FAA and the National Association of Air Traffic Specialists have been attempting to negotiate a new contract since 1997, and the parties had not yet reached agreement at the time of our review. Labor relations officials attributed increases in negotiation times to the expanded scope of contract negotiations, which now includes compensation, a subject that historically was not negotiated. The performance management system has also not yet been implemented for most of the unionized segments of the agency’s workforce. 
According to FAA officials, 2,324 union employees in FAA’s Office of Regions and Center Operations and Office of Public Affairs, or only about 5 percent of FAA’s union-represented workforce, were under the new system at the time of our review. FAA had little or no data on the effects of many of the reform initiatives. Human resource management officials cited positive effects of the reform initiatives in the areas of compensation and workforce management, while in the area of labor and employee relations, labor management officials provided a limited amount of data suggesting that labor relations had not improved. Managers and employees with whom we spoke in our interview effort generally expressed less positive views on the effects of reform initiatives. FAA had not systematically collected or analyzed data to determine whether the new compensation system had achieved its objective of increasing the agency’s ability to attract and retain employees. Human resource management officials said the new compensation system had achieved this objective. They said the initiative had made the agency more competitive in hiring because FAA can now offer higher starting salaries within the wider range of pay afforded by the pay bands. In addition, air traffic officials we spoke with said that the air traffic control pay plan has made it easier to staff hard-to-fill positions at busier air traffic facilities. They noted, however, that they did not have a definition for hard-to-fill positions and had not tracked the extent to which positions they might consider hard to fill had been filled more or less quickly since the new pay plan was instituted. In contrast, many FAA managers and employees we interviewed were critical of the new compensation system. Nearly two-thirds of those responding to our structured interview (110 of 176) disagreed or strongly disagreed that the new pay system is fair to all employees. 
While we did not attempt to evaluate the concerns raised during interviews, we did find some evidence that helps explain these perceptions of unfairness. For example, concerns about air traffic controller pay disparities are supported by a Department of Transportation Inspector General report. This report found that FAA’s initial implementation of the new compensation system led to inequities in pay between air traffic managers, supervisors, and specialists in field facilities, who are covered by the air traffic pay plan that FAA negotiated with NATCA in October 1998, and a much smaller group of air traffic managers and supervisors in regional and headquarters locations, who (together with other FAA managers and employees) are covered by the new core compensation plan. Because of differences between the two plans, managers and employees transferring from regional and headquarters locations to field facilities were not eligible for the same pay increases as those who were already assigned to field facilities in October 1998. To address this situation, FAA issued new guidance in July 2001 that established consistent rules for setting pay when employees move within and among the various pay systems in FAA, including movements between field positions and positions in regional offices and headquarters. Even so, perceptions of unfairness persist. According to the President of the FAA Conference Manager’s Association (FAACMA), the new guidance created the perception among some managers and employees of a financial disincentive for air traffic controllers to move from field facilities to regional offices or headquarters to gain supervisory and managerial experience. Further, the FAACMA President, as well as some controllers with whom we spoke, stated that such a move would result in a significant loss of pay—generally about $10,000 to $20,000. 
“Because of pay discrepancies,” one regional air traffic manager said, “we can’t get highly paid employees to move over to management positions.” Human resource management officials said that, while some field employees who move to positions in regional offices or headquarters would see a pay reduction of $10,000 or $20,000, not all such moves would result in such a pay reduction. According to our review of FAA’s July 29, 2001, guidance, an unfair disparity in pay between air traffic controllers would be created only when managers and employees were paid above or below established pay bands. At our request, FAA analyzed the salaries of its air traffic control staff and determined that 327, or fewer than 2 percent, of about 20,000 controllers (including supervisors, managers, and employees) were paid above current pay band maximums. (FAA’s analysis did not identify any staff being paid below established pay band minimums for their positions.) When we compared the distribution of 2002 base pay for all air traffic controllers in field facilities and in regional and headquarters facilities, we found that the regional and headquarters controllers are generally paid less under core compensation than the field controllers are paid under the air traffic pay plan. As shown in figure 6, the percentage of controllers paid between $100,000 and $130,000 is smaller in the regions and in headquarters than in the field. This is consistent with FAA’s goal of providing higher levels of pay to controllers in an operational environment. In addition, the percentage of controllers paid between $60,000 and $80,000 is greater in the regions and in headquarters than in the field. According to human resource management officials, the pay rates of many field employees and supervisors can be accommodated within the pay ranges of regional office and headquarters management positions, as shown above. 
Thus, they said that pay discrepancies should not affect the ability to entice field employees to move into management positions. However, it is understandable that some air traffic managers and controllers perceive a financial disincentive for moving from the field to a regional office or headquarters because, although the range of pay under both systems is comparable, the number of higher-paid positions is greater in the field than in the regional offices or headquarters. To the extent that these perceptions persist, FAA may find it more difficult to place its most experienced air traffic managers in regional offices and headquarters. However, this disparity is consistent with FAA’s goal of basing pay on the operational environment and is explicitly stated in FAA’s July 2001 pay plan for air traffic managers and controllers. A general perception of unfairness regarding FAA’s new compensation system has led to increased unionization among FAA employees outside of the air traffic services line of business as well as within it, according to both internal and external sources. FAA human resource officials said that considerable unionization began before such systems as core compensation were implemented and that most concerns cited during unionization efforts involved uncertainty and the loss of guarantees, not unfairness. However, the introduction of the pay system coincided with an accelerating increase in the number of employees seeking union representation after FAA began its reform effort. For example, employees represented by unions (as a percentage of FAA’s total workforce) increased from 63 percent prior to the reform in 1995 to 66 percent in 1998 and to 79 percent by 2001. FAA labor relations officials and spokespersons for new unions at FAA told us that a perceived inequity regarding pay was the prime reason new unions were formed. 
A 1999 study by the National Academy of Public Administration (NAPA) also found that real and perceived inequities in levels of pay were “major contributors to the view among a growing number of employees that you must belong to a union to get your fair share.” A more recent FAA study in 2001 likewise acknowledged that the new pay system “may be one possible explanation” for the increase in unionization. Between 1998—when FAA began testing and implementing its new pay system—and 2001, the number of employees choosing representation by unions increased nearly 20 percent (from about 32,800 to more than 38,800 employees). Figure 7 shows the number of FAA employees represented by unions from 1991 through 2001. Because FAA had not completed a full appraisal cycle for staff under its new performance management system at the time of our review, FAA had little data, and we were not able to obtain the views of managers and employees on the effects of the new system. We noted that FAA’s performance management approach does not use a multi-tiered rating system to rate performance. We have previously raised concerns that such approaches may not provide enough meaningful information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. According to human resource management officials, the compensation system provides a means of recognizing and rewarding top performers through separate assessments not directly linked to performance assessments under the performance management system. The measurable element related to performance management is the number of employees who receive superior contribution increases under FAA’s new compensation system. About 20 percent of employees are to receive the highest superior contribution increases (a 1.8 percent addition to base pay), and 45 percent are to receive the next highest level of superior contribution increases (a 0.6 percent increase in base pay). 
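Because superior contribution increases are permanent additions to base pay, their effect is simple compounding on an employee's salary. The following is a minimal illustrative sketch, not FAA's actual payroll logic: the tier shares (about 20 percent and 45 percent) and increase percentages (1.8 and 0.6 percent) come from the figures above, while the salary amount, tier labels, and function are hypothetical.

```python
# Illustrative sketch of the two superior contribution increase (SCI) tiers
# described above. Tier shares and percentages are from the report; the
# $80,000 salary and the tier labels are invented for illustration.

SCI_TIERS = {
    "highest (top ~20% of employees)": 0.018,   # 1.8% permanent addition to base pay
    "next highest (~45% of employees)": 0.006,  # 0.6% permanent addition to base pay
    "none (remaining ~35%)": 0.0,               # no SCI this cycle
}

def apply_sci(base_salary: float, tier: str) -> float:
    """Return the new base salary after a permanent SCI for the given tier."""
    return round(base_salary * (1 + SCI_TIERS[tier]), 2)

salary = 80_000.00  # hypothetical annual base pay
for tier, rate in SCI_TIERS.items():
    print(f"{tier}: ${apply_sci(salary, tier):,.2f} (+{rate:.1%})")
```

Because the increases are permanent rather than one-time bonuses, an employee who repeatedly earns the top SCI would see the 1.8 percent compound year over year, which is what distinguishes this design from a lump-sum award.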
Whereas human resource management officials provided some limited data to support their views that reform initiatives had improved the agency’s flexibility in hiring and relocating employees, the managers we spoke with were less likely to see positive results. According to human resource management officials, FAA’s use of the new hiring flexibilities, though restricted by hiring freezes, has reduced external hiring times from an average of 6 months to as little as 6 weeks. However, the Department of Transportation’s Office of Inspector General, when reviewing FAA’s personnel reform in 1998, questioned FAA’s ability to support this assertion in the absence of data. (See fig. 8.) Throughout our review, we asked FAA officials from both the human resource management office and the lines of business for any documentation or data to support the reduction in hiring times, and they were unable to provide any such data. At the close of our review, however, human resource management officials cited some limited data resulting from the Federal Air Marshal Program. According to these officials, the air marshal recruitment effort following the terrorist attacks of September 11, 2001, was one of the largest ever undertaken by FAA. (FAA received and processed more than 200,000 applications.) The officials said the air marshal hiring examples were intended to illustrate that the new policies allow positions to be filled quickly, even in the case of large recruitment efforts, and that it would not have been possible to fill the air marshal positions in the numbers and time frames required without the flexibilities available under FAA’s personnel system. They provided data reflecting a sample of approximately 1,000 candidates for the air marshal positions. Of those candidates hired, about 30 percent (140) were hired and placed within 6 weeks. In total, 70 percent (333) were hired and placed within 8 weeks. 
In contrast to the positive views of human resource management officials, FAA managers had less positive views on the effects of hiring reforms, while employees, who are less involved in the hiring process, had mixed views. Among the 46 managers we interviewed, only about a third (15) agreed or strongly agreed that the initiatives have improved the ability of their line of business or staff office to fill job vacancies. Furthermore, only 12 of the 46 managers believed the speed of hiring has improved. These opinions, while not necessarily representative of all FAA managers today, are similar to the views expressed by FAA managers in 1998. According to a survey FAA conducted then, 34 percent of managers responding said that FAA’s streamlined staffing procedures had made it easier to fill vacancies in their organization, and 32 percent said the speed of hiring had improved. Human resource management officials also said that new policies governing the relocation of employees had given managers more flexibility in relocating employees and employees more flexibility in making career decisions. Under these new policies, FAA may provide fixed relocation payments as well as full funding for PCS moves, and it allows unfunded moves, which were not allowed under FAA's prior policy. Figure 9 shows that the majority of moves between field offices for managers from fiscal year 1999 through 2001 (the only years and type of moves for which data were available) were unfunded. 
In contrast to the positive views of FAA human resource management officials, FAACMA representatives raised concerns about the impact of the new relocation policies in the air traffic services line of business. They suggested that the policies might have unintended consequences, including a smaller pool of qualified applicants, reduced diversity in potential applicant pools and subsequent discrimination in filling positions, and lower employee morale if fluctuations in the annual funding for relocation payments led to disparities in the payments for comparable moves over time. Air traffic officials said they were still reviewing these concerns and planned to comment in the near future. Although FAA’s Office of Labor Relations did not have historical, agencywide data to quantify an increase in grievances, FAA labor management officials said that the number of grievances filed at the national level by employees represented by unions had increased and that this increase was a sign that the initiatives had not achieved the reform objective of establishing a collaborative labor-management relations environment that would minimize the traditional adversarial relationship. They said that the number of grievances filed began to increase following the personnel reform changes the agency had made. For example, they noted that grievances increased in 1999, when the core compensation plan was implemented. However, human resource officials said that grievances by union employees could not have pertained to implementation of the compensation pilot because the pilot test applied only to nonunion employees, not to union employees. The Office of Labor Relations implemented a new system for tracking grievance data in October 2001 and began systematically collecting information on the sources (such as headquarters, regions, and unions) and subjects (such as compensation, use of leave, and discipline) of grievances filed across the agency. 
While limited data suggested that FAA’s introduction of an alternative dispute resolution program for employees not represented by unions did reduce the processing times for resolving appeals, employees’ reactions to the new system suggest that many employees did not see this initiative as an improvement. FAA introduced its internal alternative dispute resolution approach—the Guaranteed Fair Treatment Program—in April 1996 in an effort to streamline the appeals process. This approach met with resistance from employees and led Congress, in 2000, to reinstate the traditional title 5 process that uses the Merit Systems Protection Board. As a result, FAA now offers employees the choice of using either the guaranteed fair treatment program or the traditional title 5 process. The only data human resource officials were able to provide on appeals dated back to fiscal year 1997. Although these data are old, they indicated that for fiscal year 1997, appeals went through the guaranteed fair treatment process more quickly (5 to 7 months) than through the Protection Board process (10 months). Even so, the Deputy Assistant Administrator for Labor Relations said that employees, who have been able to choose between the two processes, have generally not chosen the guaranteed fair treatment process. He said that one reason is that its potential benefits, such as the employee’s right to help select the arbitrator, have not been effectively communicated to employees. In addition, according to the Deputy Assistant Administrator, both FAA managers and union leaders have complained about having to pay the cost of the arbitrator, while employees have complained about having to pay their own attorneys’ fees regardless of the outcome of the appeal. In contrast, FAA reimburses an employee’s legal fees if the employee wins his or her appeal through the Protection Board process. 
Most FAA managers and employees we interviewed said that labor and employee relations had changed in the last 5 years. For example, 130 of the 176 managers and employees we interviewed agreed or strongly agreed that labor-management relations had changed in the last 5 years. Of those 130, 75 said that labor-management relations had declined. Similarly, 130 of the 176 managers and employees we interviewed said employee morale had changed in the last 5 years, and of those 130, 99 said that employee morale had declined. While employees’ perceptions regarding the changes in labor and employee relations cannot be linked directly to FAA’s personnel reform, some employees cited specific reform initiatives, such as compensation and the Model Work Environment established to improve employee relations, when discussing the decline of labor-management relations and morale. Union representatives for three of FAA’s nine unions said that a complaint filed against FAA by the Federal Labor Relations Authority (FLRA) in March 2001 had reduced collaboration between labor and management. FLRA charged FAA with bargaining in bad faith because it had refused to sign an agreement negotiated with the American Federation of State, County and Municipal Employees, a union that represents employees at FAA headquarters. FAA management did not sign the agreement and submitted it instead to the Office of Management and Budget (OMB) for review. OMB subsequently disapproved some portions of the contract. Following an investigation of the circumstances, FLRA directed FAA management and the union to sign and implement the contract. However, in September 2002, an administrative law judge recommended that FLRA dismiss the union's complaint, finding that FAA clearly gave notice to the union of the OMB approval condition and that the union agreed to that condition. 
In the area of employee relations, FAA provided us with some data that may support the views of FAA officials that the Model Work Environment has had a positive effect. A recent decline in the number of equal employment opportunity (EEO) complaints may, to an unknown extent, reflect the effects of FAA’s Model Work Environment. These complaints are concerns expressed by employees about legally prohibited discrimination on the basis of race, color, religion, sex, national origin, age, or handicap. An analysis by FAA’s Office of Civil Rights of data it had collected on the number and types of formal EEO complaints showed that while such complaints increased in the years immediately following the implementation of the Model Work Environment in 1996, they began to decline 3 years later. As figure 10 shows, the number of EEO complaints increased from 412 in 1996 to 635 in 1998 and then declined to 485 in 2001. About three-quarters of the FAA managers and employees we interviewed (134 of 176) agreed or strongly agreed that they understood the goals of the Model Work Environment. These goals include reflecting diversity and eliminating discrimination and harassment in the workplace, which are common causes of EEO complaints. While some employees cited positive effects of the program, other employees were skeptical of its impact. Figure 11 illustrates FAA employees’ divergent views on the Model Work Environment. Even though the decrease in the number of EEO complaints cannot be directly linked to the Model Work Environment initiative, the availability of data and analysis on EEO complaints could provide one objective basis for FAA to discuss the effects of this policy, assess its efficacy, and address the concerns of those employees who view its impact less positively. 
FAA’s lack of empirical data on the reform effort’s effects is one indication that it has not fully incorporated into its reform effort elements that we and others have identified as important to effective human capital management. The lack of baseline and comparative data for analysis and the lack of performance goals and measures have made it difficult to objectively evaluate the effects or success of FAA’s reform effort; systems to gather and analyze relevant data provide a basis against which performance goals and measures can be applied. FAA also has not gone far enough in establishing linkage between reform goals and program goals of the organization, another element we have identified as important to effective human capital management. The lack of these elements has been pointed out repeatedly in evaluations of FAA’s human capital reform effort, but FAA has not developed specific steps and time frames by which these elements will be established and used for evaluation. Incorporating these elements could also help FAA build accountability into its human capital management. 
FAA human resource management officials agreed that the agency should have spent more time to develop baseline data and performance measures before implementing the broad range of reforms but said that establishing measures and goals and reaching consensus on their use was a complex and difficult task with which all federal agencies struggle. They said the agency was under significant pressure to rapidly implement reforms and that one impact of FAA’s incremental approach to implementing the reforms was that baseline measures tended to change as more people were brought under the reformed systems. Human resource management officials also said that, while FAA has not systematically collected data and analyzed results to identify the benefits of all of the reform initiatives, the Office of Human Resource Management has taken a number of steps since 1998 to increase evaluation and measurement of some human resource management activities and outputs. Actions they cited (in addition to the previously discussed evaluations of compensation implementation) included meeting with consultants, human resource managers, and intergovernmental groups and providing briefings to FAA management. While we were in the final stages of our review, they prepared, in response to our request, an informal report that described the types of measures they were planning to apply, or had recently begun applying, as part of a “Balanced Scorecard” approach to assessing human resource management activities. The measures in the scorecard approach are based on existing sources of data—customer surveys conducted by the Department of Transportation and FAA employee attitude surveys—as well as new data related to the hiring process, such as the “Time to Fill” (a vacancy) questionnaire, results from employment selection feedback questionnaires, a survey for new recruits and, since December 1999, a separation survey for employees leaving the agency. 
Human resource officials said they had been “strategically refining” the employee attitude survey since 1995 to address key human capital issues, such as clarity of performance expectations and workforce planning. Our work on strategic human capital management in the federal government has found that many federal agencies have difficulties in defining goals and measures and developing and using performance information to evaluate the effectiveness of human capital management efforts but that high-performing organizations do so. In cases where evaluations show that sufficient progress is not being made, high-performing organizations use data to identify opportunities for improvement. Similarly, the National Academy of Public Administration (NAPA) has reported the need for performance data, goals, and evaluation to determine progress, make midcourse corrections, and assign accountability for achieving the desired outcomes in federal human capital management efforts. NAPA reported that, in the absence of such systematic evaluation information, the human capital management process will be driven by anecdotal information that may, or may not, reflect the condition of human capital management in the organization. Elements we have identified as facilitating the success of improvement initiatives include establishing clear goals and objectives for the improvement initiative, concrete management improvement steps that will be taken, key milestones that will be used to track the implementation status, and cost and performance data that will be used to gauge overall progress. In addition to the lack of performance data, the performance goals and measures for personnel reform in FAA’s human resource management and strategic mission plans are qualitative and do not consistently lend themselves to measurement or assessment, as they are not specific, measurable, and time-based. 
For example, the goal related to reform in FAA’s 1999 human resource management strategic plan is to “ensure that FAA has the right people doing the right work at the right time at the right cost” and has the following measures associated with it: increased flexibility to pay competitive salaries; increased ability to attract and retain high performers; increased managerial flexibility to assign, locate, and manage the performance of employees more effectively; and decreased hire cycle time. This goal and its associated measures do not lend themselves to specific, quantitative, and time-based evaluation. For example, while “decrease hire cycle time” implies that hire cycle time will be measured as part of evaluating the achievement of this goal, it does not establish a quantitative basis for assessment or specify a period of assessment. A more specific, quantitative, and time-based measure might be to “decrease median or average hire cycle time by September 2003 by X percent (from median or average cycle time for fiscal year 2002) for Y percent of all new hires.” We reported on FAA’s weaknesses in developing and using performance information in our report on the results of governmentwide surveys of performance management issues in May 2001. In that report, we found that FAA managers we surveyed reported they did not consistently use performance measures or data and that FAA was worse than the rest of the federal government on multiple aspects of performance measurement and the use of performance information. For example, we found that the agency was statistically significantly lower than the rest of the government in the percentage of managers who reported that they had outcome, customer service, or quality performance measures; and in the percentage of managers who reported that they used performance information to set program priorities, allocate resources, adopt new approaches, or coordinate program efforts. 
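A measure of the specific, quantitative, time-based kind suggested above could be evaluated with a simple calculation. The sketch below is illustrative only: the cycle-time figures and the 20 percent reduction target are hypothetical assumptions, not FAA data.

```python
from statistics import median

# Hypothetical hire cycle times in days -- NOT actual FAA data.
fy2002_cycle_times = [95, 110, 87, 120, 140, 101, 133, 98]
fy2003_cycle_times = [70, 88, 75, 102, 91, 80, 115, 69]

baseline = median(fy2002_cycle_times)   # FY2002 median, the comparison base
current = median(fy2003_cycle_times)    # FY2003 median, measured by the deadline
percent_decrease = (baseline - current) * 100 / baseline

TARGET_DECREASE = 20  # illustrative goal: a 20 percent reduction by FY2003
goal_met = percent_decrease >= TARGET_DECREASE
print(f"Median hire cycle time fell from {baseline} to {current} days "
      f"({percent_decrease:.1f}% decrease); goal met: {goal_met}")
```

The point of such a measure is that, with baseline data in hand, progress becomes a mechanical check against a stated target rather than a matter of anecdote.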
At the time of our review, human resource management officials were still in the process of developing baseline data, performance goals, and measures and were still working to identify potential linkages between the agency’s human capital management reforms and the program goals of the organization. The types of data and measures proposed by human resource management officials are comparable to those that have been historically suggested—many of them since FAA initiated development of its personnel reform in 1995—and their implementation is an important effort. However, the balanced scorecard measurement approach proposed by human resource management officials focuses primarily on the work environment and processes within the Office of Human Resource Management and the hiring process rather than on the many other human capital management reform initiatives being implemented across the agency. According to FAA human resource management officials, the office had been working for more than a year to expand the scope of the scorecard to incorporate measures with wider implications for all of FAA, in response to discussions with human resource managers and based on information from FAA customers and employees. Table 1 provides an overview of the balanced scorecard measures proposed by the human resource management office, highlighting those that focus on the activities and output of the Office of Human Resource Management. An expanded overview of these performance measures that includes areas of measurement and proposed data sources is provided in appendix V. Clearly linking an agency’s overall human capital management strategy to its program goals is another element we have identified as key to effective human capital management. In a 1997 review of FAA’s personnel reform, the Volpe National Transportation Systems Center highlighted this issue of linkage, as shown in figure 12. 
While FAA has taken some steps to link its human capital reform initiatives to its program goals, these steps do not go far enough to help the agency measure the reform’s success. Specifically, FAA incorporated various aspects of personnel reform into its 1999 strategic human resource management plan, which stated that performance measurement was to focus on attaining organization goals but did not establish the measures with which to do so. Similarly, FAA’s 2001 strategic plan, prepared under the Government Performance and Results Act, includes a goal for the agency to “fundamentally change the way it operates by implementing personnel reform” but does not explicitly link this goal for personnel reform to organizational program goals of aviation safety and system efficiency. Human resource management officials said that organizational and individual incentive goals established under the compensation system explicitly linked individual performance to agency goals, including safety and system efficiency, and that the standards for performance under FAA’s new performance management system directly reflect agency and organizational programmatic goals. Nonetheless, linkage between FAA’s personnel reform goals and the agency’s programmatic goals continues to be weakened by a lack of specific, quantitative, and time-based measures and goals. FAA’s lack of relevant data, analysis, and performance goals and measures has been repeatedly articulated since 1995 by other internal and external reviews of the reform effort. While these reviews have called for FAA to incorporate these elements into its reform effort, and several recent studies have also highlighted the issue of linkage, FAA has not established and carried out a plan with specific steps and time frames for doing so. A chronology of these studies is provided in table 2. 
Several of these studies also identified problems related to a lack of ownership for the reform effort or a lack of accountability for implementation or results. For example, in 1999, the National Academy of Public Administration identified the lack of ownership for personnel reform as a challenge that must be resolved. (See fig. 13.) As shown in figure 14, a 1998 departmental review found that FAA had not clearly established accountability for implementation of the reform initiatives. According to the most recent assessment of the status of FAA’s personnel reform, published by a consultant in September 2002 and shown in figure 15, a lack of ownership and inconsistent support for personnel reform by FAA’s executive management team has impaired reform implementation efforts. Our work on effective human capital management at federal agencies has found that building accountability into an agency’s human capital approach is important to the effective use of human capital flexibilities. Furthermore, we have found that in high-performing organizations, managers are held accountable for achieving strategic goals, and clearly defined performance expectations are in place to hold employees and teams at all levels accountable. Establishing systems for gathering performance data and incorporating specific, time-based performance measures and goals that are linked to the agency’s program goals into the reform effort would improve the agency’s ability to set more meaningful strategic goals for its human capital reform effort and more clearly defined performance expectations for its human capital management. Together, this would help the agency build accountability into the reform effort and its overall human capital management approach. Congress granted FAA flexibilities in its human capital management so that the agency could more effectively manage its workforce and achieve its mission. 
Yet, more than 7 years after the agency received broad exemptions from laws governing federal civilian personnel management, it is not clear whether and to what extent these flexibilities have helped FAA to do so. It is clear that FAA has faced significant challenges in implementing its human capital reform initiatives and evaluating the success of its effort. These challenges, which include implementing reform initiatives throughout a workforce with a wide range of skills and negotiating agreements with employee unions, reflect difficulties that may be faced by other federal agencies that seek to implement human capital management flexibilities. FAA is not able to determine the effectiveness of its human capital reform initiatives because it has not incorporated key elements of effective human capital management into its effort thus far. While FAA has established preliminary linkages between its reform goals and the agency’s program goals, the lack of explicit linkage will make it difficult to assess the effects of the reform initiatives on the program goals of the organization even after data, measurable goals, and performance measures for human capital management efforts are established. FAA has acknowledged the importance of establishing these elements. It has repeatedly said that it is working to collect and analyze data and develop performance goals and measures, but it has not completed these critical tasks, nor has it established specific steps and time frames by which it will do so. As FAA moves forward, a more strategic approach to its reform effort would allow it to better evaluate the effects of its reform initiatives, use the evaluations as a basis for any strategic improvements to its human capital management approach, and hold agency leadership accountable for the results of its human capital management efforts. Doing so would also enable the agency to share its results with other federal agencies and Congress. 
In order to acquire the information needed to make more informed strategic human capital decisions and better ensure that FAA’s personnel reforms achieve their intended results in a timely fashion, we recommend that the Secretary of Transportation direct the FAA Administrator to

- develop empirical data and establish specific, measurable, time-based goals and performance measures related to these goals, and use them to evaluate the effects of the reforms on the agency’s human capital management, programs, and mission so that the agency can make any needed improvements (developing these evaluation tools is particularly urgent for those initiatives, such as FAA’s new compensation system for air traffic employees, for which possible negative effects have been raised by employees, and FAA’s new performance management system);

- define and describe explicit linkages between human capital management reform initiatives and program goals of the organization; and

- establish time frames by which data will be collected and analyzed and by which goals, performance measures, and explicit linkage will be established and used to evaluate the success of the reform initiatives and hold agency leadership accountable for the results of its human capital management efforts.

We provided a draft of this report to the Department of Transportation for its review and met with Department of Transportation officials, including FAA's Assistant Administrator for Human Resource Management, to obtain their comments. The department officials generally agreed with the report's recommendations and indicated that the findings presented in the audit report would be useful as FAA moves forward with its human capital reforms. They also noted issues in three areas. 
First, these officials emphasized that implementing a new human capital system within an existing workforce presented FAA with a significant challenge, given the size of FAA's workforce, the large unionized population, and the variety of occupations and functions within the agency. Second, while these officials agreed that establishing more definitive measures and baseline data, as identified in our recommendations, are important in determining the effectiveness of the new human capital programs, they stated that they have been making significant progress in developing those measures. Third, in responding to our concern that FAA is not able to determine the effectiveness of its human capital reform initiatives because it has not incorporated key elements of effective human capital management into its effort, these officials told us that FAA used the results of its pilot testing and phased implementation approach to modify systems to ensure effectiveness before full implementation and that subsequent assessments were conducted to determine whether the programs were accomplishing the intended goals. They said that FAA already has substantial information to indicate that its new programs and initiatives are on the right track and should be effective in meeting the reform effort’s intent. 
As examples, they referred to reviews by NAPA and the consulting firm Deloitte & Touche, which they said had characterized FAA’s human capital reforms as “state-of-the-art.” The officials stated that FAA’s design process had been characterized in the NAPA review as yielding high-quality policies, and FAA’s reform effort had been characterized in the NAPA review as heading in the right direction and as “a change management issue that is unparalleled in the federal sector.” They further stated that Deloitte & Touche’s review had found that the guiding principles and objectives of FAA’s personnel reform were sound and that some programs, such as streamlined recruitment and staffing processes, have already been largely successful. Notwithstanding the characterizations in these assessments, both NAPA and Deloitte & Touche raised concerns about issues we found in our review, particularly FAA’s lack of baseline data and specific performance measures to assess the effectiveness of its reform effort and establish a basis for continuous improvement. Department officials also said that FAA’s new human capital system is consistent with the President’s Management Agenda and the Administration’s Human Capital Plan, and that other federal officials have touted the types of programs FAA developed and implemented as the wave of the future for the rest of the federal government. FAA emphasized that it is unique among federal agencies in implementing a performance-based and market-based pay system applicable to both nonunion and union employees, which clearly links annual pay adjustments to key agency programs and to individual employee performance and contributions. We agree that other federal agencies considering human capital reform may find FAA’s programs and experiences useful to consider, as FAA was granted human capital flexibilities in 1995 and has been working since to implement its human capital reform effort. 
In fact, we feel that this increases the importance of FAA’s efforts to effectively evaluate its reform. However, based on our prior work on human capital management, we found in our review that FAA’s efforts to link its human capital reform initiatives to its program goals do not go far enough to help the agency measure the reform’s success and that linkage between FAA’s personnel reform goals and the agency’s programmatic goals continues to be weakened by a lack of specific, quantitative, and time-based measures and goals. FAA also provided technical clarifications, which we included in the report where appropriate. We are sending copies of this report to the Administrator, Federal Aviation Administration. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3650. Key contacts and major contributors to this report are listed in appendix VII. To determine the human capital changes that FAA initiated after being granted broad flexibilities in 1995 and the extent to which these reform initiatives required exemptions from title 5, we reviewed federal personnel management requirements under title 5, agency documents identifying personnel reform initiatives, and reports by OPM on personnel management flexibilities already available under title 5. We also discussed the changes with officials from FAA’s Office of the Assistant Administrator for Human Resource Management and OPM. To determine the status of implementation of FAA’s personnel reform and factors that have affected reform implementation, we collected and analyzed internal and external evaluations—including those conducted by the Department of Transportation’s Office of Inspector General and NAPA—of different aspects of FAA’s personnel reform and the available data on the results. 
We also discussed the status of, and barriers to, implementation of personnel reform initiatives with FAA human resource management officials and representatives from the lines of business. To determine the views of FAA managers and employees on the effects of FAA’s personnel reform initiatives, we conducted a series of structured interviews with 176 randomly selected FAA managers and employees. Our structured interview included questions about how the agency manages its employees, compensation and performance management, and labor and employee relations. We discussed the design of these questions with officials from FAA and with representatives from FAA’s five largest unions—the National Air Traffic Controllers Association (about 19,500 members), the Professional Airways Systems Specialists (about 11,600 members), the National Association of Air Traffic Specialists (about 2,300 members), the American Federation of State, County and Municipal Employees (about 2,000 members), and the American Federation of Government Employees (about 1,500 members). We then pretested the structured interview with managers and employees in FAA’s Southern Region and made appropriate revisions. To maximize our chances of obtaining the views of managers and employees across the different segments of FAA’s workforce, we applied a judgmental stratification to our random sample (therefore, it may not be representative of the actual composition of FAA’s workforce): 25 percent managers and 75 percent employees, and 60 percent Air Traffic Services/air traffic control staff and 40 percent from the rest of FAA. In addition, for non-headquarters respondents, we selected 70 percent of our respondents from field facilities and 30 percent from regional offices. Our respondents were randomly selected from electronic lists of names provided by FAA. 
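The judgmental stratification described above can be sketched in code. The roster, the manager share, and the seed below are illustrative assumptions for demonstration, not FAA's actual electronic lists or selection procedure.

```python
import random

# Hypothetical roster of (name, is_manager) pairs -- illustrative only,
# not the actual electronic lists of names FAA provided.
roster = [(f"person_{i}", i % 4 == 0) for i in range(1000)]

def stratified_sample(people, n_total, manager_share=0.25, seed=42):
    """Draw a random sample within judgmental strata: roughly manager_share
    of the sample from managers and the remainder from other employees."""
    rng = random.Random(seed)
    managers = [p for p in people if p[1]]
    others = [p for p in people if not p[1]]
    n_mgr = round(n_total * manager_share)
    # Random selection within each stratum preserves the chosen proportions.
    return rng.sample(managers, n_mgr) + rng.sample(others, n_total - n_mgr)

sample = stratified_sample(roster, 176)
```

As the report notes, fixing strata proportions judgmentally captures a breadth of views across workforce segments, but the resulting sample cannot be generalized to the workforce as a whole.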
We conducted our structured interviews at FAA headquarters in Washington, D.C.; FAA’s Mike Monroney Aeronautical Center in Oklahoma City, Oklahoma; and field facilities and regional offices in six of FAA’s nine geographic regions, including offices in the immediate vicinity of Anchorage, Alaska; Atlanta, Georgia; Chicago, Illinois; Dallas, Texas; Los Angeles, California; and New York City, New York. Field facilities we visited included air traffic control towers, en route centers, automated flight service stations, terminal radar approach control centers, airports district offices, and flight standards district offices. A total of 176 FAA staff participated in our survey from May through August 2002. The information obtained through this survey pertains to only these 176 respondents and cannot be generalized to any other population. However, because we selected interview respondents at random, we have increased the chances of capturing the breadth of opinions across the agency. A copy of our structured interview and the summary results for our closed-ended questions is provided in appendix II. To augment the views and opinions collected from the structured interviews, we also obtained the views of FAA senior managers or representatives of all five lines of business and representatives of employees’ associations. To determine the extent to which FAA management and employees’ views were supported by data, we examined the results from FAA’s employee attitude surveys conducted between 1997 and 2000, as well as other internal surveys of executives, managers, and supervisors related to various aspects of FAA’s personnel reform effort. In addition, we collected available data from FAA’s Office of Human Resource Management and Office of Civil Rights. 
To determine how FAA’s experiences compared with our findings from our human capital management work at other agencies, we reviewed our human capital management audit work that focused on federal agencies’ efforts to implement improvement initiatives and human capital flexibilities, as well as work conducted by other organizations involved in assessing federal agencies’ reform efforts, including OPM and NAPA, and we compared our findings on FAA’s experiences with these findings. We conducted our work in accordance with generally accepted government auditing standards from November 2001 through October 2002.

STRUCTURED INTERVIEW FORM AND SELECTED RESULTS

ATC (A)/Non-ATC (NA): ________
Regional Office (RO)/Field Office (FO): ________
Manager or Supervisor (MS)/Employee (E): ________
Location (AT, AN, CH, DA, NY, LA)/ID Number: ________

Thank you for meeting with us today. We work with the General Accounting Office in Washington, D.C., and we’ve been asked by Congress to see how the personnel reforms at FAA are going. One of the ways we’re doing that is by asking FAA employees like you to tell us about how the personnel reforms here are affecting you, your ability to perform your work, and your unit’s ability to achieve its mission, in both good and bad ways. We’re going to several different regions and talking to many employees; we selected your name at random and appreciate your willingness to talk to us. I will be asking you a series of questions; some you can answer from a range of standard responses, like strongly agree to strongly disagree, and others will give you an opportunity to provide a little bit more specific information. ____ will be taking notes to be sure that we capture all you have to say. Your responses are confidential: we won’t report your name with anything you say here, and we’ll report our results as a summary of what everyone tells us. The entire interview should take about 30 minutes. Do you have any questions before we begin? 
(Personnel reform includes: hiring, training, compensation, performance management, labor and employee relations, among others.)

1. To start with, how knowledgeable are you about FAA's personnel reform efforts? Which reform(s) do you know most about? Least about?

The first set of questions I'll ask deals with how FAA manages its workforce. For the next question and others throughout the survey, I'd like you to use this scale for your answer: please tell me if you strongly agree (SA), agree (A), disagree (D), or strongly disagree (SD).

2. I receive the training I need to do my job effectively. ____

3. Could you give me more detail about that? How has the fact that the lines of business are responsible for funding and managing training affected the amount and/or quality of training you've received?

4. What, if any, training do you need that your line of business (or staff office) has not provided?

5. Using the scale again: The ability of my line of business (or staff office) to efficiently and effectively fill job vacancies has improved in the last 5 years.

[Office] 6. Has the speed of hiring improved? How? {Pause for response} Has the quality of candidates improved? How?

The next set of questions I'll ask deals with the new pay and performance management systems, including the new pay bands and pay-for-performance system, as well as the "meets/does not meet standards" performance rating system implemented in 1996.

7. Using the scale again: I am better off under the new pay band system than under the grade and step pay system.

8. Using the scale again: I think the new pay system is fair to all employees.

[Non-ATC] 9. What do you like about the new pay system? Is there anything you don't like about the new pay system?

10. Using the scale again: Separating cash awards from performance appraisals has made the appraisal process more fair.

11. Using the scale again: The way my most recent formal performance appraisal was handled gave me useful information for improving my performance.

[Non-ATC] 12. Using the scale again: Awards and recognition more appropriately reflect employees' performance today than 5 years ago.

13. Using the scale again: FAA's process for promotion better targets qualified people now than it did 5 years ago.

[Non-ATC] 14. To what extent have managers and supervisors become more (or less) accountable for achieving agency goals in the last 5 years?

The next set of questions I'll ask deals with labor relations and employee relations.

15. Using the scale again: FAA's employee unions have had a positive impact on implementing personnel reform in the agency.

16. Do you think labor-management relations have changed in the last 5 years? (As applicable) How? What specifically has driven this change?

17. Do you think employee morale has changed in the last 5 years? (As applicable) How? What specifically has driven this change?

18. Using the scale again: FAA employees had sufficient opportunities to provide input for personnel reform policies and initiatives before they were finalized and implemented. ____

19. Using the scale again: I understand the goals of the Model Work Environment.

20. Please describe the effect the Model Work Environment has had (if any) on employee morale.

21. Using the scale again: I have received sufficient and timely information on personnel reform changes that affect my job. ____

22. What personnel reforms have been particularly well communicated? Which method(s) of communication work(s) best or would work best? Worst?

The last set of questions deals with the overall result of personnel reforms at FAA.

23. Using the scale again: Personnel reform has made FAA a better place to work.

24. Using the scale again: Personnel reform will make FAA a better place to work.

25. What are the 3 most positive outcomes of FAA's personnel reform efforts?

26. What are the 3 most negative outcomes of FAA's personnel reform efforts?

27. What kinds of comments have you heard about the personnel reforms we've been discussing today from your co-workers?

28. Do you have any suggestions for improving FAA's implementation of personnel reform?

Okay, although we won't be using your name with this information, I would like to ask just a couple of questions about your position here.

29. How long have you worked at FAA?

AFGE AFSCME NAGE NAATS NATCA PAACE PASS

Thank you very much for answering my questions today. We really appreciate your time, and the feedback we get from you will help FAA and Congress make future decisions about personnel reform.

In March 2002, we issued a model for strategic human capital management that incorporates lessons learned in our reviews of other agencies' human capital management practices, as well as our own experiences. The model identifies eight critical success factors and highlights some of the steps agencies can take to make progress in managing human capital strategically. These eight factors, shown in figure 16, are organized in pairs to correspond with the four governmentwide high-risk human capital challenges that our work has shown are undermining agency effectiveness. In November 2002, we issued a report that identified six key practices for federal agencies' effective use of human capital flexibilities that incorporate the concepts and critical factors of our model. Based on our interviews with human resource management directors from across the federal government, we identified the following key practices that agencies should implement to use human capital flexibilities effectively, as shown in figure 17.

Blank cells indicate no old grade equivalent to new pay band for manager levels.
Performance measures (proposed data sources):

- Customer perceptions regarding human resource management office consultation & staff expertise (employee surveys)
- Customer perceptions regarding personnel reform; human resource management office & line of business human capital management efforts (employee surveys)
- Human resource management office spending (budget & accounting data)
- Knowledge transfer; improved line of business processes & practices; meet unique needs (human resources office reporting system)
- Human resource management office labor distribution (cost accounting system) (employee surveys)
- Consolidation in bargaining units; Partnership Council meeting attendance (personnel management evaluations, human resource management office policy training evaluations)
- Timeliness of automated/nonautomated selections
- Percentage of voluntary & involuntary attrition (data on grievances and unauthorized labor practices) (selecting official interview data)
- Human Resource Management Office employee perceptions regarding communication; performance rewarded (employee surveys)
- Identify and close skill gaps (skills/training assessment for Human Resource Management Office) (human resource management office information system audit)
- Timeliness and responsiveness to internal Human Resource Management Office requests (employee survey) (human resource management office reporting system)
- Percentage of personnel, compensation, and benefits funding spent on training (budget & accounting data)

In addition to those individuals named above, William Doherty, Michele Fejfar, David Hooper, Jason Schwartz, E. Jerry Seigler, Margaret Skiba, Tina Smith, Alwynne Wilbur, and Kristy Williams made key contributions to this report.
In 1996, the Federal Aviation Administration (FAA) undertook a human capital reform effort under one of the most flexible human capital management environments in the federal government, including broad exemptions from title 5 laws governing federal civilian personnel management. GAO was asked (1) to examine the changes FAA initiated in its reform effort, including whether they required an exemption from title 5 and their implementation status; (2) determine the effects of the reform effort according to available data and the views of FAA officials, managers, and employees; and (3) assess the extent to which FAA's reform effort incorporated elements that are important to effective human capital management. In 1996, FAA initiated human capital reform initiatives in three broad areas, some of which required exemption from title 5, and some of which have been fully implemented. FAA has not yet completed implementation of some key initiatives. For example, FAA's new compensation system remains unimplemented for about one-quarter of the agency's workforce--those staff whose unions have not reached agreements with FAA. FAA's need to implement initiatives among a workforce with a wide range of skills and to negotiate changes with multiple unions were among factors that affected the pace and extent of reform implementation. FAA had little data with which to assess the effects of its reform effort. While FAA human capital officials cited positive effects of FAA's reform effort, the views of managers and employees GAO interviewed were generally less positive. FAA's lack of empirical data on the effects of its human capital initiatives is one indication that it has not fully incorporated elements that are important to effective human capital management into its overall reform effort. These elements include data collection and analysis, performance goals and measures, and linkage of reform goals to program goals. 
FAA human resource management officials said that the agency should have spent more time to develop baseline data and performance measures before implementing the broad range of reforms but that establishing these elements was a complex and difficult task. FAA has also not gone far enough to establish linkage between reform goals and overall program goals of the organization. GAO found that the lack of these elements has been pointed out repeatedly in evaluations of FAA's human capital reform effort, but FAA has not developed specific steps and time frames by which these elements will be established and used for evaluation. Incorporation of these elements could also help FAA build accountability into its human capital management.
This section discusses DOE's use of M&O and non-M&O contracts, cost-reimbursement contracts and cost-surveillance procedures, DOE headquarters and field office responsibilities for cost-surveillance and financial management policies and activities, leading practices for managing the risk of fraud and other improper payments, and data analytic tools and techniques to prevent and detect fraud. Since the Manhattan Project produced the first atomic bomb during World War II, DOE and its predecessor agencies have depended on private firms, universities, and others with the scientific, manufacturing, and engineering expertise needed to carry out research and development work and manage the government-owned, contractor-operated facilities where the bulk of the department's mission activities are carried out. DOE relies on contracts in general, and M&O contracts in particular, to do much of this work. The Federal Acquisition Regulation (FAR) authorizes DOE and other agencies with sufficient statutory authority and the need for contracts to manage and operate their facilities to use the M&O form of contract, but, according to DOE, it is the only agency using such contracts. An M&O contract is characterized both by its purpose and by the special relationship it creates between government and contractor. For example, the FAR recognizes that because of the nature of M&O contract work, or because it is to be performed in government facilities, the government must maintain special, close relationships with its M&O contractors and the contractors' personnel in various important areas (e.g., safety, security, cost control, and site conditions). DOE's use of M&O contracts has changed over time. Beginning in the 1990s, DOE undertook a detailed review of the then-existing M&O contracts to determine if the mission requirements remained appropriate for using such contracts.
As a result of that review, DOE reduced the number of M&O contracts from approximately 52 to 29 and began using more non-M&O contracts, particularly for its environmental management activities and for some large capital asset construction projects. Although DOE uses fewer M&O contracts today than it did in the 1990s, they remain its primary contract form in terms of contract spending. In fiscal year 2015, for example, DOE had almost 6,700 non-M&O contracts and 22 M&O contracts. That year, DOE spent almost $19 billion on its M&O contracts—three-quarters of its total $25 billion in spending. Regardless of the contract form used—M&O or non-M&O—the majority of DOE's contracts are cost-reimbursement contracts. Under cost-reimbursement contracts, the government primarily pays the contractor's allowable costs incurred, rather than paying for the delivery of an end product or service; the government also pays a fee that is either fixed at the outset of the contract or adjustable based on objective or subjective performance criteria set out in the contract. This type of contract is considered high risk for the government because the primary risk of cost overruns is placed on the government. Cost-reimbursement contracts also require significantly more government oversight than do fixed-price contracts. For example, for cost-reimbursement contracts, the government must determine that the contractor's accounting system is adequate for determining costs related to the contract and update this determination periodically. In addition, the government needs to monitor contractor costs—known as cost surveillance—to provide reasonable assurance that the contractor is using efficient methods and effective cost controls.
By employing cost-surveillance procedures under cost-reimbursement contracts, the government can help ensure that the contractor is performing efficiently and effectively and that the government pays only for allowable, allocable, and reasonable costs applicable to the contract. As we reported in September 2009, federal agencies use a range of procedures for monitoring contractor cost controls. The procedures generally used by the civilian agencies we reviewed called for invoice reviews. Invoice reviews help to ensure that the goods and services for which the government is being billed were actually received, that the amounts billed are allowable, and that the government is not incurring costs that are inadequately supported. In addition, some agencies followed alternative procedures for monitoring costs and supplemented their cost monitoring with audits for the purpose of testing whether invoiced costs are allowable—known as incurred cost audits. The responsibility for establishing policies and performing cost-surveillance activities is split between DOE headquarters and field offices. The following DOE headquarters offices are responsible for establishing department-wide policies and guidance related to cost surveillance and financial management. DOE's Office of Acquisition Management is responsible for establishing procurement-related policies and guidance. Among other things, the office is responsible for establishing cost-surveillance policies and guidance that help ensure that DOE pays only for allowable, allocable, and reasonable costs applicable to the contract. This includes updates to the DOE FAR Supplement (Department of Energy Acquisition Regulation), DOE Acquisition Letters, DOE procurement-related Orders and Directives, and DOE's Acquisition Guide. According to DOE Order 520.1A, DOE's Office of the CFO is responsible for establishing, maintaining, and interpreting policy and general procedures for accounting and related reporting.
In addition, the Office of the CFO is responsible for establishing policies and guidance for assessing DOE's internal controls over contractor payments and assessing the risk of fraud and improper payments. Procurement and financial management components at DOE's field offices are responsible for overseeing DOE contractors, including carrying out cost-surveillance and financial management activities. These include the following officials: DOE contracting officers are responsible for, among other things, determining the allowability of costs incurred by contractors under cost-reimbursement contracts. They are also responsible for ensuring that contract invoices are properly reviewed and analyzed before payment. In exercising this responsibility, a contracting officer may designate other qualified personnel to be the contracting officer's representative for the purpose of performing certain technical functions in administering a contract, including conducting invoice reviews. DOE's field office CFOs, in cooperation with DOE contracting officers and other field office staff, are responsible for overseeing contractor costs and conducting other financial management activities, such as internal control and improper payment risk assessments. For example, invoice reviews require close coordination among the contracting officer, contracting officer's representatives, and the field CFO. In September 2014, we issued revised federal internal control standards that went into effect at the start of fiscal year 2016. These revised standards, along with our Fraud Risk Framework, OMB guidance, and the Fraud Reduction and Data Analytics Act of 2015, have placed an increased focus on the need for federal program managers to take a strategic approach to managing improper payments and risks, including fraud risk. Our Fraud Risk Framework provides comprehensive guidance for conducting fraud risk assessments and using the results as part of the development of a robust antifraud strategy.
It also describes concepts and leading practices for establishing an organizational structure and culture that are conducive to fraud risk management, designing and implementing controls to prevent and detect potential fraud, and monitoring and evaluating fraud risk management activities. The leading practices described in the Fraud Risk Framework are meant to provide additional guidance for implementing requirements contained in federal internal control standards and OMB circulars. Our Fraud Risk Framework also states that practices in the Framework are not necessarily meant to be sequential or interpreted as a step-by-step process. The Fraud Risk Framework consists of the following four components:

Commit. Commit to combating fraud by creating an organizational culture and structure conducive to fraud risk management.

Assess. Plan regular fraud risk assessments and assess risks to determine a fraud risk profile.

Design and implement. Design and implement a strategy with specific control activities to mitigate assessed fraud risks and collaborate to help ensure effective implementation.

Evaluate and adapt. Evaluate outcomes using a risk-based approach and adapt activities to improve fraud risk management.

Each component includes overarching fraud risk management concepts and leading practices for carrying out the concepts. These concepts include creating a structure with a dedicated entity to lead fraud risk management activities; conducting regular fraud risk assessments that are tailored to the program to determine the program's fraud risk profile; designing and implementing a strategy to mitigate assessed fraud risks; and designing and implementing specific control activities, such as data analytic activities, to prevent and detect fraud. Leading practices for carrying out the concepts include the following:

Designated antifraud entity. A designated entity to design and oversee fraud risk management activities serves as the repository of knowledge on fraud risks and controls, manages the fraud risk assessment process, and leads or assists with training and other fraud awareness activities. The dedicated entity could be an individual or a team, depending on the needs of the agency.

Tailored fraud risk assessments and profiles. An effective antifraud entity tailors the approach for carrying out fraud risk assessments to the program. More specifically, antifraud entities that effectively plan fraud risk assessments identify specific tools, methods, and sources for gathering information about fraud risks. This information includes data on fraud schemes and trends from monitoring and detection activities. This approach allows the agency to develop a fraud risk profile that fully considers the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and then ultimately document prioritized fraud risks.

Develop and document an antifraud strategy. Managers who effectively manage fraud risks develop and document an antifraud strategy that describes the program's activities for preventing, detecting, and responding to fraud.

Data analytic activities. Data analytic activities can include a variety of techniques. For example, data matching and data mining techniques can enable programs to identify potential fraud or improper payments that have already been awarded, thus assisting programs in recovering these dollars, and predictive analytics can identify potential fraud before payments are made.

Fraud awareness initiatives. Increasing managers' and employees' awareness of potential fraud schemes through training and education can serve a preventive purpose by helping to create a culture of integrity and compliance within the program. Further, increasing fraud awareness can enable managers and employees to better detect potential fraud. In addition, increasing fraud awareness through training and education of external stakeholders, such as contractors, can help prevent and deter fraud.

Our Fraud Risk Framework incorporates long-standing industry practices related to the use of data analytics. In addition to the information included in the Fraud Risk Framework, the Institute of Internal Auditors, the American Institute of Certified Public Accountants, and the Association of Certified Fraud Examiners have issued practice guides and other materials that explain how data analytics can be used to help manage fraud risk. Selected information from these guides is discussed below. According to the Institute of Internal Auditors, data analytics enables an organization to analyze transactional data to obtain insights into the operating effectiveness of internal controls and to identify indicators of improper cost charges, fraud risk, or actual fraudulent activities. In addition, because automated checks are less labor-intensive than traditional control mechanisms, such as manual checks, automating data analytic tests can allow managers to monitor large amounts of data more efficiently. Data analytics is used to identify activities or transactions that deviate from expected patterns. It can be used, for example, to review payroll records for fictitious employees or accounts payable transactions for duplicate invoices. The tools and techniques used may vary and range from simple data mining techniques, such as sorting and filtering, to sophisticated algorithms that analyze multiple data sets. Examples of the types of data analytic tests that can be performed include the following:

- Calculation of statistical parameters (e.g., averages, standard deviations, highest and lowest values) to identify outlying transactions that could be indicative of fraudulent activity.
- Classification to find patterns and associations among groups of data elements.
- Stratification of numeric values to identify unusual (i.e., excessively high or low) values.
- Joining different data sources to identify inappropriately matching values, such as names, addresses, and account numbers in disparate systems.
- Duplicate testing to identify simple and/or complex duplications of business transactions, such as payments, payroll, claims, or expense report line items.
- Gap testing to identify missing numbers in sequential data.
- Validating data entry dates to identify postings or data entry times that are inappropriate or suspicious.

According to the Institute of Internal Auditors, for fraud detection data analytics programs to be effective, the fraud detection techniques listed above must be performed against full data populations. Although the use of sampling data is a valid and effective audit approach, it is not necessarily appropriate for fraud detection purposes. When only partial data are analyzed, it is likely that a number of control breaches and suspicious transactions will be missed, the impact of control failures may not be quantified fully, and smaller anomalies may be overlooked. It is often these small anomalies that point to weaknesses that can be exploited, causing a material breach. Analyzing full data populations provides a more complete picture of potential anomalies. Random sampling is most effective for identifying problems that are relatively consistent throughout the data population; fraudulent transactions, by nature, do not occur randomly.

DOE uses prepayment invoice reviews to monitor the costs of non-M&O contracts but has shortcomings in its control activities at the six site offices that oversee them, as well as resource challenges that limit the effectiveness of these reviews. DOE uses postpayment incurred cost audits to detect fraud and other improper payments for both its M&O and non-M&O contracts, but resource constraints and other challenges limit the audits' effectiveness.
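Three of the tests described above (duplicate testing, gap testing, and the use of statistical parameters) can be sketched in a few lines of code. The example below is a minimal illustration run against a full, invented population of invoice records; the record layout, vendor names, and amounts are hypothetical and are not drawn from any DOE system.

```python
# Minimal sketch of duplicate testing, gap testing, and a simple statistical
# outlier check, run against a full (hypothetical) population of invoices.
import statistics
from collections import Counter

# Hypothetical records: (invoice_number, vendor, amount). Illustrative only.
invoices = [
    (1001, "Acme Corp", 2500.00),
    (1002, "Acme Corp", 780.50),
    (1002, "Acme Corp", 780.50),   # duplicate submission
    (1004, "Beta LLC", 1200.00),   # invoice 1003 is missing from the sequence
    (1005, "Beta LLC", 1200.00),
]

def duplicate_test(records):
    """Duplicate testing: flag transactions that appear more than once."""
    return [rec for rec, n in Counter(records).items() if n > 1]

def gap_test(numbers):
    """Gap testing: flag missing values in what should be sequential data."""
    present = set(numbers)
    return sorted(set(range(min(present), max(present) + 1)) - present)

def outlier_test(amounts, threshold=2.0):
    """Statistical parameters: flag amounts far from the population mean."""
    mean, stdev = statistics.mean(amounts), statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

dupes = duplicate_test(invoices)                         # the repeated 1002 record
gaps = gap_test(n for n, _, _ in invoices)               # the missing invoice number
outliers = outlier_test([a for _, _, a in invoices], threshold=1.5)
```

Note that each test scans the entire population rather than a sample, consistent with the Institute of Internal Auditors' point that sampling tends to miss the small anomalies that matter for fraud detection.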
DOE uses prepayment invoice reviews to monitor non-M&O contract costs. Under such reviews, non-M&O contractors submit invoices to DOE for items delivered or services performed before the contractors receive payment. Invoice reviews help to ensure that the goods and services for which the government is being billed were actually received, the amounts billed are allowable, and the government is not incurring claimed costs that are inadequately supported. DOE contracting officers are responsible for ensuring that contract invoices are properly reviewed and analyzed prior to payment. In exercising this responsibility, contracting officers may delegate invoice review and analysis functions to other government personnel, such as technical and financial representatives. For the six DOE sites that oversee non-M&O contractors, invoice reviews generally included a technical review—to ensure that the costs billed were for services performed or goods delivered—and a financial review—to ensure that the costs billed conformed with the terms of the contract. However, the control activities at the six site offices that oversee non-M&O contracts have limitations. Specifically, DOE does not have a department-wide invoice review policy or well-documented procedures at most of these site offices, and DOE officials face challenges in reviewing invoices prior to payment. According to federal internal control standards, management should implement control activities through policies. However, officials with the Office of the CFO at DOE headquarters told us that DOE does not have department-wide invoice review policies and procedures. Instead, according to these officials, field CFOs and contracting officials are responsible for developing appropriate invoice review policies and procedures.
Headquarters CFO officials said that they provide tools and guidance to field CFOs for things such as assessing internal controls and contractors’ accounting and purchasing systems, but they do not prescribe or assess payment procedures at DOE field offices. Similarly, DOE’s Office of Acquisition Management has issued invoice review guidance but does not prescribe specific policies and procedures. Specifically, DOE’s Acquisition Guide contains a chapter on contract financing that discusses reviewing and approving invoices. The guide states, for example, that prior to payment the responsible approving official must, among other things, ensure that all invoiced costs are allowable and allocable to the contract, items or services included on previously paid invoices are not also included on the current invoice, labor hours are billed at appropriate rates, and all other direct costs have been properly substantiated and are consistent with the requirements in the contract. According to DOE’s Acquisition Guide, however, these are general guiding principles for approving officials to consider when reviewing and analyzing cost elements included in contract invoices; they are not intended to repeat or conflict with local procedures. Unlike other chapters of the guide that contain relevant internal standard operating procedures to be followed by both procurement and program personnel, the invoice review and approval discussion is not considered an operating procedure, according to DOE’s Acquisition Guide. Moreover, our analysis of the invoice review and approval discussion contained in DOE’s Acquisition Guide found that it does not contain the detail necessary to serve as an operating procedure. We have reported previously on DOE’s invoice review policies and procedures at one of DOE’s largest clean-up sites. 
Specifically, in July 2007 we found that DOE’s Hanford Office was not adequately reviewing invoices for a multibillion-dollar cost-reimbursement contract to design and construct the Hanford Waste Treatment Plant, risking hundreds of millions of dollars in improper payments. Instead, DOE relied primarily on the Defense Contract Audit Agency (DCAA), an independent third party that has traditionally been the primary auditor for non-M&O contracts, to review and approve the contractor’s financial systems and relied on the contractor’s review and approval of subcontractor charges. DOE’s heavy reliance on others, with little oversight of its own, exposed the hundreds of millions of dollars it spent annually on the Waste Treatment Plant to an unnecessarily high risk of improper payments. Our July 2007 report recommended, among other things, that DOE perform an assessment of the risks associated with contract payments and establish appropriate policies and procedures for effective review and approval of the prime contractor’s invoices related to the Hanford Waste Treatment Plant. DOE agreed with the recommendation, and in 2007 the Hanford Office conducted a risk assessment and developed a revised invoice review policy that applies to contractor invoices that it reviews. However, it is not a department-wide policy. In the absence of DOE-wide policy and procedures, the six sites reported following different procedures. As discussed previously, invoice reviews generally include a technical review—to ensure that the costs billed were for services performed or goods delivered—and a financial review—to ensure that the costs billed conform to the terms of the contract. 
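Parts of a prepayment financial review of this kind can be automated. The sketch below is illustrative only: the line-item schema, labor categories, and contract rate table are hypothetical, not drawn from DOE systems. It checks two of the guiding principles noted above, that items on previously paid invoices are not billed again and that labor hours are billed at appropriate rates.

```python
# Illustrative sketch of automated prepayment invoice checks. The schema,
# labor categories, and rate ceilings are hypothetical assumptions.

CONTRACT_LABOR_RATES = {"engineer": 95.00, "technician": 60.00}  # assumed ceilings

def check_invoice(line_items, previously_paid):
    """Return findings for an approving official to resolve before payment.

    line_items: dicts with 'item_id', 'labor_category', and 'rate' keys.
    previously_paid: set of item_ids billed on earlier, already-paid invoices.
    """
    findings = []
    for item in line_items:
        # Items on previously paid invoices must not appear again.
        if item["item_id"] in previously_paid:
            findings.append(f"{item['item_id']}: already billed on a paid invoice")
        # Labor must be billed at rates the contract allows.
        ceiling = CONTRACT_LABOR_RATES.get(item["labor_category"])
        if ceiling is None:
            findings.append(f"{item['item_id']}: labor category not in contract")
        elif item["rate"] > ceiling:
            findings.append(
                f"{item['item_id']}: rate {item['rate']:.2f} exceeds "
                f"contract rate {ceiling:.2f}"
            )
    return findings

# Example invoice: one re-billed item and one over-ceiling labor rate.
items = [
    {"item_id": "A-1", "labor_category": "engineer", "rate": 95.00},
    {"item_id": "A-2", "labor_category": "engineer", "rate": 120.00},
    {"item_id": "A-0", "labor_category": "technician", "rate": 60.00},
]
findings = check_invoice(items, previously_paid={"A-0"})
```

Checks like these complement, rather than replace, the technical and financial reviews, since judgments about allowability and adequacy of supporting documentation still require a human reviewer.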
On the basis of questionnaire responses and documents provided by each of the six sites responsible for reviewing non-M&O contractor invoices, we determined that the procedures sites used for the technical and financial reviews varied—with some sites reporting that they used locally developed, site-specific procedures and others reporting that they relied on the general guidance provided in DOE’s Acquisition Guide (see table 1). In addition, on the basis of our review of site office policies and procedures, we determined that five of the six sites did not have well-documented policies or procedures. According to federal internal control standards, management should implement control activities through policies and document them in the appropriate level of detail to allow management to effectively monitor the control activity. Federal internal control standards also state that effective documentation assists in management’s design of internal control by establishing and communicating the who, what, when, where, and why of internal control execution to personnel. However, only one site—the Hanford Office—had detailed, well-documented operating procedures. The invoice review procedures for this site, for example, specified the number of transactions to be reviewed and included step-by-step instructions for selecting the transactions and the transactions’ component items to be reviewed and verified. None of the other sites’ local procedures contained detailed instructions for conducting the reviews. That is, they did not contain the who, what, when, where, and why of internal control execution. Instead, the procedures included general statements such as “the financial reviewer is to perform the necessary financial responsibilities in determining the adequacy of contractor cost invoices” or “the level of review should be based on risk as determined by risk assessment” but did not provide any specific detail or steps on how to perform the reviews. 
Moreover, several sites referenced DOE’s Acquisition Guide as their invoice review policy or procedure. However, as discussed above, DOE’s Acquisition Guide does not contain the details necessary to be an operating procedure. Without a DOE-wide invoice review policy that requires sites to establish well-documented invoice review operating procedures, DOE management has no assurance that the six offices are effectively conducting invoice reviews or that this control activity is operating as intended. Regarding the capacity and time officials have to devote to oversight duties, including invoice reviews, DOE faces significant challenges. According to a 2013 DOE Acquisition Workforce Study commissioned by DOE, insufficient capacity to properly administer contracts raises the risk of fraud, waste, and abuse, which could result in extra cost and delay. The core challenge facing DOE’s acquisition community, according to the study, is the pervasive lack of sufficient staffing in the majority of DOE field procurement offices. As we have reported previously, having the capacity to perform contractor oversight duties is an important criterion for demonstrating progress toward addressing DOE’s contract and project management challenges—an area we have designated as at high risk for fraud, waste, and abuse. Because contracting officers and their delegates play an important role in ensuring that the government makes payments to contractors only for goods and services received and accepted pursuant to contractual terms, these challenges also impact DOE’s ability to properly review contractor invoices. DOE’s ability to perform comprehensive invoice reviews is also limited by the large number of transactions associated with individual invoices and the limited amount of time DOE has to submit payment after receipt of an invoice. 
For example, the contractor responsible for the design and construction of the Hanford Waste Treatment Plant submits biweekly invoices for $20 million or more that average over 10,000 transactions each. Upon receipt of the contractor’s invoice, according to the terms of the contract, DOE has 10 business days to submit its payment. Consequently, officials responsible for performing invoice reviews may not be able to determine, prior to payment, if the amounts billed to DOE are allowable. For example, a reviewing official at the Hanford site included a disclaimer on invoices he reviewed, stating that “the appropriateness of the invoiced costs could not be determined in the time allotted.” Given the time constraints associated with prepayment review of invoices, DOE’s Hanford Office also selectively performs post payment invoice review; it is the only one of the six site offices to do so. Specifically, DOE’s Hanford Office selects a non-statistical sample of between 75 and 100 invoiced transactions to review on a quarterly basis after the invoices have been paid, according to the site’s local procedures. Officials from the Hanford Office told us that the items sampled are selected based on risk and that risk is determined based on a variety of factors, such as the results of internal and external audits. Using this approach, the Hanford Office was able to select and review less than 1 percent of the contractor’s costs for fiscal years 2013 through 2015. DOE disallowed a total of $9,078 of the contractor’s invoiced costs as a result of these reviews. For both its M&O and non-M&O contracts, DOE uses post payment incurred cost audits to detect fraud and other improper payments. However, resource constraints limit the effectiveness of these audits. For non-M&O contracts, DOE relies on DCAA to perform audits of contractors’ invoiced costs. However, resource issues and a backlog of audits at DCAA have resulted in audit delays. 
According to a 2015 DOE OIG report, some of DOE’s non-M&O contracts have not been audited in over 8 years. To try to address the DCAA audit backlog, DOE has used independent public accounting firms, expanded internal audit functions, and relied more heavily on invoice reviews and OIG audits and assessments. However, DOE’s OIG reported that these methods have not been completely effective and do not meet audit standards in some cases. For the 22 M&O contracts DOE had in fiscal year 2015, which accounted for about 75 percent of DOE’s spending, DOE did not perform post payment reviews of contractor costs. We reported in August 2016 that DOE officials told us they were able to monitor the appropriateness of M&O contractors’ withdrawal of funds in near real time. DOE officials said that this was possible because M&O contractors are required to integrate their accounting systems with DOE’s accounts each month, which provides DOE with visibility into contractor accounts. However, with the exception of monitoring aggregate spending to ensure that costs do not exceed budgetary limits, DOE policies and procedures do not require that sites monitor M&O contractor withdrawals to determine the appropriateness of costs incurred by the contractor. Specifically, none of the cost-surveillance policies, procedures, or guidance used by DOE sites discusses real-time monitoring of contractor withdrawals. Moreover, there are logistical issues at some sites that make it unlikely that such monitoring is occurring on a routine basis. According to DOE officials, not all sites have direct access to or visibility into M&O contractors’ systems. For example, to monitor withdrawals, DOE officials at one site said that they would need to gain access to the contractor’s system by traveling to the contractor’s site to obtain information about specific cost transactions. 
In addition, DOE does not require M&O contractors to submit invoices before receiving payment and instead requires a “payments cleared financing arrangement,” which is the authority for contractors to draw funds directly from federal accounts to pay for contract performance. Under this arrangement, DOE does not use prepayment reviews to determine the appropriateness of M&O contract costs. Moreover, for its M&O contracts, DOE does not use an independent third party to audit contractors’ costs and ensure that invoiced costs are allowable under the contract. Instead, incurred cost audits are performed by the M&O contractors’ internal audit staff under a process known as the “cooperative audit strategy.” Specifically, the M&O contractors’ internal audit organization is responsible for performing operational and financial audits, assessing the adequacy of management control systems, and conducting an audit of the M&O contractors’ incurred cost statements. In addition, M&O contractors are required to conduct or arrange for audits of their subcontractors when subcontracts are structured as cost reimbursement-type contracts, including time and materials and cost reimbursable subcontracts. According to the OIG’s audit manual, under the cooperative audit strategy, each year DOE’s OIG performs an assessment of incurred cost statements for the 10 M&O contractors that incurred and claimed the most costs that year. For the remaining M&O contractors, the OIG performs assessments based on risk. If not considered high-risk, the OIG assesses the contractor at least once every four years. The OIG assessments, however, do not represent independent third-party audits. Although the OIG is an independent third party, according to the DOE OIG audit manual, cost statement work under the cooperative audit strategy is not an audit but instead follows standards for review-level engagements, which are substantially less broad in scope. 
According to the OIG, the framework of the cooperative audit strategy ensures the integrity and reliability of the review-level engagements by confirming the independence of the M&O internal audit organizations and through various oversight procedures. We did not perform work to substantiate the effectiveness of the OIG’s oversight procedures. DOE’s OIG has reported on the following challenges that impact the effectiveness of both M&O contractor cost audits and subcontractor audits. Regarding M&O contractor cost audits, a 2015 DOE OIG report noted delays in completing audits, and, in some cases, audits that did not comply with professional audit standards. For example, as of the end of fiscal year 2014, there were more than 22 open M&O contractor cost audits with a total of $1.1 billion in unresolved questioned contractor costs. Regarding subcontract audits, from 2010 to 2012, subcontracts valued in excess of $906 million had not been audited or were reviewed in a manner that did not meet audit standards, according to a 2013 OIG report. According to the report, the subcontract costs were not audited because the department did not ensure that its M&O contractors developed and implemented procedures to meet their contractual requirements. As discussed previously, the Fraud Reduction and Data Analytics Act of 2015 establishes requirements aimed at improving federal agencies’ controls and procedures for assessing and mitigating fraud risks and capabilities to identify, prevent, and respond to fraud, including improper payments, through the development and use of data analytics. Implementation of these requirements could help mitigate some of the resource challenges DOE is currently facing in overseeing payments to its contractors. DOE officials told us they plan to meet all requirements for managing the risk of fraud and improper payments, which should include requirements of the Fraud Reduction and Data Analytics Act of 2015. 
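As an illustration of the review mechanics discussed in this section, the Hanford Office's quarterly post payment review draws a non-statistical, risk-based sample of 75 to 100 paid transactions. A minimal sketch of such a risk-weighted selection follows; the field names, scoring, and weighting here are illustrative assumptions, not the site's actual procedure.

```python
import random

def select_postpayment_sample(transactions, sample_size=75, seed=0):
    """Draw a non-statistical, risk-weighted sample of paid transactions.

    Each transaction is a dict with an 'amount' and a hypothetical
    'risk_score' (say, 1-5, informed by audit findings and cost type).
    Higher-risk, higher-dollar items are more likely to be drawn,
    mirroring risk-based rather than purely random selection.
    """
    rng = random.Random(seed)  # fixed seed so the draw is repeatable
    weights = [t["risk_score"] * max(t["amount"], 1) for t in transactions]
    # random.choices samples with replacement, so deduplicate and
    # redraw until the target sample size is reached.
    chosen = {}
    while len(chosen) < min(sample_size, len(transactions)):
        t = rng.choices(transactions, weights=weights, k=1)[0]
        chosen[id(t)] = t
    return list(chosen.values())

txns = [{"amount": 100.0 * i, "risk_score": 1 + i % 5} for i in range(1, 501)]
sample = select_postpayment_sample(txns, sample_size=75)
print(len(sample))  # 75
```

Because the draw is weighted rather than purely random, higher-risk items are examined more often, consistent with the selection approach the site describes; even so, as the report notes, such samples cover only a small share of total costs.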
DOE has not used leading practices in its approach to managing its risk of fraud and other improper payments. In particular, DOE has not (1) created a structure with a dedicated entity to lead fraud risk management activities; (2) conducted fraud risk assessments that are tailored to its programs in order to develop a fraud risk profile; (3) documented a strategy to mitigate assessed fraud risks; or (4) designed and implemented specific control activities, such as data analytics, to prevent and detect fraud and other improper payments. The Fraud Reduction and Data Analytics Act of 2015, which Congress passed in June 2016, establishes requirements aimed at improving federal agencies’ controls and procedures for assessing and mitigating fraud risks and directs OMB to establish implementation guidelines that incorporate the leading practices identified in the Fraud Risk Framework. We compared the following leading practices in the standards and guidance of the Institute of Internal Auditors and our Fraud Risk Framework with DOE’s policies and procedures. Dedicated entity to manage fraud risk. A leading practice for managing fraud risk and demonstrating management’s commitment to combating fraud is to designate an entity to design and oversee fraud risk management activities. DOE has not created a structure with a dedicated antifraud entity to lead fraud risk management activities. In August 2015, DOE established its first Chief Risk Officer to advance department-wide approaches to enterprise risk management, which may include fraud risk management. However, the Chief Risk Officer has a broad focus on general risks to the department, and the specific responsibilities of the position have yet to be defined. 
As a result, it is not clear whether the position will include leading practices related to an antifraud entity’s responsibilities, such as serving as the repository of knowledge on fraud risks and controls, managing the fraud risk assessment process, and leading or assisting with training and other fraud awareness activities. Fraud risk assessments and profile. According to our Fraud Risk Framework, an effective antifraud entity tailors the approach for carrying out regular fraud risk assessments to its programs. This allows the agency to develop a fraud risk profile that fully considers the specific fraud risks the agency or program faces, analyze the potential likelihood and impact of fraud schemes, and then ultimately document prioritized fraud risks. DOE has not conducted fraud risk assessments that are tailored to its programs and that would allow the department to create a fraud risk profile, which is considered a leading practice for managing the risk of fraud. In March 2016, DOE revised its internal control evaluation guidance with the stated purpose of updating its focus on the identification of improper payment risks and fraud risks, among other things. According to DOE’s revised guidance, DOE updated its internal control assessment tools to allow its offices to identify and manage fraud risks. DOE provided us with a list of fraud risks that they had identified for fiscal year 2016 using the revised assessment tools. Examples of risks identified include statements such as “if costs are inaccurately reported, then mischarging could occur, impacting budgets and financial statements” and “if requisitions are not approved by the appropriate personnel, then inappropriate purchases may be made.” DOE’s approach was not tailored to DOE programs; instead, it provided all sites with the same list of potential risks. According to our Fraud Risk Framework, an effective antifraud entity tailors the approach for carrying out fraud risk assessments to the program. 
More specifically, antifraud entities that effectively plan fraud risk assessments identify specific tools, methods, and sources for gathering information about fraud risks. This information includes data on fraud schemes and trends from monitoring and detection activities. Because DOE’s approach to assessing its fraud risk is not tailored to its programs, DOE is not positioned to determine each program’s fraud risk profile. Strategy to mitigate fraud risk. Managers who effectively manage fraud risk, according to our Fraud Risk Framework, develop and document an antifraud strategy that describes the program’s approach for addressing the prioritized fraud risks identified during the fraud risk assessment. An effective antifraud strategy describes the program’s activities for preventing, detecting, and responding to fraud. DOE has not developed or documented a DOE-wide antifraud strategy or directed individual programs to develop program-specific strategies, according to DOE officials. As discussed previously, federal internal control standards require managers to design a response to analyzed risks. Managers should consider the likelihood and impact of the risks, as well as their defined risk tolerance. These are key elements of a program’s fraud risk profile. According to our Fraud Risk Framework, effective managers of fraud risks use the program’s fraud risk profile to help decide how to allocate resources to respond to fraud risks. Specific control activities to prevent or detect fraud or improper payments. Managers who effectively manage fraud risks design and implement specific control activities, such as fraud awareness and data analytic activities, according to our Fraud Risk Framework. DOE has not designed and implemented specific control activities to prevent and detect fraud and other improper payments. 
Of the 10 field offices responsible for overseeing contractor costs, none required employees responsible for reviewing contractor costs to attend fraud awareness training. Moreover, DOE does not routinely use data analytic techniques. Data analytics is a type of control activity that can be effective in detecting fraudulent spending or other improper payments. Of the 10 field offices responsible for reviewing contractor costs, officials from 4 reported in their questionnaire responses that they employed data analytic techniques to help detect fraudulent or other improper costs in contractor invoices or charges. On the basis of the description of the specific data analytic methods they reported using, we determined that only one field office—the Hanford Office—had reported that it was performing analysis that could be considered data analytics. According to their response to our questionnaire, officials at the Hanford Office reported that they use data trending, risk matrixes, cost data graphing, and key word searches to look for anomalies. However, Hanford officials did not provide documentation to illustrate their use of these data analytic techniques as we had requested. In addition, the office’s invoice review procedures do not discuss the application of the data analytic techniques Hanford officials reported using. As a result, we could not substantiate the site’s reported use of data analytic techniques. We discuss the use of data analytics in more detail in the next section. According to DOE officials, they do not use leading practices for managing the department’s risk of fraud because they consider the risk of fraud to be low. DOE officials told us that, unlike other federal agencies, DOE is not at the highest risk for fraud and improper payments and therefore cannot be expected to commit the resources necessary to independently identify, evaluate, adapt, and implement private industry leading practices. 
According to DOE officials, “a lack of widespread implementation of private sector fraud prevention and detection leading practices at DOE is not indicative of a management failure to appropriately manage the risk of fraud.” These officials told us that DOE manages the risk of fraud and improper payments through its internal controls program; DOE OIG efforts to prevent, detect, and make recommendations related to fraud; and implementation of requirements of the Improper Payments Elimination and Recovery Act. DOE’s approach for managing its risk of fraud and improper payments, however, may not be sufficient. According to the DOE OIG’s Fiscal Year 2015 Performance Report and Fiscal Year 2016-2017 Performance Plan, the opportunity for fraud to occur or exist within various department programs is significant. Moreover, given that DOE has not conducted fraud risk assessments that are tailored to its programs, it is unclear how DOE officials reached the conclusion that the department’s risk of fraud is low. As discussed previously, the deceptive nature of fraud makes it difficult to measure in a reliable way. For example, the alleged fraudulent activity discussed previously, which involved contractors at DOE’s Hanford site and resulted in a $125 million settlement, was identified and reported by whistleblowers. It was not prevented or detected through any strategic fraud risk management effort on DOE’s part. In the absence of such a framework, DOE has little assurance that the types of conduct reported by these whistleblowers are not widespread. The leading practices contained in our Fraud Risk Framework are designed to help federal program managers take a more strategic approach to assessing and managing fraud risks. Although our Fraud Risk Framework may be new to DOE and other federal agencies, many of the leading practices contained in it are based on long-standing industry practices. 
Other frameworks and guides related to fraud risk management and integrity have existed for some time, including publications by the Institute of Internal Auditors, American Institute of Certified Public Accountants, and Association of Certified Fraud Examiners, as well as the Australian National Audit Office, the Committee of Sponsoring Organizations of the Treadway Commission, and the Organisation for Economic Co-operation and Development. The Fraud Risk Framework allows for flexibility in how these leading practices are implemented. Effectively mitigating fraud risks by adopting these leading practices can help DOE to meet its mission by helping to ensure that funds are used only for approved purposes. DOE officials told us that they plan to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015 but should not be expected to implement private industry leading practices prior to the issuance of OMB guidance. Without implementing these selected leading practices for managing its risk of fraud, DOE is missing an opportunity to better position itself to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015 and to organize and focus its resources in a way that would allow the department to mitigate the likelihood and impact of fraud. Without a dedicated entity within DOE to design and oversee fraud risk management activities, DOE is missing an opportunity to create a structure that is more conducive to fraud risk management. Without tailored risk assessments that result in an accurate fraud risk profile, DOE is not equipped to understand its fraud risk and take steps to mitigate it. Because DOE has not developed and documented an antifraud strategy that describes its programs’ approaches for addressing fraud risks, DOE is missing an opportunity to allocate resources more effectively to respond to fraud risks. 
Because DOE has not designed and implemented specific control activities, such as fraud awareness training and data analytics, it does not have assurance that its managers and employees are fully aware of potential fraud schemes. Such awareness can enable managers and employees to better detect potential fraud. Moreover, DOE is missing an opportunity to allow managers to monitor large amounts of data more efficiently. Finally, because DOE has not employed data analytics, and therefore has not benefitted from the experience of designing, implementing, and improving its analytic procedures, the department is not well positioned to implement the requirements of the Fraud Reduction and Data Analytics Act of 2015. In applying data analytics to identify potential indicators of fraud or other improper payments associated with selected DOE contracts, we found that much of the cost data we requested from two DOE contractors for the purpose of performing data analytics was not suitable for analysis. The data were not suitable either because they were not for a complete universe of transactions that was reconcilable with amounts billed to DOE or because they were not sufficiently detailed. Sufficiently detailed data include identifiers such as transaction date, dollar amount, item or service description, and transaction codes to indicate the type of cost represented (e.g., construction materials, property lease, and office supplies). However, for those subsets of DOE contractor data that were complete and sufficiently detailed, we were able to apply data analytics, and we identified potential indicators of improper charges that could be used to guide further investigation of these charges. Much of the transaction-level cost data for fiscal years 2013 through 2015 that we requested from one M&O contractor and one non-M&O contractor were not suitable for use with data analytic techniques. 
We requested data from two contractors—the M&O contractor that operates Sandia National Laboratories and the non-M&O contractor responsible for the design and construction of the Hanford Waste Treatment Plant. The M&O contractor at Sandia, however, was unable to produce a full data population of sufficiently detailed transaction-level data for any of the over $8 billion in costs it incurred and claimed during the 3-year time frame we examined (see fig. 1). More specifically, the contractor was unable to provide data files that could be used to compile a data set in which the total of all cost transactions could be reconciled with the total amount paid by DOE. According to representatives of the M&O contractor and documents they provided, the contractor’s core accounting system generates financial information for both internal and external use through the use of project accounting and general ledger modules. Specifically, the contractor’s project accounting module generates information for internal management use, and the general ledger module generates information for external reporting purposes. However, neither the project accounting nor the general ledger module contains transaction-level cost data suitable for data analytics (see app. II for more detail on issues with the data provided by the M&O contractor). Having a data set that reconciles with the amount charged to the government is important because it ensures that the data set represents a complete universe of cost transactions. Regarding the non-M&O contractor, we requested and received a data set of cost transactions for the nearly $1.8 billion it charged DOE over the 3-year period. Of the nearly $1.8 billion in costs, $1.342 billion were sufficiently detailed for the purpose of employing data analytics (see fig. 1). However, about $437 million in subcontractor costs were not sufficiently detailed. 
Payments to subcontractors accounted for almost 25 percent of all expenses billed by the non-M&O contractor to DOE for this period, but these transactions did not contain specific information regarding the types of services or materials purchased from the subcontractor. Without detailed cost data for the entire population of subcontractor-related costs, analyses of these costs were not possible. According to DOE officials, they review most types of costs as part of their quarterly post payment invoice review process. However, our analysis of all costs DOE sampled and tested from fiscal year 2013 through 2015 found that DOE sampled about 1 percent (50 transactions totaling $3.7 million) of the nearly $437 million in subcontractor-related costs. As discussed previously, fraudulent transactions, by nature, do not occur randomly and, therefore, are not effectively identified through sampling. When only partial data are tested, it is likely that a number of control breaches and suspicious transactions will be missed, the impact of control failures may not be quantified fully, and smaller anomalies may be overlooked. It is often these small anomalies that point to weaknesses that can be exploited, causing a material breach. Of the nearly $10 billion of costs these two contractors incurred during fiscal years 2013 through 2015, only $1.3 billion was suitable for analysis using data analytic techniques. (See fig. 1.) DOE has not required that these contractors maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. Under federal internal control standards, managers should use quality information to achieve the entity’s objectives. To do this, managers may identify information requirements, obtain relevant data from reliable internal and external sources, and process data into information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. 
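For illustration, the two suitability tests described above, whether a contractor data set reconciles with the amount billed and whether each record carries sufficient identifying detail, could be implemented along these lines; the field names, sample records, and tolerance are illustrative assumptions rather than DOE requirements.

```python
# A sketch of the two data-suitability tests described above: (1) the
# transaction-level total reconciles with the amount billed to DOE, and
# (2) each record carries the detail (date, amount, description, cost
# code) needed for analytics. Field names are illustrative assumptions.
REQUIRED_FIELDS = ("date", "amount", "description", "cost_code")

def assess_data_suitability(transactions, amount_billed, tolerance=0.01):
    total = sum(t["amount"] for t in transactions)
    reconciles = abs(total - amount_billed) <= tolerance
    incomplete = [t for t in transactions
                  if any(not t.get(f) for f in REQUIRED_FIELDS)]
    return {"reconciles": reconciles,
            "total": round(total, 2),
            "incomplete_records": len(incomplete)}

data = [
    {"date": "2014-03-01", "amount": 1200.0,
     "description": "Pipe fittings", "cost_code": "MAT-101"},
    {"date": "2014-03-02", "amount": 800.0,
     "description": "", "cost_code": "SUB-440"},  # subcontractor line lacking detail
]
result = assess_data_suitability(data, amount_billed=2000.0)
print(result)  # {'reconciles': True, 'total': 2000.0, 'incomplete_records': 1}
```

A data set that fails the first test is not a complete universe of transactions; one that fails the second, as with the subcontractor costs discussed above, cannot be analyzed even if it is complete.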
In addition, as discussed previously, the Fraud Reduction and Data Analytics Act of 2015 established new requirements aimed at mitigating fraud risk through the development and use of data analytics, among other things. Without requiring contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government, DOE will not be well positioned to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015. Using simple analytic techniques (such as sorting and classifying), we reviewed costs charged to DOE for fiscal years 2013 through 2015 by the non-M&O contractor and identified indicators of potential improper cost charging that could be useful to guide further investigation of these charges. The purpose of employing data analytics was to identify costs that appeared unusual or out of the ordinary. Unusual costs are not necessarily fraudulent or improper but instead serve as red flags or possible indicators of improper cost charging that may warrant further review. Using data analytics, we identified unusual costs that we believe warrant further review by DOE. Examples of the costs we identified include the following. Relocation and temporary assignment costs. We identified employee permanent relocation and temporary assignment costs of nearly $26 million for the 3-year period we examined. In total, these costs were spread across 16 different cost codes in the other direct cost and labor files and seemed high. Furthermore, in reviewing the cost transactions associated with these 16 different cost codes, we identified a subset of transactions totaling $7.8 million that were unusual because they appeared to be per-diem payments but were not directly tied to an individual employee—an attribute normally associated with per-diem payments. 
Specifically, the transactions we identified were weekly lump-sum payments—averaging about $50,000 weekly—that were coded “temporary assignment per-diem paid by payroll.” None of the transactions contained information necessary to link them to the individual employees receiving payment. We also identified other transactions totaling over $2.5 million that were unusual because they did not appear to be reimbursements to employees for relocation expenses, but instead appeared to be relocation bonuses. For example, 68 payments of $25,600 each (total about $1.741 million) and 34 payments of $19,300 each (total about $656,000) were made to individual, named employees. Each transaction was connected to a permanent relocation and temporary assignment cost code and contained the cost description “miscellaneous other payments or reimbursements.” Christmas Day purchases. We identified four purchases of varying amounts ($400, $137, $81, and $11) totaling over $600 that were made from Amazon, an online retailer, on Christmas Day. There may be a valid reason for purchases that occur on a holiday, but in general, holiday purchases are considered red flags and should be scrutinized. Payments to an affiliate. We identified 455 affiliated subcontractor transactions totaling over $6.8 million. Specifically, these transactions reflect costs charged to DOE for services provided to the non-M&O prime contractor by a subcontractor that was affiliated with the prime contractor. The subcontractor, according to its website, is responsible for, among other things, monitoring supplier quality and on-time delivery for the prime contractor’s projects. Given the affiliation between the prime and subcontractor, additional scrutiny may be needed to ensure that goods and services provided by the subcontractor affiliate are competitively priced. Labor costs. 
We identified nearly 10,000 transactions totaling over $241 million in payroll costs that were included in the “other direct costs” data file instead of the labor cost data file. These transactions did not contain an earnings code (a code that indicates the type of cost, such as “straight time,” “overtime,” or other type of labor expense) that is typically assigned to labor costs. In July 2016, we provided the results of our analysis to DOE Hanford site officials, and in August 2016 we provided additional detailed information on our methodology to allow them to replicate our analysis. DOE Hanford site officials initially declined to respond to our questions about the results of our analysis. However, in December 2016, they provided a written response to our November 30, 2016, request to confirm facts about the data; in it, they disagreed with our observations and analysis and provided explanations for the cost charges we identified. Their specific explanations follow. Regarding relocation and temporary assignment costs, according to the Hanford site’s written response, appendix A of the Hanford Site Stabilization Agreement establishes daily travel pay rates for construction employees. The lump-sum per-diem payments totaling $7.8 million were “travel-to-the-site payments,” which are authorized by the Hanford Site Stabilization Agreement and are charged to DOE in lump-sum amounts because charging for individual (or daily) trips for hundreds of workers would be too onerous and inefficient. The transactions totaling over $2.5 million, which were identified as “miscellaneous other payments or reimbursements,” were “living-away-from-home-option” costs, which are consistent with the Advance Understanding on Costs agreement that DOE has with the contractor.
The site’s written response also states that Hanford officials expected relocation and temporary assignment costs to be significant because the Advance Understanding on Costs agreement authorizes such costs. As evidence that these charges were appropriate, Hanford officials provided us with a copy of the Advance Understanding on Costs agreement and the Hanford Site Stabilization Agreement. Hanford officials did not provide other documentation to support the appropriateness of these charges, and it is unclear how the site can substantiate per-diem payments if they are not associated with individual employees. Regarding the Christmas Day purchases on Amazon, according to the written response, the contractor’s accounting software, which uses batch processing, generates transaction posting dates that may appear to fall on a holiday when in fact the purchases were made before the holiday. For example, the transaction date for the Amazon purchases we identified was December 25, but these purchases were actually made on October 29 and November 4, according to the documentation DOE provided. However, DOE did not provide information regarding how it might isolate holiday purchases, given that transaction dates in the contractor’s system did not necessarily reflect the date of purchase. Our review of the site’s invoice review procedures found that the Hanford Office does not specifically target for review transactions that fall on or around holidays. Moreover, if the contractor’s use of batch processing overrides the transaction date of a purchase, it is unclear how DOE can reliably determine the validity of costs charged to the government. Regarding each of the cost categories we identified, according to its written response, the Hanford Office has reviewed each of “these type” of expenses as part of its post payment invoice review process and found them to be proper.
However, our review of all the transactions the Hanford Office reported sampling and reviewing for fiscal years 2013 through 2015 found that the Hanford Office had reviewed very few of the transactions we identified through our use of data analytics. Specifically, as part of its regular selective invoice reviews, the Hanford Office reviewed 4 relocation and temporary assignment transactions identified as “miscellaneous other payments or reimbursements,” 4 of the subcontractor-affiliated transactions, and 1 of the nearly 10,000 payroll cost transactions that were included in the “other direct costs” data file instead of the “labor” file. The Hanford Office did not review any of the $7.8 million in lump-sum per-diem payments or the Christmas Day purchases on Amazon. In addition to the costs we identified above, we had initially identified $2 million in costs for equipment depreciation expenses billed to DOE that we thought were unusual until DOE officials provided us with information that clarified our understanding of the contractor’s data. Specifically, DOE officials explained that the entire description of the account we were examining was “depreciation or purchase” and that, in response to our observations, the Hanford Office reviewed two transactions from this account and found that they were purchases and not depreciation. Although DOE’s clarification resolved our initial reason for flagging these costs, the new information raised other questions regarding the use of a single cost code to track dissimilar costs. The FAR requires that costs be allowable, reasonable, and allocable to the contract. Unless contractor costs are submitted in a manner that allows DOE to distinguish between depreciation expenses and purchases without having to review every cost submitted under a single cost code, it is unclear how DOE can ensure that costs are allowable and allocable.
In addition, the contractor’s use of a single cost code to track dissimilar costs undermines DOE’s ability to identify potentially improper cost charges using data analytics. Data analytics, as discussed previously, enable an organization to analyze transactional data to obtain insights into the operating effectiveness of internal controls and to identify improper cost charges, indicators of fraud, or actual fraudulent activities. Because automated checks are less labor-intensive than traditional control mechanisms, such as manual checks, automating data analytic tests can allow managers to monitor large amounts of data more efficiently. Regarding the usefulness of performing data analytics, DOE officials told us that a data analytic analysis would not be cost-effective because it produced too many false positives—that is, unusual transactions that are later determined to be legitimate. In addition, they said that until recently there was no requirement to perform data analytics and, because it has not been required, they have not devoted the time or manpower to developing and implementing data analytic tools and techniques. DOE officials said that they agreed that our review may have helped identify how the use of data analytics can be expanded at DOE but said that performing data analytics would require DOE to complete “other steps” in our Fraud Risk Framework before deciding to design and implement additional analytics. However, practices in the Fraud Risk Framework are not necessarily meant to be sequential or interpreted as a step-by-step process. According to the Fraud Risk Framework, effective fraud risk managers collect and analyze data on identified fraud trends and use them to improve fraud risk management activities. For instance, managers may revise data analytic tests based on identified fraud schemes to better identify these schemes in the future. 
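The kinds of automated tests described above can be illustrated with a short sketch. The transaction records, field names, and red-flag rules below are hypothetical, constructed only to mirror the indicators discussed in this report (holiday-dated purchases, lump-sum per-diem payments not tied to an individual employee, and labor costs missing an earnings code); they are not the contractor’s actual data or schema.

```python
from datetime import date

# Hypothetical transactions; field names are illustrative, not the
# contractor's actual schema.
transactions = [
    {"date": date(2014, 12, 25), "amount": 400.0, "desc": "Amazon purchase",
     "employee_id": None, "earnings_code": None},
    {"date": date(2014, 6, 6), "amount": 50_000.0,
     "desc": "temporary assignment per-diem paid by payroll",
     "employee_id": None, "earnings_code": None},
    {"date": date(2014, 6, 9), "amount": 1_200.0, "desc": "straight time labor",
     "employee_id": "E123", "earnings_code": "ST"},
    {"date": date(2014, 6, 10), "amount": 900.0, "desc": "payroll adjustment",
     "employee_id": "E456", "earnings_code": None},
]

# Simplified holiday list for the flag on holiday-dated purchases.
HOLIDAYS = {date(2014, 1, 1), date(2014, 12, 25)}

def red_flags(txn):
    """Return a list of red-flag labels for one transaction (empty if none)."""
    flags = []
    if txn["date"] in HOLIDAYS:
        flags.append("holiday-dated purchase")
    if "per-diem" in txn["desc"] and txn["employee_id"] is None:
        flags.append("per-diem not tied to an employee")
    if ("payroll" in txn["desc"] or "labor" in txn["desc"]) and txn["earnings_code"] is None:
        flags.append("labor cost missing earnings code")
    return flags

# Transactions carrying at least one flag, keyed by description.
flagged = {t["desc"]: red_flags(t) for t in transactions if red_flags(t)}
```

Red flags can overlap: a lump-sum per-diem payment routed through payroll trips both the per-diem rule and the missing-earnings-code rule, which is acceptable because each flag only marks a transaction for further human review, not as improper.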
However, because DOE has not employed data analytics, as discussed previously, the department has not benefitted from the experience of designing, implementing, and improving its analytic procedures. As a result, the department is not well positioned to implement the requirements of the Fraud Reduction and Data Analytics Act of 2015. DOE’s approach to managing its risk of fraud and other improper payments relies on traditional cost-surveillance procedures, which include prepayment invoice reviews for its non-M&O contracts and post payment incurred cost audits for both its M&O and non-M&O contracts. The effectiveness of DOE’s approach, however, is hampered by shortcomings in control activities (policies and procedures). Without a department-wide invoice review policy or well-documented procedures, DOE management does not have assurance that invoice reviews are being performed or that these control activities are operating as intended. In addition, DOE has not used leading practices in its approach to managing its risk of fraud and other improper payments. In particular, DOE has not (1) created a structure with a dedicated entity to lead fraud risk management activities; (2) conducted fraud risk assessments that are tailored to its programs in order to develop a fraud risk profile; (3) developed and documented a strategy to mitigate assessed fraud risks; or (4) designed and implemented specific control activities, such as data analytics, to prevent and detect fraud and other improper payments. Without implementing these selected leading practices for managing its risk of fraud, DOE is missing an opportunity to organize and focus its resources in a way that would allow the department to mitigate the likelihood and impact of fraud. 
Finally, in applying data analytics to data from selected DOE contracts, our work demonstrated that with complete data that are sufficiently detailed, data analytics can be used to efficiently and more comprehensively monitor contractor costs. However, much of the cost data we requested from one DOE contractor and some data from the other were not sufficiently detailed for applying data analytics. DOE has not required that its contractors maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. Without requiring contractors to maintain such data—including cost data that, at a minimum, represent a full data population and contain the details necessary to determine the nature of each cost transaction—DOE will not be well positioned to meet the requirements of the Fraud Reduction and Data Analytics Act of 2015 and employ data analytic techniques as a means to more efficiently monitor contractor costs and manage its risk of fraud and other improper payments. We recommend that the Secretary of Energy take the following six actions. To allow DOE management to effectively monitor invoice reviews and have assurance that this control activity is operating as intended, establish a DOE-wide invoice review policy that includes requirements for sites to establish well-documented invoice review operating procedures.
To help DOE take a more strategic approach to managing improper payments and risk, including fraud risk, implement the following leading practices for managing the department’s risk of fraud: create a structure with a dedicated entity within DOE to design and oversee fraud risk management activities; conduct fraud risk assessments that are tailored to each program and use the assessments to develop a fraud risk profile; develop and document an antifraud strategy that describes the programs’ approaches for addressing the prioritized fraud risks identified during the fraud risk assessment; and design and implement specific control activities, including fraud awareness training and data analytics, to prevent and detect fraud and other improper payments. To help ensure that necessary data are available to employ data analytics as a tool to perform contractor cost-surveillance activities, require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government, including cost data that, at a minimum, represent a full data population and the details necessary to determine the nature of each cost transaction, with such identifiers as transaction date, dollar amount, item or service description, and transaction codes to indicate the type of cost represented (e.g., construction materials, property lease, and office supplies). We provided DOE with a draft of this report for its review and comment. DOE provided written comments, which are reproduced in appendix III, and technical comments that were incorporated as appropriate. In its written comments, DOE generally concurred in principle with five recommendations but did not concur with the sixth, which is aimed at ensuring that DOE has the necessary data available to employ data analytics. DOE’s Office of Inspector General (OIG) also provided written comments, which are reproduced in appendix IV. 
We incorporated some of the OIG’s suggested language regarding its role in the Cooperative Audit Strategy. DOE generally concurred in principle with five of our recommendations. In its letter, DOE agreed to (1) establish a DOE-wide invoice review policy that includes requirements for sites to establish well-documented invoice review operating procedures; (2) create a structure with a dedicated entity within DOE to design and oversee fraud risk management activities—but stated that it will have to consider the cost, benefits, and need for a separate organization before implementing a dedicated antifraud entity to design and oversee fraud risk management activities; (3) conduct fraud risk assessments that are tailored to each program and use the assessments to develop a fraud risk profile; (4) develop and document an antifraud strategy that describes the programs’ approaches for addressing the prioritized fraud risks identified during the fraud risk assessment; and (5) design and implement specific control activities, including fraud awareness training and data analytics, to prevent and detect fraud and other improper payments. DOE states that it has already implemented, or is in the process of implementing, each of these five recommendations. We will continue to monitor DOE’s efforts to implement these changes and address our recommendations. DOE did not, however, concur with our sixth recommendation to require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. In its letter, DOE states that it does not concur because the recommendation would establish agency-specific requirements for DOE contractors that are more prescriptive than current federal requirements and because its M&O contractors, not DOE, are responsible for performing data analytics and determining what data are needed to do so.
Based on DOE’s response, we are concerned that it does not fully appreciate its responsibility for overseeing contractor costs. Specifically: DOE disagreed with our recommendation because it asserted that implementing the recommendation would require DOE to establish agency-specific requirements for DOE contractors that are more prescriptive than current federal requirements. However, under the FAR, agencies are authorized to establish their own agency-specific requirements governing contracts. Under federal internal control standards, managers should use quality information to achieve the entity’s objectives. To do this, managers may identify information requirements, obtain relevant data from reliable internal and external sources, and process data into information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. DOE also stated that its fiscal year 2017 internal control evaluations guidance requires M&O contractors to apply data analytics, as appropriate, and that federal employees assess the contractors’ implementation of fraud risk activities, such as the use of data analytic tools to identify fraud risk factors. DOE’s letter, however, does not acknowledge that it has a responsibility for employing data analytics under the Fraud Reduction and Data Analytics Act of 2015. Instead, DOE’s letter states that under the M&O contracting model, the contractor is responsible for performing data analytics. The act—which is intended to improve federal agencies’ development and use of data analytics for the purpose of identifying, preventing, and responding to fraud, including improper payments—does not specifically authorize DOE (or any other agency) to delegate its fraud management responsibilities to a contractor or any other nonfederal entity. The use of some data analytic techniques by its contractors does not relieve DOE of its responsibility to establish and maintain an effective fraud risk management framework.
In addition, as we discuss in our report, the one M&O contractor we examined was unable to produce data suitable for data-analytic techniques to yield meaningful results. We continue to believe that the use of data-analytic techniques by DOE employees could help mitigate some of the challenges that limit the effectiveness of DOE’s approach for overseeing M&O contractor costs. However, effectively applying data analytics depends on the availability of complete and sufficiently detailed contractor data. Therefore, we continue to believe that DOE needs to implement our recommendation and require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. Although DOE did not concur with our sixth recommendation, DOE’s letter states that it will discuss the merits of government-wide guidance for applying data analytics to contract costs with the data analytics working group that OMB is required to establish as part of the Fraud Reduction and Data Analytics Act of 2015. DOE stated that if the working group determines that there is a need for contractors to retain and provide additional data to support data analytic procedures, any proposed new requirement should be discussed with the FAR Council; the OMB Office of Federal Procurement Policy; and, potentially, the OMB Office of Intergovernmental and Regulatory Affairs. However, the purpose of the working group is to share "financial and administrative controls" and "data analytics techniques." In other words, it is an information-sharing entity intended to facilitate the exchange of fraud management best practices. It is not an implementing body, and agencies do not need its permission before proceeding with fraud risk reduction efforts. The law does not prohibit DOE (or any other agency) from acting unless and until there is interagency consensus on an issue.
In addition to DOE’s response to our recommendations, DOE’s letter states that the department is concerned with the accuracy of statements throughout the report. Specifically, DOE states that it has invoice review procedures and uses data analytics in its internal control processes. We disagree. As we discuss in our report, officials with the Office of the CFO at DOE headquarters told us that DOE does not have department-wide invoice review policies and procedures. Instead, according to these officials, field CFOs and contracting officials are responsible for developing appropriate invoice review policies and procedures. Notably, in our query of all DOE sites, we found that most did not have well-documented invoice review procedures. Regarding the use of data analytics, DOE officials stated that DOE’s contractors use some data-analytic techniques. However, as we discuss in our report, most DOE sites in our query of all sites do not use data analytics. Further, as discussed in the report, we reviewed one of DOE’s large M&O contractors and found that cost data are not maintained in a way that supports comprehensive data analysis and that neither the contractor nor DOE was performing such analyses. In its letter, DOE also states that the report should acknowledge DOE’s compliance with requirements in effect at the time of our review. Our work was not designed as a compliance audit to test the effectiveness of DOE’s internal financial controls. Our report examined the extent to which DOE’s approach to managing its risk of fraud and other improper payments incorporates leading practices, such as the use of data analytics. We do not assert in our report that the leading practices included in GAO’s Fraud Risk Framework are requirements. However, as we discuss in our report, by not incorporating these leading practices, DOE is missing an opportunity to organize and focus its resources in a way that would allow the department to mitigate the likelihood and impact of fraud.
As agreed to with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. To examine the Department of Energy’s (DOE) approach to managing its risk of fraud and other improper payments and challenges, if any, that may limit the effectiveness of this approach, we took the following steps. We reviewed the Federal Acquisition Regulation (FAR), Office of Management and Budget (OMB) requirements and Presidential memorandums, federal legislation regarding improper payments, our Standards for Internal Control in the Federal Government, our A Framework for Managing Fraud Risk in Federal Programs, and standards and guidance of the Institute of Internal Auditors to identify federal requirements and best practices for prevention and detection of fraud and other improper payments. To identify DOE’s agency-wide approach to managing its risk of fraud and improper payments, including key internal controls over financial and accounting operations and for contractor oversight, we reviewed DOE regulations, directives, procedures, and guidance, and we interviewed DOE officials from headquarters organizations, including the Office of the Chief Financial Officer (CFO), the office of the Chief Risk Officer, the Office of Acquisition Management, and the Office of Inspector General (OIG). 
To identify DOE’s approach to managing its risk of fraud and improper payments in its field locations, we developed a semistructured interview, which was administered to officials at DOE field locations that oversee at least one prime contractor. Through review of DOE documents and discussions with officials in the Office of the CFO, we identified 10 field office locations responsible for oversight of at least one prime contractor, and we determined that 6 of those sites oversaw at least one non-M&O contractor. To develop the interview questions, we reviewed OMB Circular A-123, federal internal control standards, and the Fraud Risk Framework, and we identified key controls and leading practices for prevention and detection of fraud and other improper payments. We pretested interview questions and made changes to the interview guide as appropriate; we conducted these semistructured interviews with DOE’s field CFOs, contracting officers, and major contractors at each site. We also collected DOE policies and procedures for oversight and review of contractor costs from each site. We analyzed DOE and contractor responses and information provided through the semistructured interview process and summarized DOE’s approach to managing its risk of fraud and improper payments in its field locations. To gain an in-depth understanding of the local DOE processes for oversight of contractors’ costs, we visited DOE’s Hanford Office in Washington State and the National Nuclear Security Administration’s (NNSA) Office of Financial Performance in Albuquerque, New Mexico, and held discussions with DOE officials responsible for financial and administrative oversight of prime contractors at the sites. To identify challenges to DOE’s approach, we reviewed DOE internal assessments, OIG reports, and a DOE-commissioned study on DOE’s contract administration practices.
We also interviewed officials from the DOE OIG audit and investigations units in headquarters and in the field to further identify and discuss additional challenges DOE faces in using its approach. To examine the extent to which DOE’s approach incorporates leading practices, such as the use of data analytics, through our review of standards and guidance of the Institute of Internal Auditors, federal internal control standards, and our Fraud Risk Framework, we identified key leading practices for managing the risk of fraud and improper payments in the federal government. The Fraud Risk Framework consists of four components—commit, assess, design and implement, and evaluate and adapt—each of which consists of an overarching fraud risk management concept and leading practices for carrying out that concept. To ensure that we had a cross section of leading practices, we selected at least one leading practice from each component of the Fraud Risk Framework: commit to combating fraud by creating an organizational culture and structure that is conducive to fraud risk management, plan regular fraud risk assessments and assess risks to determine a fraud risk profile, and design and implement a strategy with specific control activities to mitigate assessed fraud risks. After determining that DOE had not adopted fraud risk management activities that incorporated leading practices from the first three components, we did not assess whether DOE was evaluating and adapting its use of leading fraud risk management practices. We selected these leading practices from each component because their use could be objectively verified. We then compared DOE’s approach to managing its risk of fraud and improper payments, including our analyses and summary of its policies and procedures for oversight of its contractors, with the key leading practices and identified similarities and differences between these practices and DOE’s approach.
To examine the application of data analytics in identifying potential indicators of fraud or other improper payments associated with selected DOE contracts, we planned to review costs charged to DOE by one management and operating (M&O) contractor and one non-M&O contractor. We selected these contractors for in-depth review based on type of contractor, contract size in dollars, and ease of access to contractor data. Specifically, we selected one M&O and one non-M&O contract to review because these types of contracts charge costs to DOE differently and we wanted to capture this variation in our review. We chose two contracts that were large in terms of dollars charged to DOE in order to have two large data sets with many types of expenses to analyze. We selected the non-M&O contractor at the Hanford Site for ease of access to the data and proximity to our offices for follow-up on data questions and issues. We selected the M&O contractor because it is co-located with DOE’s NNSA Office of Financial Performance, the field office responsible for oversight of all NNSA contractors, also making it much easier to follow up on data questions and issues. We requested 3 years of cost data charged to DOE by each contractor during fiscal years 2013 through 2015. Non-M&O contractor analysis. We requested data from Bechtel National, Inc., the non-M&O contractor responsible for the design and construction of the Waste Treatment Plant at DOE’s Hanford site. DOE provided the requested cost data for the non-M&O contractor in 72 files, and each file was separated into two types of costs: labor costs and other direct costs. These files contained fields regarding the natural class, source reference number and descriptions, cost accounting code, control account description, and others. We combined these data into two data sets, one set for labor costs and one set for other direct costs.
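The consolidation step described above, combining many delivered files into labor and other-direct-cost data sets and then checking the combined totals against agency-provided control totals, can be sketched as follows. The file names, cost codes, amounts, and control totals are hypothetical; an actual implementation would read the 72 delivered files rather than an in-memory dictionary.

```python
# Hypothetical per-file cost records; each row is (cost_type, cost_code, amount).
# File names and figures are illustrative only.
files = {
    "invoice_2013_01.csv": [("labor", "L100", 1_000.0), ("odc", "D200", 250.0)],
    "invoice_2013_02.csv": [("labor", "L100", 1_500.0), ("odc", "D300", 75.0)],
}

# Combine the delivered files into two data sets: labor and other direct costs.
labor, other_direct = [], []
for name, rows in files.items():
    for cost_type, code, amount in rows:
        record = {"file": name, "code": code, "amount": amount}
        (labor if cost_type == "labor" else other_direct).append(record)

# Reliability check: combined totals must tie to control totals provided
# separately by the agency; a mismatch would indicate an incomplete population.
control_totals = {"labor": 2_500.0, "odc": 325.0}
assert sum(r["amount"] for r in labor) == control_totals["labor"]
assert sum(r["amount"] for r in other_direct) == control_totals["odc"]
```

The control-total assertion is the key design choice: analyses of an unverified combined data set cannot distinguish a clean population from one with dropped or duplicated files.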
To determine the reliability of these data, we (1) conducted a series of interviews with DOE officials responsible for the data to understand how the data are maintained and verified; (2) performed data testing, including checking totals in the data against control totals provided by the agency, as well as examination of outliers and missing data; and (3) reviewed the data dictionary. We determined the data were sufficiently reliable for the purposes of this engagement. We performed a variety of analyses of these data, including examining distributions of variables, classification of costs into categories, cross-tabulation, and trend analysis. For example, we summarized both the labor and other direct costs data by type of cost. We reviewed the results of these analyses and identified certain costs that could potentially be unallowable as defined in the FAR and that warranted further review. For some of the potentially unallowable costs we identified, we examined the details of the transactions to help us to identify the type and/or purpose of the costs represented. To validate our findings, we provided a detailed briefing to DOE on the results of our analyses and at that time we requested additional information about the purpose and allowability of the potentially unallowable costs we identified. DOE did not respond to our request to provide us with this information. DOE did, however, provide us with a file of individual cost transactions that it examined in connection with its review of the 72 invoice files from fiscal years 2013 through 2015. We performed a variety of analyses of these data, including, for example, classification of the costs into categories and cross-tabulating this information with the labor and other direct costs data summaries. M&O contractor analysis. We requested data from Sandia Corporation, the M&O contractor responsible for managing and operating Sandia National Laboratories.
DOE was unable to provide us with the requested data in a format that was suitable for analysis. Specifically, the contractor tracked costs by project in several sub-accounting systems and could not produce a full data population of sufficiently detailed transaction-level data for any of the over $8 billion in costs it incurred and claimed during the fiscal years 2013 through 2015 time frame we examined. In addition, the contractor did not identify costs by the cost types identified in the FAR. According to representatives of the M&O contractor and documents it provided, the contractor’s core accounting system generates financial information for both internal and external use through the use of project accounting and general ledger modules. Specifically, the contractor’s project accounting module generates information for internal management use, and the general ledger module generates information for external reporting purposes. However, neither the project accounting nor the general ledger module contained transaction-level cost data suitable for data analytics. The information contained in the contractor’s project accounting module did not have the expenditure detail needed to effectively perform data analytics, according to documents provided by the contractor. The contractor’s project accounting module tracks four cost categories: labor, chargebacks, travel, and purchases. Costs within each of these cost categories were not further identified by expense type, such as construction materials, property lease, or office supplies.
The information contained within the contractor’s general ledger module also did not contain the expenditure detail needed to effectively perform data analytics. According to representatives of the M&O contractor and documents they provided, the contractor’s general ledger system is not set up to function like the ledgers used by nongovernmental businesses. A general ledger system, according to the contractor, would normally contain detailed information that would define the expenditure type and associated detail of the expenditure that could then allow analytics to be performed. Furthermore, the contractor told us that it does not produce financial statements and DOE does not require its contractors to report transactional detail to support the agency’s preparation of consolidated financial statements. Consequently, the contractor’s general ledger system does not contain the detailed information needed to allow analytics to be performed. According to representatives of the M&O contractor and documents they provided, although transaction-level cost data are not maintained in the project accounting or general ledger modules, detailed cost information is found in several of the contractor’s sub-accounting systems. Specifically, they said that the M&O contractor maintains several sub-accounting systems that separately process and capture transactions by type, such as travel, purchase card, and employee expense voucher systems. Data from the sub-accounting systems are summarized and used to populate the contractor’s project accounting and general ledger modules, according to contractor representatives and documents provided. Notably, the M&O contractor at Sandia does not meet the financial management system standards it sets for prospective subcontractors. According to the M&O contractor’s guidance for prospective subcontractors, an adequate accounting system must be able to collect, process, and report costs. 
It should be able to break out costs by cost element, and cost elements used should be easily traceable to the general ledger and the financial statements. As discussed above, the M&O contractor’s financial system does not enable cost elements to be easily traceable to the contractor’s general ledger, and the contractor does not produce corporate financial statements. Representatives of the contractor told us that they have processes and controls in place that ensure that cost information from their subsystems reconciles with amounts charged to DOE. However, documentation the contractor provided us regarding costs contained in each subsystem did not reconcile with amounts included on the contractor’s statement of costs incurred and claimed, and contractor officials could not confirm that the transactional expenditures pulled from the sub-accounting systems were reconciled with amounts charged to DOE. Instead, these officials suggested that we use data analytics on the subset of data contained in each of the sub-accounting systems—an approach they told us they use to ensure that the financial information they are reporting to DOE is proper. Unless the transactional expenditures pulled from the contractor’s sub-accounting systems are reconciled with amounts charged to DOE, however, there is no assurance that the data are complete. Without complete data, meaningful analysis using data analytics is not possible. According to DOE’s contract with the M&O contractor, the contractor’s financial management systems are to be responsive to the responsibilities of sound financial stewardship and public accountability. The overall system is to include an integrated accounting system suitable to collect, record, and report all financial activities; a budgeting system for the formulation and execution of resource requirements; a disbursements system for employee payroll and supplier payments; and an effective internal control system for all expenditures. 
Given the difficulty in producing transaction-level cost data that are reconcilable to the amounts charged to DOE, it is unclear how DOE ensures that the M&O contractor at Sandia meets these requirements.

In addition to the contact named above, Diane LoFaro (Assistant Director), David Dornish, Farrah Graham, Mark Keenan, Courtney Liesener, Andrew Moore, and Kathryn Pedalino made key contributions to this report.
Over the past decade, incidents of fraud by DOE contractors have occurred. From 2003 through 2008, employees of one contractor at DOE's Hanford site in Washington state made hundreds of fraudulent purchases and solicited and received kickbacks. In another case, Hanford contractors agreed to pay a combined $125 million to settle disputed claims regarding federal dollars spent on nonnuclear-compliant parts. To help federal program managers combat fraud, in July 2015, GAO issued leading practices for managing fraud risks. GAO was asked to review DOE's processes, programs, and practices for managing its risk of fraud. This report examines (1) DOE's approach to managing its risk of fraud and other improper payments and challenges, if any, that may limit the effectiveness of this approach; (2) the extent to which DOE's approach incorporates leading practices; and (3) the application of data analytics in identifying potential indicators of fraud or other improper payments associated with selected DOE contracts. The Department of Energy (DOE) manages the risk of fraud and improper payments through its internal controls program, which includes, among other things, prepayment invoice reviews and postpayment audits. However, several challenges limit the effectiveness of this approach. For example, DOE does not have a department-wide invoice review policy or well-documented procedures at five of the six sites with invoice review responsibilities. Consequently, DOE has no assurance that control activities at these sites are operating as intended. Time constraints also limit the effectiveness of invoice reviews. For example, some invoices can have numerous associated transactions, and the reviews must be completed within a limited time frame before payment, which may be as short as 10 days.
DOE's approach to managing fraud risk does not incorporate leading practices such as creating a dedicated antifraud entity to lead fraud risk management activities; conducting regular fraud risk assessments that are tailored to the program; developing and documenting a strategy to mitigate assessed fraud risks; or designing and implementing specific control activities, such as data analytic activities, to prevent and detect fraud. By not implementing leading practices, DOE is missing an opportunity to organize and focus its resources in a way that would allow it to mitigate the likelihood and impact of fraud. Moreover, the Fraud Reduction and Data Analytics Act of 2015 establishes requirements aimed at improving federal agencies' controls and procedures for assessing and mitigating fraud risks through the use of data analytics. The legislation also directs the Office of Management and Budget (OMB) to, among other things, establish implementation guidelines that incorporate fraud risk management leading practices. DOE officials told GAO that they plan to meet the requirements of the act but should not be expected to implement private industry leading practices prior to the issuance of OMB guidance. Incorporating leading practices could also help DOE more effectively implement the act's requirements once OMB guidance is available. It is not possible to fully employ data analytics as a tool to identify potential indicators of fraud or other improper payments at DOE because of limitations in contractor-maintained cost data. Much of the cost data maintained by the two DOE contractors GAO selected for data analytic purposes could not be used because these data did not include a complete universe of transactions that was reconcilable with amounts billed to DOE or did not contain details necessary to determine the nature of costs charged to DOE. 
Because DOE does not require its contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to DOE, it is not well positioned to employ data analytics as a fraud detection tool. Effective fraud risk managers collect and analyze data and identify fraud trends and use them to improve fraud risk management activities, according to leading practices that GAO has previously identified. Without the detailed data necessary to conduct such analysis, DOE is missing an opportunity to develop, refine, and improve its experience with data analytic tools and techniques, and better position itself to meet the requirements of the Fraud Reduction and Data Analytics Act. GAO is making six recommendations, including that DOE establish invoice review policies and procedures, employ leading practices such as data analytics to help manage fraud risk, and require that its contractors maintain sufficiently detailed cost data for reconciling with amounts charged. DOE generally concurred with five of GAO's six recommendations but did not agree to require contractors to maintain detailed data. GAO continues to believe that the recommendation is valid, as discussed in the report.
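As one illustration of the kind of data analytics discussed above, the following sketch implements a common first-pass payment-integrity test: flagging payments that share the same vendor, amount, and date as possible duplicates. The vendors and figures are invented, and real analytic programs combine many such indicators; this is a minimal sketch, not DOE's or GAO's actual methodology.

```python
from collections import Counter

def duplicate_payment_candidates(transactions):
    """Flag payments that share vendor, amount, and date, a common
    first-pass indicator of possible duplicate or improper payments."""
    keys = [(t["vendor"], t["amount"], t["date"]) for t in transactions]
    return sorted(key for key, count in Counter(keys).items() if count > 1)

# Invented transactions for the example; the second row repeats the first.
txns = [
    {"vendor": "Acme Supply", "amount": 500.00, "date": "2015-03-01"},
    {"vendor": "Acme Supply", "amount": 500.00, "date": "2015-03-01"},
    {"vendor": "Beta Parts", "amount": 75.25, "date": "2015-03-02"},
]
print(duplicate_payment_candidates(txns))
```

A test of this kind presupposes exactly what the report finds lacking: a complete, reconciled universe of transaction-level detail to run it against.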
Patch management is a critical process used to help alleviate many of the challenges involved with securing computing systems from attack. A component of configuration management, it includes acquiring, testing, applying, and monitoring patches to a computer system. Flaws in software code that could cause a program to malfunction generally result from programming errors that occur during software development. The increasing complexity and size of software programs contribute to the growth in software flaws. For example, Microsoft Windows 2000 reportedly contains about 35 million lines of code, compared with about 15 million lines for Windows 95. As reported by the National Institute of Standards and Technology (NIST), based on various studies of code inspections, most estimates suggest that there are as many as 20 flaws per thousand lines of software code. While most flaws do not create security vulnerabilities, the potential for these errors reflects the difficulty and complexity involved in delivering trustworthy code. From 1995 through 2003, the CERT Coordination Center (CERT/CC) reported just under 13,000 security vulnerabilities that resulted from software flaws. Figure 1 illustrates the dramatic growth in security vulnerabilities during this period. As vulnerabilities are discovered, attackers can cause major damage in attempting to exploit them. This damage can range from defacing Web sites to taking control of entire systems and thereby being able to read, modify, or delete sensitive information; destroy systems; disrupt operations; or launch attacks against other organizations’ systems. Attacks can be launched against specific targets or widely distributed through viruses and worms. The sophistication and effectiveness of cyber attacks have steadily advanced. According to security researchers, reverse-engineering patches has become a leading method for exploiting vulnerabilities. 
Using the same tools that programmers use to analyze malicious code and perform vulnerability research, hackers can locate the vulnerable code in unpatched software and build an exploit for it. Reverse engineering starts by locating the files or code that changed when a patch was installed. Then, by comparing the patched and unpatched versions of those files, a hacker can examine the specific functions that changed, uncover the vulnerability, and exploit it. A spate of new worms has been released since February—most recently last month—and more than half a dozen new viruses were unleashed. The worms were variants of the Bagle and Netsky viruses. The Bagle viruses typically included an infected e-mail attachment containing the actual virus; the most recent versions have protected the infected attachment with a password, preventing anti-virus scanners from examining it. The recent Netsky variants attempted to deactivate two earlier worms and, when executed, reportedly make a loud beeping sound. Another worm known as Sasser, like the Blaster worm discussed later, exploits a vulnerability in the Microsoft Windows operating system, while the Witty worm exploits a flaw in certain Internet security software products. The number of computer security incidents within the past decade has risen in tandem with the dramatic growth in vulnerabilities, as the increased number of vulnerabilities provides more opportunities for exploitation. CERT/CC has reported a significant growth in computer security incidents—from about 9,800 in 1999 to over 82,000 in 2002 and over 137,500 in 2003. And these are only the reported attacks. The director of the CERT Centers has estimated that as much as 80 percent of actual security incidents go unreported, in most cases because there were no indications of penetration or attack, the organization was unable to recognize that its systems had been penetrated, or the organization was reluctant to report the attack.
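The first step of the reverse-engineering process described above, locating the files that changed when a patch was installed, can be sketched by comparing file hashes captured before and after patching. The file names and contents below are invented for the example, and the sketch deliberately stops at identifying changed files; it performs no vulnerability analysis or exploitation.

```python
import hashlib

def digest(data: bytes) -> str:
    """Fingerprint file contents so snapshots can be compared cheaply."""
    return hashlib.sha256(data).hexdigest()

def changed_files(before, after):
    """Return the names of files whose hashes differ between the
    pre-patch and post-patch snapshots; these are where the fix, and
    hence the underlying flaw, is likely to be found."""
    return sorted(
        name for name in before
        if name in after and before[name] != after[name]
    )

# Invented snapshots: only one library was modified by the patch.
pre = {"kernel32.dll": digest(b"version 1"), "netapi.dll": digest(b"version 1")}
post = {"kernel32.dll": digest(b"version 1"), "netapi.dll": digest(b"version 2")}
print(changed_files(pre, post))
```

The ease of this comparison is precisely why the interval between patch release and exploit release has shrunk: the patch itself points attackers to the vulnerable code.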
Figure 2 shows the number of incidents reported to CERT/CC from 1995 through 2003. According to CERT/CC, about 95 percent of all network intrusions could be avoided by keeping systems up to date with appropriate patches; however, such patches are often not quickly or correctly applied. Maintaining current patches is becoming more difficult, as the length of time between the awareness of a vulnerability and the introduction of an exploit is shrinking. For example, the recent Witty worm was released only a day after the announcement of the vulnerability it attacked. As figure 3 illustrates, in the last 3 years, the time interval between the announcement of a particular vulnerability and the release of its associated worm has diminished dramatically. Although the economic impact of a cyber attack is difficult to measure, a recent Congressional Research Service study cites members of the computer security industry as estimating that worldwide, major virus attacks in 2003 cost $12.5 billion. They further project that economic damage from all forms of digital attacks in 2004 will exceed $250 billion. Following are examples of significant damage caused by worms that could have been prevented had the available patches been effectively installed: ● On January 25, 2003, Slammer reportedly triggered a global Internet slowdown and caused considerable harm through network outages and other unforeseen consequences. As discussed in our April 2003 testimony on the security of federal systems and critical infrastructures, the worm reportedly shut down a 911 emergency call center, canceled airline flights, and caused automated teller machine failures. According to media reports, First USA Inc., an Internet service provider, experienced network performance problems after an attack by the Slammer worm, due to a failure to patch three of its systems. 
Additionally, the Nuclear Regulatory Commission reported that Slammer also infected a nuclear power plant’s network, resulting in the inability of its computers to communicate with each other, disrupting two important systems at the facility. In July 2002, Microsoft had released a patch for its software vulnerability that was exploited by Slammer. Nevertheless, according to media reports, Slammer infected some of Microsoft’s own systems. Reported cost estimates of Slammer damage range between $1.05 billion and $1.25 billion. ● On August 11, 2003, the Blaster worm was launched to exploit a vulnerability in a number of Microsoft Windows operating systems. When successfully executed, it caused the operating system to fail. Although the security community had received advisories from CERT/CC and other organizations to patch this critical vulnerability, Blaster reportedly infected more than 120,000 unpatched computers in its first 36 hours. By the following day, reports began to state that many users were experiencing slowness and disruptions to their Internet service, such as the need to reboot frequently. The Maryland Motor Vehicle Administration was forced to shut down, and systems in both national and international arenas were also affected. Experts consider Blaster, which affected a range of systems, to be one of the worst exploits of 2003. Microsoft reported that the Blaster worm has infected at least 8 million Windows computers since last August. ● On May 1 of this year, the Sasser worm was reported, which exploits a vulnerability in the Windows Local Security Authority Subsystem Service component. This worm can compromise systems by allowing a remote attacker to execute arbitrary code with system privileges. According to US-CERT (the United States Computer Emergency Readiness Team), systems infected by this worm may suffer significant performance degradation. 
Sasser, like last year’s Blaster, exploits a vulnerability in a component of Windows by scanning for vulnerable systems. Estimates by Internet Security Systems, Inc., place the Sasser infections at 500,000 to 1 million machines. Microsoft has reported that 9.5 million patches for the vulnerability were downloaded from its Web site in just 5 days. The federal government has taken several steps to address security vulnerabilities that affect agency systems, including efforts to improve patch management. Specific actions include (1) requiring agencies to annually report on their patch management practices as part of their implementation of FISMA, (2) identifying vulnerability remediation as a critical area of focus in the President’s National Strategy to Secure Cyberspace, and (3) creating US–CERT. FISMA permanently authorized and strengthened the information security program, evaluation, and reporting requirements established for federal agencies in prior legislation. In accordance with OMB’s reporting instructions for FISMA implementation, maintaining up-to-date patches is part of system configuration management requirements. The 2003 FISMA reporting instructions that specifically address patch management practices include agencies’ status on (1) developing an inventory of major IT systems, (2) confirming that patches have been tested and installed in a timely manner, (3) subscribing to a now-discontinued governmentwide patch notification service, and (4) addressing patching of security vulnerabilities in configuration requirements. The President’s National Strategy to Secure Cyberspace was issued on February 14, 2003, to identify priorities, actions, and responsibilities for the federal government—as well as for state and local governments and the private sector—with specific recommendations for action to DHS. This strategy identifies the reduction and remediation of software vulnerabilities as a critical area of focus. 
Specifically, it identifies the need for (1) a better-defined approach to disclosing vulnerabilities, to reduce their usefulness to hackers in launching an attack; (2) creating common test beds for applications widely used among federal agencies; and (3) establishing best practices for vulnerability remediation in areas such as training, use of automated tools, and patch management implementation processes. US-CERT was created last September by DHS's National Cyber Security Division (NCSD) in conjunction with CERT/CC and the private sector. Specifically, US-CERT is intended to aggregate and disseminate cyber security information to improve warning and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection. This free service—which includes notification of software vulnerabilities and sources for applicable patches—is available to the public, including home users and both government and nongovernment entities. Common patch management practices—such as establishing and enforcing standardized policies and procedures and developing and maintaining a current technology inventory—can help agencies establish an effective patch management program and, more generally, assist in improving an agency's overall security posture. Our survey results showed that the 24 agencies are implementing some practices for effective patch management, but not others. Specifically, all report that they have some level of senior executive involvement in the patch management process and cited the chief information security officer (CISO) as being the individual most involved in the patch management process. The CISO is involved in managing risk, ensuring that appropriate resources are dedicated, training computer security staff, complying with policies and procedures, and monitoring the status of patching activities.
Other areas in which agencies report implementing common patch management practices are in performing a systems inventory and providing information security training. All 24 agencies reported that they develop and maintain an inventory of major information systems as required by FISMA and do so using a manual process, an automated tool, or an automated service. Additionally, most of the 24 agencies reported that they provide both on-the-job and classroom training in computer security, including patch management, to system owners, administrators, and IT security staff. However, agencies are inconsistent in developing patch management policies and procedures, testing of patches, monitoring systems, and performing risk assessments. Specifically, not all agencies have established patch management policies and procedures. Eight of the 24 surveyed agencies report having no policies and 10 do not have procedures in place. Additionally, most agencies are not testing all patches before deployment. Although all 24 surveyed agencies reported that they test some patches against their various systems configurations before deployment, only 10 agencies reported testing all patches, and 15 agencies reported that they do not have any testing policies in place. Moreover, although all 24 agencies indicated that they perform some monitoring activities to assess their network environments and determine whether patches have been effectively applied, only 4 agencies reported that they monitor all of their systems on a regular basis. Further, just under half of the 24 agencies said they perform a documented risk assessment of all major systems to determine whether to apply a patch or an alternative workaround. Without consistent implementation of patch management practices, agencies are at increased risk of attacks that exploit software vulnerabilities in their systems. 
More refined information on key aspects of agencies' patch management practices—such as their documentation of patch management policies and procedures and the frequency with which systems are monitored to ensure that patches are installed—could provide OMB, Congress, and agencies themselves with data that could better enable an assessment of the effectiveness of an agency's patch management processes. Several automated tools and services are available to assist agencies with patch management. A patch management tool is an application that automates a patch management function, such as scanning a network and deploying patches. Patch management services are third-party resources that provide services such as notification, consulting, and vulnerability scanning. Tools and services can make the patch management process more efficient by automating otherwise time-consuming tasks, such as keeping current on the continuous flow of new patches. Commercially available tools and services include, among others, methods to inventory computers and the software applications and patches installed; identify relevant patches and workarounds and gather them in one location; group systems by departments, machine types, or other logical groupings; manage patch deployment; scan a network to determine the status of patches and other corrections made to network machines (hosts and/or clients); assess machines against set criteria, including required system configurations; access a database of patches; and report information to various levels of management about the status of the network. In addition to automated tools and services, agencies can use other methods to assist in their patch management activities. For example, although labor-intensive, they can maintain a database of the versions and latest patches for each server and each client in their network, and track the security alerts and patches manually.
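The manual tracking approach just described, maintaining a database of versions and latest patches for each machine, can be sketched as follows. The host names, product keys, and patch identifiers are invented for the example; a real inventory would be populated by scanning tools rather than hand-entered data.

```python
def unpatched_hosts(inventory, required_patches):
    """Given each host's installed patches per product and the latest
    required patch per product, report hosts missing a required patch."""
    findings = {}
    for host, installed in inventory.items():
        missing = sorted(
            f"{product}:{patch}"
            for product, patch in required_patches.items()
            if patch not in installed.get(product, set())
        )
        if missing:
            findings[host] = missing
    return findings

# Invented inventory: one host is current, one is missing the latest patch.
inventory = {
    "web01": {"windows": {"KB101", "KB102"}},
    "db01": {"windows": {"KB101"}},
}
required = {"windows": "KB102"}
print(unpatched_hosts(inventory, required))
```

Even this simple model shows why an accurate, current systems inventory is a prerequisite: a host absent from the inventory is never checked and therefore never reported as unpatched.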
Agencies can also employ systems management tools with patch-updating capabilities to deploy the patches. This method requires that agencies monitor for the latest security alerts and patches. Further, software vendors may provide automated tools with customized features to alert system administrators and users of the need to patch and, if desired, to automatically apply patches. We have previously reported on FedCIRC's Patch Authentication and Dissemination Capability (PADC), a service initiated in February 2003 to provide users with a method of obtaining information on security patches relevant to their enterprise and access to patches that had been tested in a laboratory environment. According to FedCIRC officials, this service was terminated on February 21, 2004, for a variety of reasons, including low levels of usage. In the absence of this service, agencies are left to independently perform all components of effective patch management. A centralized resource that incorporates lessons learned from PADC's limitations could provide standardized services, such as testing of patches and a patch management training curriculum. Security experts and agency officials have identified several obstacles to implementing effective patch management; these include the following: ● High volume and increasing frequency of patches. Several of the agencies we surveyed indicated that the sheer quantity and frequency of needed patches posed a challenge to the implementation of the recommended patch management practices. As increasingly virulent computer worms have demonstrated, agencies need to keep systems updated with the latest security patches. ● Patching heterogeneous systems. Variations in platforms, configurations, and deployed applications complicate agencies' patching processes. Further, their unique IT infrastructures can make it challenging for agencies to determine which systems are affected by a software vulnerability.
● Ensuring that mobile systems receive the latest patches. Mobile computers—such as laptops, digital tablets, and personal digital assistants—may not be on the network at the right time to receive appropriate patches that an agency deploys and are at significant risk of not being patched. ● Avoiding unacceptable downtime when patching systems that require high availability. Reacting to new security patches as they are introduced can interrupt normal and planned IT activities, and any downtime incurred during the patching cycle interferes with business continuity, particularly for critical systems that must be continuously available. ● Dedicating sufficient resources to patch management. Despite the growing market of patch management tools and services that can track machines that need patches and automate patch downloads from vendor sites, agencies noted that effective patch management is a time-consuming process that requires dedicated staff to assess vulnerabilities and test and deploy patches. As with the challenges to patch management identified by agencies, our report also identified a number of steps that can be taken to address the risks associated with software vulnerabilities. These include: ● Better software engineering. More rigorous engineering practices, including a formal development process, developer training on secure coding practice, and code reviews, can be employed when designing, implementing, and testing software products to reduce the number of potential vulnerabilities and thus minimize the need for patching. ● Implementing “defense-in-depth.” According to security experts, a best practice for protecting systems against cyber attacks is for agencies to build successive layers of defense mechanisms at strategic points in their IT infrastructures. This approach, commonly referred to as defense-in-depth, entails implementing a series of protective mechanisms such that if one fails to thwart an attack, another will provide a backup defense. 
● Using configuration management and contingency planning. Industry best practices and federal guidance recognize the importance of configuration management when developing and maintaining a system or network to ensure that additions, deletions, or other changes to a system do not compromise the system’s ability to perform as intended. Contingency plans provide specific instructions for restoring critical systems, including such elements as arrangements for alternative processing facilities, in case usual facilities are significantly damaged or cannot be accessed due to unexpected events such as temporary power failure, accidental loss of files, or major disaster. ● Ongoing improvements in patch management tools. Security experts have noted the need for improving currently available patch management tools. Several patch management vendors have been working to do just that. ● Research and development of new technologies. Software security vulnerabilities can also be addressed through the research and development of automated tools to uncover hard-to- see security flaws in software code during the development phase. ● Federal buying power. The federal government can use its substantial purchasing power to demand higher quality software that would hold vendors more accountable for security defects in released products and provide incentives for vendors that supply low-defect products and products that are highly resistant to viruses.
Flaws in software code can introduce vulnerabilities that may be exploited to cause significant damage to federal information systems. Such risks continue to grow with the increasing speed, sophistication, and volume of reported attacks, as well as the decreasing period of the time from vulnerability announcement to attempted exploits. The process of applying software patches to fix flaws--patch management--is critical to helping secure systems from attacks. At the request of the House Committee on Government Reform and the Subcommittee on Technology, Information Policy, Intergovernmental Relations, and the Census, GAO reviewed the (1) reported status of 24 selected agencies in performing effective patch management practices, (2) tools and services available to federal agencies, (3) challenges to this endeavor, and (4) additional steps that can be taken to mitigate risks created by software vulnerabilities. This testimony highlights the findings of GAO's report, which is being released at this hearing. Agencies are generally implementing certain common patch management-related practices, such as inventorying their systems and providing information security training. However, they are not consistently implementing other common practices. Specifically, not all agencies have established patch management policies and procedures. Moreover, not all agencies are testing all patches before deployment, performing documented risk assessments of major systems to determine whether to apply patches, or monitoring the status of patches once they are deployed to ensure that they are properly installed. Commercial tools and services are available to assist agencies in performing patch management activities. These tools and services can make patch management processes more efficient by automating time-consuming tasks, such as scanning networks and keeping up-to-date on the continuous releases of new patches. 
Nevertheless, agencies face significant challenges to implementing effective patch management. These include, among others, (1) the high volume and increasing frequency of needed patches, (2) patching heterogeneous systems, (3) ensuring that mobile systems such as laptops receive the latest patches, and (4) dedicating sufficient resources to assessing vulnerabilities and deploying patches. Agency officials and computer security experts have identified several additional measures that vendors, the security community, and the federal government can take to address the risks associated with software vulnerabilities. These include, among others, adopting more rigorous software engineering practices to reduce the number of coding errors that create the need for patches, implementing successive layers of defense mechanisms at strategic points in agency information systems, and researching and developing new technologies to help uncover flaws during software development.
IRS improved its 2006 filing season performance in important areas that affect large numbers of taxpayers. This continues a trend of improvement since at least 2002. Returns processing has gone smoothly and electronic filing continues to grow, although at a slower rate than in previous years. Taxpayer assistance has improved in the two most commonly used services—toll-free telephones and the Internet Web site. Fewer taxpayers visited IRS's walk-in sites, and more sought assistance at volunteer-staffed sites. From January 1 through March 17, 2006, IRS processed about 63 million individual income tax returns, about the same number as in the same period last year. Of those returns, 47 million were filed electronically (up 2.2 percent) and 16 million were filed on paper (down 9.8 percent). According to IRS data and officials, returns processing has gone smoothly so far this filing season. IRS issued 56 million refunds, of which 40 million, or 71 percent, were directly deposited, up 3 percentage points over the same period last year. Direct deposit is faster, more convenient for taxpayers, and less expensive for IRS than mailing paper checks. Because of the volume of tax returns, it is normal for IRS to experience some processing disruptions, although this year, disruptions have not been significant. For example, 13 different tax forms were unavailable for electronic filing until February 1 due to the late hurricane relief legislation, which caused a minor processing delay for some returns. Furthermore, IRS officials said that the new Customer Account Data Engine (CADE), which is intended eventually to replace IRS's antiquated Master File system containing taxpayer records, processed 4.3 million returns and disbursed 3.8 million refunds so far during the 2006 filing season without disruptions.
IRS is reporting that direct deposit refunds and paper check refunds are being issued within 4 and 6 business days, respectively, after tax returns are posted to CADE, which is faster than for returns processed by the Master File system. CADE’s growth in future years will directly benefit taxpayers. Not only can it speed up refunds, but it also updates taxpayer account information more quickly than the Master File system. Representatives of the tax industry corroborated IRS’s view that the filing season is going smoothly. Groups and organizations that we talked to included the National Association of Enrolled Agents, the American Institute of Certified Public Accountants, and others. In addition, the Treasury Inspector General for Tax Administration (TIGTA) recently testified that thus far it has seen no significant problems during the filing season. The growth of electronic filing is important because it generates savings by reducing the staff years needed for labor-intensive paper processing. Between fiscal years 1999 and 2006, IRS reduced the number of staff years devoted to paper and electronic processing by 1,586, or 34 percent, as shown in figure 1. Electronic filing continues to grow but at a slower rate than in previous years. This year’s 2.4 percent rate of growth is less than the 4.3 percent average annual rate of growth for the preceding 2 years. According to IRS officials, the slower growth in electronic filing this year is due, in part, to changes in the Free File program, which reduced the number of taxpayers eligible to file electronically for free this year; to reduced advertising by companies involved in that program; and to the termination of the TeleFile program, which eliminated a way for taxpayers to file their returns electronically via telephone. The Free File program enables taxpayers to file their returns electronically via IRS’s Web site. Through IRS’s Web site, taxpayers can access the Web sites of 20 companies comprising the Free File Alliance.
The alliance is a consortium of tax preparation companies that agreed to offer free return preparation and electronic filing for taxpayers who meet certain criteria (see app. I for further detail). In an amended agreement with IRS that took effect this year, the Free File Alliance set a $50,000 income limitation on taxpayer participation. This limit, which did not exist last year, reduced the number of taxpayers eligible to participate in the program. As of March 19, 2006, IRS had processed about 2.9 million Free File returns, a decrease of 23 percent from the same period last year. This decline is inconsistent with IRS’s projection that it would receive 6 million tax returns filed through the Free File program, almost a million more than last year. For 2006, IRS terminated the TeleFile program. IRS expected that eliminating TeleFile would reduce electronic filing, but justified the decision because of declining usage and relatively high costs. The number of taxpayers using the program had been decreasing—from approximately 5.7 million in 1999 to 3.8 million in 2004. IRS estimated the cost per tax return submitted through TeleFile, typically Form 1040EZ, to have been $2.63 versus $1.51 for a return filed on paper, largely due to contractor, telecommunications, and other costs. Given the limitations of IRS’s cost accounting system, the validity of these figures is unknown. IRS officials stated that this year’s increase in the number of 1040EZ returns filed on paper is due, in part, to the elimination of TeleFile. Through March 17, 2006, the number of paper 1040EZ returns has increased 18 percent from last year. Options for increasing electronic filing, in particular mandated electronic filing, will be discussed in the budget section of this statement. Taxpayers’ ability to access IRS’s telephone assistors and the accuracy of answers provided improved compared to previous years.
From January 1 through March 11, 2006, IRS answered approximately 22 million phone calls, about a 7 percent decline from the same period last year. The call volume has been less than projected by IRS and less than was assumed when IRS set staffing levels for telephone assistors for the filing season. IRS officials offered several explanations for the unexpected decline in call volume. One explanation is that more taxpayers are using improved tax preparation software, which reduces their need to call IRS. Another explanation is that more taxpayers are getting through to a telephone assistor the first time they call, thus reducing the need for taxpayers to call again. As shown in table 1, the percentage of taxpayers who attempted to reach an assistor and actually got through and received service—referred to as the level of service—was 84 percent so far this filing season, compared to 83 percent over the same period last year, and greater than IRS’s fiscal year 2006 goal of 82 percent. According to IRS officials, one possible explanation for the improvement in access is the decline in overall call volume. When call volume decreases, taxpayers are likely to wait less time to speak with an IRS telephone assistor. As a result, fewer taxpayers would likely hang up, increasing the percentage of taxpayers who get through to an assistor. IRS also reported that, so far this filing season, the average speed of answer (the length of time taxpayers wait to get their calls answered) is down 53 seconds from the same time last year to 182 seconds, a decrease of about 23 percent, and significantly better than IRS’s fiscal year 2006 goal of 300 seconds. IRS also reported that the rate at which taxpayers abandoned their calls to IRS decreased from 11.5 percent to 8.9 percent compared to the same period last year.
Using a statistical sampling process, IRS estimates that the accuracy of telephone assistors’ responses to taxpayers’ tax law and account questions improved compared to last year. IRS estimates its tax law accuracy rate to be 90.2 percent, an increase of 2.7 percentage points over the same time period last year, continuing an improvement since 2004. Additionally, IRS estimates the accuracy rate of responses to taxpayers’ inquiries about their accounts to be 92.7 percent this year, compared to 91.7 percent over the same period last year, continuing an improvement since 2003. IRS officials attribute these improvements in performance to several factors, including better and more timely performance feedback for telephone assistors, increased assistor experience, better training, and increased use of the Probe and Response Guide, a script used by telephone assistors to understand and respond to tax law questions. Use of IRS’s Web site has increased so far this filing season compared to prior years, based on the number of visits and downloads. From January 1 through February 28, IRS’s Web site was visited 67 million times by visitors who downloaded 56 million forms and publications. The number of visits reflects a 7 percent increase over the same period last year, while the number of forms and publications downloaded has increased by 25 percent. Further, IRS’s Web site is performing well. For example, we found IRS’s Web site to be readily accessible and easy to navigate, and an independent weekly study by Keynote, a company that evaluates Web sites, reported that IRS’s Web site has repeatedly ranked second out of 40 government agencies evaluated in terms of average download time.
The same study also reported that IRS has repeatedly ranked first among the most commonly accessed government-related Web sites for response time and success rate, and the American Customer Satisfaction Index reported that overall customer satisfaction with IRS’s Web site increased from 68 to 72 percent after IRS reconfigured the site. IRS reconfigured its Web site for the 2006 filing season. According to IRS officials, the goal for reconfiguring the Web site was to improve overall customer service through easier navigation and a more effective search function. As a result, the number of Web site searches has decreased by 53 percent, from 76 million during the same period last year to 36 million this year. Typically, search functions are used when users fail to find information through links. According to IRS officials, the decrease in the number of searches indicates that users are finding the information that they need faster. IRS also added the following new features to its Web site this year: Electronic IRS, a brand under which IRS reconfigured its Web site to make items easier to locate, as evidenced by the decline in searches; the Alternative Minimum Tax (AMT) Assistant, which helps taxpayers determine whether they owe the AMT; and Help for Hurricane Victims, a special link that provides victims of the recent hurricanes information on special tax relief and assistance and on how to get help with tax matters. IRS’s Web site continues to include several important features in addition to the Free File program: Where’s My Refund, which allows taxpayers to check on the status of their refunds (as of March 20, 2006, 19.8 million taxpayers had used the feature, a 21 percent increase from the same period last year), and Electronic Tax Law Assistance, through which taxpayers can ask IRS general tax law questions via its Web site.
From January 1 through March 20, 2006, IRS received 7,353 emails requesting tax law assistance (down over 32 percent compared to last year). As of February 28, 2006, IRS estimated the accuracy rate of its responses to tax law questions submitted via the Web site to be 85 percent, down from 88 percent in 2005. However, the average number of days that it took IRS to respond to tax law questions submitted via the Web site improved to 2.4 days, compared to 4 days in 2005. Fewer taxpayers have used IRS’s 400 walk-in sites so far in the 2006 filing season compared to the same period in prior years. Staff at walk-in sites provide taxpayers with information about their tax accounts and answer taxpayers’ questions within a limited scope of designated tax law topics, such as those related to income, filing status, exemptions, deductions, and related credits. Walk-in site staff also provide need-based tax return preparation assistance, limited to taxpayers meeting certain requirements. As of March 11, 2006, the total number of contacts at IRS’s walk-in sites had declined by approximately 12 percent compared to last year. The decline thus far this year is consistent with the annual trends in walk-in use shown in figure 2, including IRS’s projection for 2006. The declines in the number of taxpayers using IRS’s walk-in sites, including for tax return preparation, are also consistent with IRS’s strategy to reduce its costly face-to-face assistance by providing taxpayers with additional options, such as IRS’s toll-free telephone service, Web site, and numerous volunteer sites. It is unclear, however, whether the declining volume is an indicator of how well IRS is meeting taxpayers’ demand for face-to-face assistance. For example, IRS does keep track of the number of taxpayers who enter a walk-in site and take a number to queue for service but then leave the site without receiving service; however, if a taxpayer did not take a number, IRS would have no way of counting that taxpayer.
IRS officials said the types of services offered at walk-in sites remained constant for most sites from 2005 to 2006. For sites in areas with a high number of natural disaster victims, IRS expanded the types of assistance provided. For example, IRS eliminated income limits for taxpayers seeking return preparation assistance. In contrast to IRS walk-in sites, the number of taxpayers seeking return preparation assistance at approximately 14,000 volunteer sites has increased this year by 5.6 percent, continuing the trend since 2001 (see fig. 2). These sites, often run by community-based organizations and staffed by volunteers who are trained and certified by IRS, do not offer the range of services IRS provides at walk-in sites, but instead focus on preparing tax returns primarily for low-income and elderly taxpayers and operate chiefly during the filing season. As we have previously reported, the shift of taxpayers from walk-in to volunteer sites is important because it has allowed IRS to transfer time-consuming services, such as return preparation, to other, less costly alternatives that can be more convenient for taxpayers. IRS has used both walk-in and volunteer sites to provide relief efforts for federally designated disaster zones, such as in hurricane-affected areas. IRS developed a Disaster Referral Services Guide and new training materials for employees to better equip them to address disaster-related issues. Also, IRS adjusted the type of tax law questions that it would answer at walk-in sites to include casualty loss and removed income limitations for disaster victims seeking return preparation assistance at walk-in sites. Volunteer sites performed outreach within their network of partners by creating training material for tax practitioners and by agreeing with two organizations to accept referrals from IRS of disaster victims needing tax return preparation assistance.
Concerning the quality of services provided at walk-in and volunteer sites, IRS continues to lack reliable and comprehensive data. As in previous years, TIGTA is conducting an audit of the accuracy of some services provided at walk-in sites, although the results will not be available until after the filing season. However, TIGTA has noted problems with the quality of services provided at IRS walk-in sites in prior reports. We have made recommendations for IRS to improve its quality measurement at walk-in sites. At volunteer sites, IRS is conducting different types of reviews to monitor tax return preparation assistance. According to IRS officials, the results to date show that the quality of service has improved at volunteer sites compared to previous years, but they acknowledge that challenges remain in terms of volunteers’ adherence to IRS’s procedures and use of IRS materials. As in previous years, TIGTA will conduct limited quality reviews at volunteer sites. While the results of those reviews are based on a judgmental sample, TIGTA has concluded in the past that, while significant improvements have been made in the oversight of volunteer sites, continued effort is needed to ensure the accuracy of tax return assistance provided. IRS’s fiscal year 2007 budget request is a small decrease compared to 2006 enacted levels after adjusting for expected inflation. It proposes to reduce overall staffing levels, as well as staffing levels for taxpayer service and enforcement activities, while maintaining or improving taxpayer service and enforcement. As it has in prior years, IRS has identified some savings, but additional opportunities exist for enhancing savings. IRS’s proposed fiscal year 2007 budget is $11 billion (a 1.6 percent increase), but after adjusting for expected inflation, it reflects a slight decrease from last year’s enacted budget.
The $11 billion includes $417 million from new and existing user fees and reimbursable agreements with other federal agencies. The 2007 budget request for IRS’s appropriation accounts is shown in table 2 (see app. II for more details). The real decrease in the proposed budget can be seen in staffing. IRS proposes to fund 95,476 full-time equivalents (FTEs) in fiscal year 2007, down over 2 percent from 97,754 FTEs enacted for fiscal year 2006 (see table 5 in app. II for comparisons of enacted FTE levels for fiscal years 2002 through 2007). Actual FTEs tend to be lower than enacted FTEs, in part because of how IRS absorbs unbudgeted costs (see table 6 in app. II for actual FTEs). The decrease in FTEs may be greater than shown in IRS’s fiscal year 2007 budget request. Every year, agencies, including IRS, are expected to absorb some costs that are not included in their budget requests. For fiscal year 2007, IRS officials currently anticipate having to absorb over $117 million in costs, including about $41 million for homeland security-related controls over physical access to government facilities. Absorbing such costs reduces the actual number of FTEs that IRS can support. For example, for fiscal year 2005, the enacted level of FTEs was 96,435 but the actual level was 94,282. IRS is requesting $4.2 billion for Processing, Assistance, and Management (PAM), including some user fees; this funding is spent primarily on providing service to taxpayers. The amount requested is about a 1.6 percent increase over fiscal year 2006 enacted levels, but is a slight decrease after adjusting for expected inflation. This funding level translates into reduced staffing, down over 4 percent from an enacted level of 38,796 FTEs in fiscal year 2006 to 37,126 proposed FTEs in fiscal year 2007. Since fiscal year 2002, FTEs devoted to PAM have declined over 15 percent from an enacted level of 43,866 FTEs. Despite the proposed inflation-adjusted decrease in funding in 2007, IRS is planning to maintain or improve taxpayer services.
For every one of the major taxpayer services listed in the budget, 2007 planned performance goals are higher than or equal to 2006 performance goals. These services include telephone assistance and refund issuance. IRS is requesting $4.8 billion for Tax Law Enforcement (TLE). The 2007 budget request proposes an overall decrease in enforcement FTEs, down over 2 percent to a proposed 49,479 FTEs from last year’s enacted level of 50,559 FTEs. For its three main categories of skilled enforcement staff, IRS is proposing a marginal increase in staffing of 0.2 percent (see fig. 3). For special agents (those who perform criminal investigations), the increase is 1.7 percent. For the other two categories—revenue agents (those who examine complex returns) and revenue officers (those who perform field collection work)—IRS is proposing to keep the number of staff the same as in 2006. Despite keeping skilled enforcement staff virtually unchanged, IRS is proposing to maintain or increase its major enforcement activities. For all the major enforcement activities listed in the budget, IRS is establishing goals in 2007 that are higher than or equal to 2006 planned performance goals. Major enforcement activities include individual taxpayer examinations, collection coverage, and criminal investigations completed. IRS officials anticipate increased revenue collected and other performance improvements as a result of using data from IRS’s most current compliance research effort, known as the National Research Program (NRP). IRS is requesting about $1.6 billion for Information Systems (IS) in fiscal year 2007, which is intended to fund information technology (IT) staff and related costs for activities such as information security and maintenance and operations of its current tax administration systems. Although the number of FTEs proposed in 2007 is up when enacted FTEs are considered, it is virtually the same as the operating level currently assumed in 2006 (see app. II for more details).
In 2002, we reported that the agency did not develop its fiscal year 2003 IS operations and maintenance budget request in accordance with the investment management approach used by leading organizations. We recommended that IRS prepare its future budget requests in accordance with these best practices. To address our recommendation, IRS agreed to take a variety of actions, which it has made progress in implementing. For example, IRS planned to develop a capital planning guide to implement processes for capital planning and investment control, budget formulation and execution, business case development, and project prioritization. In August 2005, IRS issued the initial version of its IT Capital Planning and Investment Control (CPIC) Process Guide, which (1) provides executives with the framework within which to select, control, evaluate, and maintain the portfolio of IT investments to best meet IRS business goals and (2) defines the governance process that integrates the agency’s IT investments with the strategic planning, budgeting, and procurement processes. According to IRS officials and documentation, the agency formulated its prioritized fiscal year 2007 IT portfolio and associated budget request, including operations and maintenance requirements, in accordance with this CPIC Process Guide. We will continue to monitor the implementation of IRS’s CPIC process as its IT investment management process matures. In addition, IRS stated that it planned to develop an activity-based cost model to plan, project, and report costs for business tasks/activities funded by the IS budget. During fiscal year 2005, as part of the first release of the Integrated Financial System (IFS), IRS implemented a cost module that is potentially capable of allocating costs by activity. However, agency officials stated that they needed to accumulate 3 years of actual costs to have the historical cost data necessary to provide a basis for meaningful future budget estimates. 
Since then, according to the Office of the Chief Financial Officer, IRS has (1) populated the cost module with all actual fiscal year 2005 expenses; (2) identified the data needed from IFS to support its budget requests; and (3) developed a system to capture, test, and analyze the cost data to devise a standard methodology for providing the necessary data from the cost module. Once the results and recommendations of this pilot effort have been reviewed, an implementation plan will be developed. IRS still expects to have the requisite 3 years of historical cost data available in time to support development of the fiscal year 2010 budget request. Although IRS has made progress in implementing best practices in developing its IS operations and maintenance budget, until IRS completes the actions necessary to fully implement the activity-based cost module, the agency will not be able to ensure that its request is adequately supported. Business Systems Modernization (BSM) is a high-risk, highly complex effort that involves developing and delivering a new set of information systems that are intended to replace the agency’s aging tax processing and business systems. The program is critical to supporting IRS’s taxpayer service and enforcement goals. For example, BSM includes projects to allow taxpayers to file and retrieve information electronically and to provide technology solutions to help reduce the backlog of collections cases. It also helps IRS considerably in providing the reliable and timely financial management information needed to account for the nation’s largest revenue stream and better enable the agency to both determine and justify its resource allocation decisions and budget requests. IRS’s fiscal year 2007 budget request of $167.3 million for the BSM program reflects a reduction of about 15 percent (and even greater when adjusted for expected inflation), or about $30 million, from the enacted fiscal year 2006 budget of $197 million.
Since our testimony before this subcommittee on last year’s budget request, IRS has made further progress in implementing BSM, although some key projects did not meet short-term cost and schedule commitments. During 2005 and the beginning of 2006, IRS deployed additional releases of several modernized systems that have delivered benefits to taxpayers and the agency, including CADE, e-Services (a new Web portal and electronic services for tax practitioners), and Modernized e-File (a new electronic filing system). While three BSM project releases were delivered within the cost and/or schedule commitments presented in the fiscal year 2005 expenditure plan, others experienced cost increases or schedule delays. For example, one IFS release and one Modernized e-File release experienced cost increases of 93 percent and 29 percent, respectively. As we have previously reported, the BSM program has had a history of cost increases and schedule delays that have been due, at least in part, to deficiencies in various management controls and capabilities that have not yet been fully corrected. IRS is in the process of implementing our prior recommendations to correct these deficiencies. IRS has identified significant risks and issues that confront future planned system deliveries. For example, according to IRS, schedule delays and contention for key resources between multiple releases of CADE necessitated the deferral of some functionality. The deferral of these requirements may negatively impact the cost and schedule of two important releases, which are planned to be deployed later this year. The agency, however, recognizes the potential impact of these project risks on its ability to deliver planned functionality within cost and schedule estimates and, to its credit, has developed mitigation strategies to address them.
IRS has also made additional progress in addressing high-priority BSM program improvement initiatives during the past year, including initiatives related to shifting the role of systems integrator from the prime contractor to IRS. IRS’s program improvement process appears to be an effective means of assessing, prioritizing, and addressing BSM issues and challenges. However, much more work remains for the agency to fully address these issues and challenges. In addition, in response to our prior recommendation, IRS is developing a new Modernization Vision and Strategy to address BSM program changes and provide a modernization roadmap. According to the Associate Chief Information Officer for BSM, the agency’s new strategy focuses on promoting investments that provide value in smaller, incremental releases that are delivered more frequently, with the goal of increasing business value. IRS is currently finalizing a high-level vision and strategy as well as a more detailed 5-year plan for the BSM program. We believe these actions represent sound steps toward addressing our prior recommendation to fully revisit the vision and strategy and develop a new set of long-term goals, strategies, and plans consistent with the budgetary outlook and with IRS’s management capabilities. While the requested fiscal year 2007 BSM budget will allow IRS to continue the development and deployment of the CADE, Modernized e-File, and Filing and Payment Compliance (F&PC) projects, the proposed reduced funding level would likely affect the agency’s ability to deliver the functionality planned for the fiscal year and could result in project delays and/or scope reductions. This could, in turn, impact the long-term pace and cost of modernizing tax systems and of ultimately improving taxpayer service and strengthening enforcement.
For example, according to IRS documents, the agency had planned to spend $85 million in fiscal year 2007 to develop and deploy additional CADE releases that would enable the system to process up to 50 million individual tax returns by the 2008 filing season and issue associated refunds faster. However, with a proposed budget of $58.5 million—over 30 percent less than anticipated— IRS would likely have to scale back its planned near-term work on this project. In addition, the reductions to the planned budgets for the Modernized e-File and F&PC projects may also result in IRS having to redefine the scope and/or reassess schedule commitments for future project releases. The proposed BSM budget reduction would also significantly reduce the amount allotted to program management reserve by about 82 percent (from $13 million in fiscal year 2006 to $2.3 million in fiscal year 2007). If BSM projects have future cost overruns that cannot be covered by the depleted reserve, this reduction could result in increased budget requests in future years or delays in planned future activities. While the BSM program still faces challenges, IRS has recently made progress in delivering benefits and addressing project and program-level risks and issues. Reducing BSM funds at a time when benefits to taxpayers and the agency are being delivered could adversely impact the momentum gained from recent progress and result in delays in the delivery of future benefits. However, until IRS addresses our prior recommendation by clearly defining its future goals for the BSM program as well as the impact of various funding scenarios on meeting these goals in its new Modernization Vision and Strategy, the long-term impact of the proposed budget reduction is unclear. In its 2007 budget request, IRS identified savings as it has done in prior years and plans to redirect some of those savings to front-line taxpayer service and enforcement activities. 
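The budget-reduction percentages quoted in the BSM discussion above can be verified with simple arithmetic. The sketch below is not part of the report; it simply recomputes the reductions from the dollar figures given in the text (all amounts in millions).

```python
# Sketch (not from the report): recompute the BSM budget-reduction
# percentages from the dollar figures cited in the text.

def pct_cut(enacted: float, requested: float) -> float:
    """Percentage reduction from an enacted level to a requested level."""
    return (enacted - requested) / enacted * 100

# Overall BSM request: $197 million enacted (FY2006) vs. $167.3 million requested (FY2007)
bsm_cut = pct_cut(197.0, 167.3)    # ~15.1 percent, i.e., "about 15 percent, or about $30 million"

# Program management reserve: $13 million vs. $2.3 million
reserve_cut = pct_cut(13.0, 2.3)   # ~82.3 percent, i.e., "about 82 percent"

# CADE: $85 million planned vs. $58.5 million proposed
cade_cut = pct_cut(85.0, 58.5)     # ~31.2 percent, i.e., "over 30 percent less than anticipated"

print(round(bsm_cut, 1), round(reserve_cut, 1), round(cade_cut, 1))
```

Each recomputed figure matches the rounded percentage stated in the testimony.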
IRS is proposing to save over $121 million and 1,424 FTEs by, for example, automating the process of providing an individual taxpayer identification number to those taxpayers ineligible for a Social Security number and improving data collection techniques and work processes for enforcement activities through increased financial reporting requirements and scanning and imaging techniques. IRS’s history of realizing savings proposed in past budget requests provides some confidence that the agency will be able to achieve savings in fiscal year 2007. For example, IRS reported it realized 88 percent of the anticipated dollar savings and 86 percent of the anticipated staff savings identified in the fiscal year 2004 budget request. IRS also reported exceeding the savings targets in the fiscal year 2005 budget request (see app. III). In addition to the areas identified by IRS in its budget request, there may be additional opportunities for efficiency gains. Increasing electronic filing: In an era of tight budgets, continued growth in electronic filing may be necessary to help fund future performance improvements. One proposal for continuing to increase electronic filing is additional use of electronic filing mandates. Currently, IRS mandates electronic filing for large corporations. The 2007 budget request proposes a legislative change that would expand its authority to require electronic filing for businesses. Moreover, 12 states now mandate electronic filing for certain classes of tax practitioners (see app. IV for more information on state mandates). As we have reported, although there are costs and burdens likely to be associated with electronic filing mandates for paid tax preparers and taxpayers, state mandates have generated significant increases in electronic filing. IRS has an electronic filing strategy, which the agency is updating. 
Changing the menu of taxpayer services: IRS currently lacks a comprehensive strategy explaining how its various taxpayer services (including its telephone, walk-in, volunteer, and Web site assistance) will collectively meet taxpayer needs. In response to a congressional directive, IRS is developing such a strategy. The strategy is important because some taxpayers may not be well served by the current service offerings. IRS’s attempts to reduce some taxpayer services, namely reducing the hours of telephone operations and closing some walk-in sites, have met with resistance from the Congress. Although congressional directives to study the impact of IRS’s actions exist, we still believe there may be opportunities to adjust IRS’s menu of services to reduce costs without affecting IRS’s ability to meet taxpayers’ needs. Consolidating telephone call sites: IRS operates 25 call sites throughout the country. Consistent with earlier plans, IRS closed two of its smallest call sites—Chicago and Houston—in March 2006 to realize savings in its toll-free telephone operations. Also, IRS has gained efficiencies from using a centralized call router located in Atlanta. As a result, there are currently more than 850 workstations that are not being used; consequently, IRS may have the potential to close several additional call sites. Consolidations would not affect telephone service and would be invisible from the taxpayer’s perspective. Managing a federal agency as large and complex as IRS requires managers to constantly weigh the relative costs and benefits of different approaches to achieving the goals mandated by the Congress. Management is constantly called upon to make important long-term strategic as well as daily operational decisions about how to make the most effective use of the limited resources at its disposal. As constraints on available resources increase, these decisions become correspondingly more challenging and important.
In order to rise to this challenge, management needs to have current and accurate information upon which to base its decisions, and to enable it to monitor the effectiveness of actions taken over time so that appropriate adjustments can be made as conditions change. In its ongoing effort to make such increasingly difficult resource allocation decisions and defend those decisions before the Congress, IRS has long been hampered by a lack of current and accurate information concerning the costs of the various options being considered. Instead, management often has relied on a combination of the limited existing cost information; the results of special analysis initiated to establish the full cost of a specific, narrowly defined task or item; and estimates based on the best judgment of experienced staff. This has impaired IRS’s ability to properly decide which, if any, of the options at hand are worth the cost relative to the expected benefits. For example, accurate and timely cost information may help IRS consider changes in the menu of taxpayer services that it provides by identifying and assessing the relative costs, benefits, and risks of specific projects. Without reliable cost information, IRS’s ability to make such difficult choices in an informed manner is seriously impaired. The lack of reliable cost information also means that IRS cannot prepare cost-based performance measures to assist in measuring the effectiveness of its programs over time. Further, IRS does not have the capability to develop reliable information on the return on investment for each category of taxpayer service and enforcement. IRS lacks reliable information on both the return from services (the additional revenue collected by helping taxpayers understand their tax obligations) and the investment or cost of the services. While developing return on investment information is difficult, the cost component of that equation may be the least complex to develop. 
Having such cost information is a building block for developing return on investment estimates. For its enforcement programs, IRS has developed a rough measure of return on investment in terms of tax revenue that is directly assessed from uncovering noncompliance. Continuing to develop return on investment measures could help officials make more informed decisions about allocating resources. The new NRP data, for example, are to be used to better identify which tax returns to examine so that fewer compliant taxpayers are burdened by unnecessary audits and IRS can increase the amount of noncompliance that is addressed through its enforcement activities. Even without return on investment information, cost information can help IRS determine if, for example, IRS should change the menu of services provided. As discussed in the BSM section, in fiscal year 2005, IRS implemented a cost accounting module as part of IFS. However, while this module has much potential and has begun accumulating cost information, IRS has not yet determined what the full range of its cost information needs are or how best to tailor the capabilities of this module to serve those needs. Also, IRS does not have an integrated workload management system which would provide the cost module with detailed allocation of personnel cost information. In addition, as noted in developing its IS budget, because it generally takes several years of historical cost information to support meaningful estimates and projections, IRS cannot yet rely on IFS as a significant planning tool. It will likely require several years, implementation of additional components of IFS, and integration of IFS with IRS’s tax administration activities before the full potential of IFS’s cost accounting module will be realized. Furthermore, IRS’s fiscal year 2007 BSM budget request does not include funding for additional releases of IFS. 
In the interim, IRS decision making will continue to be hampered by inadequate underlying cost information. For the first time, IRS's budget request sets long-term goals aimed at reducing the tax gap, although IRS does not have a data-based plan for achieving the goals. However, because the tax gap has persisted, reducing it will require solutions that go beyond funding and staffing for IRS. IRS established two agencywide, long-term performance goals, as shown in table 3. IRS plans to improve voluntary compliance from 83 percent in 2005 to 85 percent by 2009, and reduce the number of taxpayers who think it is acceptable to cheat on their taxes from 10 percent in 2005 to less than 9 percent in 2010. According to IRS, these are the first in a series of quantitative goals that will link to its three strategic goals—improve taxpayer service, enhance tax law enforcement, and modernize IRS through technology and processes. These goals will be challenging to meet because, for three decades, IRS has consistently reported a persistent, relatively stable tax gap. Although IRS has made a number of changes in its methodologies for measuring the tax gap, which makes comparisons difficult, the voluntary compliance rate that underpins the gap has tended to range from around 81 percent to around 84 percent regardless of the methodology used. Because it lacks quantitative estimates of how changes to its service and enforcement programs affect compliance, IRS is unable to show in a data-based plan how it will use those programs to reach the two long-term goals shown in table 3. If IRS could quantify the impact of its service and enforcement programs on the compliance rate or on attitudes towards cheating, it could use that information to show the kinds of changes to the programs needed to achieve the long-term goals and how best to direct resources towards achieving those goals.
Unfortunately, quantifying the impact of IRS's service and enforcement programs on compliance or cheating is very challenging. The type of data needed to make such a link does not currently exist and may not be easy to collect. Lacking such quantitative estimates, IRS must take a more qualitative approach in its plans for increasing compliance, which would likely also involve changing attitudes towards cheating. IRS's overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. We recently reported that IRS has taken a number of steps that may improve its ability to reduce the tax gap. Favorable trends in staffing of IRS enforcement personnel; in examinations performed through correspondence, as opposed to more complex face-to-face examinations; and in the use of some enforcement sanctions, such as liens and levies, are encouraging. Also, IRS has made progress with respect to abusive tax shelters through a number of initiatives and recent settlement offers that have resulted in billions of dollars in collected taxes, interest, and penalties. Finally, IRS has continually improved taxpayer service by increasing, for example, the accuracy of responses to tax law questions. However, IRS has not quantified the effect that this overall approach and the 2007 budget proposal will have on voluntary compliance. Therefore, the Congress will have to rely on the IRS Commissioner for qualitative explanations of why, in his judgment, IRS's mix of taxpayer service and enforcement and its overall approach for reducing the tax gap, including the 2007 budget proposal, will be sufficient to start IRS on a path towards achieving its long-term goals. More specifically, such explanations could include a clear statement of which service and enforcement programs have priority for expansion because they are expected to contribute the most to increasing the compliance rate, along with the evidence that supports that judgment.
In addition, IRS lacks a plan for measuring progress towards one goal—improving voluntary compliance. IRS plans to measure progress towards the second goal—reducing the percentage of taxpayers who think it is acceptable to cheat—via the IRS Oversight Board's annual Taxpayer Attitude Survey. IRS did recently estimate voluntary compliance as part of the NRP study, which reviewed the compliance of a random sample of individual taxpayers and used those results to estimate compliance for the population of all taxpayers. The study took several years to plan and execute. In addition to providing an estimate of the compliance rate, the study's results will be used to better target IRS's audits of potentially noncompliant taxpayers. Better targeting reduces the burden on taxpayers because IRS is better able to avoid auditing compliant taxpayers. At this time, however, IRS has not made plans to repeat the study in time to measure compliance by 2009. Furthermore, doing compliance studies once every few years does not give IRS or others information about what is happening in the intervening years. Estimating the compliance rate annually could provide information that would enable IRS management to adjust plans as necessary to help achieve the goal in 2009. One option that would not increase the cost of estimating compliance would be to use a rolling sample. IRS Oversight Board officials and we agree that instead of sampling, for example, once every 5 years, one-fifth of the sample could be collected every year. The total sample would include 5 years' worth of data—with each passing year, the oldest year would be dropped from the sample and the latest year added. The availability of current research data would allow IRS to more effectively focus its service and compliance efforts.
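The rolling-sample design described above is straightforward to sketch in code. The example below is purely illustrative: the 5-year window and the toy case identifiers are assumptions for demonstration, not actual NRP parameters.

```python
from collections import deque

def update_rolling_sample(window, new_year_cases, window_years=5):
    """Add the newest annual subsample and drop any years beyond the
    window, so the pooled sample always spans the latest window_years."""
    window.append(new_year_cases)
    while len(window) > window_years:
        window.popleft()  # the oldest year's cases fall out of the pool
    # Flatten the retained annual subsamples into one pooled sample.
    return [case for year_cases in window for case in year_cases]

# Illustrative use: one-fifth of the total sample collected each year.
window = deque()
for year in range(2005, 2012):  # 7 survey years of toy data
    annual_subsample = [f"{year}-case-{i}" for i in range(3)]
    pooled = update_rolling_sample(window, annual_subsample)

# After the 2011 collection, the pooled sample spans only 2007-2011:
# the estimate is refreshed every year without enlarging the sample.
```

The key property, as the testimony notes, is that total data-collection cost is unchanged relative to a once-every-5-years study, while an updated estimate becomes available each year.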
For years, we have reported that tax law enforcement is a high-risk area, in part because of the size of the gross estimated tax gap, which IRS most recently estimated to be $345 billion for tax year 2001. IRS estimated it would recover around $55 billion through late payments and enforcement revenue, resulting in a net tax gap of around $290 billion. Reducing the tax gap would yield significant revenue; even modest progress, such as a 1 percent reduction, would likely yield nearly $3 billion annually. In recent years, IRS reported increases in enforcement revenue—revenue brought in as a result of IRS taking enforcement action. Between fiscal years 2003 and 2005, IRS reported that enforcement revenue grew from $37.6 billion to $47.3 billion, with a level of $48.1 billion estimated for 2006. However, the voluntary compliance rate has persisted at a relatively stable level. We have reported that significant reductions in the tax gap will likely require exploring new and innovative solutions. Such solutions may not require significant additional IRS resources but are nonetheless difficult to achieve; they include simplifying the tax code to make it easier for individuals and businesses to understand and comply with their tax obligations, increasing tax withholding for income currently not subject to withholding, improving information reporting, and leveraging technology to improve IRS's capacity to receive and process tax returns. IRS's 2007 budget request includes five new legislative proposals to address some of these solutions to reduce the tax gap, along with a proposal to study independent contractor compliance that would not require additional resources. In recent testimony, the IRS Commissioner stated that the amount of enforcement revenue IRS expects from the legislative proposals will be $3.6 billion over the next 10 years (about 0.1 percent of the tax gap).
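As a rough cross-check, the tax gap figures cited above are arithmetically consistent; the short calculation below simply restates them (all dollar amounts are from the testimony, in billions).

```python
# Tax gap figures as cited in this testimony, in billions of dollars.
gross_tax_gap = 345  # IRS gross estimate for tax year 2001
recovered = 55       # expected late payments and enforcement revenue

# Net tax gap = gross gap minus amounts IRS expects to recover.
net_tax_gap = gross_tax_gap - recovered  # 290

# A 1 percent reduction of the net gap, per year.
one_percent_reduction = net_tax_gap / 100  # 2.9, i.e., "nearly $3 billion"
```

The Commissioner's figure for the legislative proposals ($3.6 billion over 10 years, or roughly $0.36 billion per year) is indeed on the order of 0.1 percent of the $345 billion gross gap, underscoring how modest those proposals are relative to the problem.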
However, the proposals should also increase revenue voluntarily paid without any IRS enforcement actions. The amount of that revenue is uncertain. The IRS Commissioner recognizes the implications of the tax gap and states in the budget that addressing it is a top priority. Although IRS’s 2007 budget request does not propose allocating IRS resources to new initiatives to reduce the tax gap, according to IRS officials, they plan to continue initiatives identified in prior budgets. For example, IRS has two ongoing BSM projects—F&PC and Modernized e-File—which, according to IRS’s Associate Chief Information Officer for BSM, could help reduce the tax gap. F&PC is expected to increase IRS’s capacity to resolve the growing backlog of delinquent taxpayer cases and increase collections, while Modernized e-File is expected to help make it easier for IRS to process tax returns, look for irregularities, and track down unpaid taxes. The budget request states that the administration will study the standards used to distinguish between employees and independent contractors for purposes of paying and withholding income taxes. We have long supported efforts aimed at improving independent contractor compliance. Past IRS data have shown that independent contractors report 97 percent of the income that is reported on information returns to IRS, while contractors that do not receive these information returns report only 83 percent of income. We have also identified other options for improving information reporting by independent contractors, including increasing penalties for failing to file required information returns, lowering the $600 threshold for requiring such returns, and requiring businesses to separately report on their tax returns the total amount of payments to independent contractors. 
We previously reported that clarifying the definition of independent contractors and extending reporting requirements for those contractors could possibly increase tax revenue by billions of dollars. Two of the legislative proposals call for more information reporting on payment card transactions from certain businesses and on payments by federal, state, and local governments to businesses. Information reporting has been shown to significantly reduce noncompliance. Although information reporting is highly effective in encouraging compliance, such reporting imposes costs and burdens on the businesses that implement it. However, information reporting is a way to significantly increase voluntary compliance without increasing IRS's budget. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact James R. White, Director, Strategic Issues, at (202) 512-9110 or [email protected], or David A. Powner, Director, Information Technology Management Issues, at (202) 512-9296 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Joanna Stamatiades, Assistant Director; Amanda Arhontas; Paula Braun; Terry Draver; Paul Foderaro; Chuck Fox; Tim Hopkins; Kathryn Horan; Hillary Loeffler; Sabine Paul; Cheryl Peterson; Neil Pinney; Steve Sebastian; and Tina Younger. In 2002, the Internal Revenue Service (IRS) entered into a 3-year agreement with the Free File Alliance, a consortium of 20 tax preparation companies, to provide free electronic filing to taxpayers who access any of the companies via a link on IRS's Web site.
The 2002 Free File Agreement stated that, as part of the agreement, IRS would not compete with the Consortium in providing free, online tax return preparation and filing services to taxpayers. IRS and the Consortium amended the agreement in 2005. Key differences between the two agreements are a new income limitation of $50,000 and new language in the amendment stating that Alliance members must disclose early on if state tax return services are available and, if so, whether a fee will be charged for such services, and must provide the necessary support to accomplish a customer satisfaction survey. The amendment also added language pertaining to the marketing and offering of Refund Anticipation Loans (RALs), whereby no offer of free return preparation and filing of an electronic return in the Free File program shall be conditioned on the purchase of a RAL; and RALs will be offered with clear language indicating, for example, that RALs are loans, not a faster way of receiving an IRS refund; must be repaid even if the IRS does not issue a full refund; are short-term loans whose interest rates may be higher, and customers may wish to consider using other forms of credit; and may be offered but not promoted. IRS tests each Consortium member's software to ensure it is in accordance with the Free File provisions, including those cited previously, before allowing a link to IRS's Web site. In addition, IRS officials monitor complaints about the Free File program received via IRS.gov, including allegations regarding false, deceptive, or misleading information or advertising. While IRS does not track the number of complaints it receives, according to IRS officials, most of the complaints received thus far were a result of the taxpayer either not carefully reading or following instructions, or incorrectly entering information.
GAO conducted limited testing of the Free File program and found that the Consortium members were complying with the terms outlined in the amended Free File agreement pertaining to RALs. The amended Free File agreement contains provisions that enable IRS to monitor taxpayer participation beginning in the 2006 filing season, unlike prior years when Free File Alliance members self-reported filing figures. IRS also tracks the number of Free File users who accept any financial products, such as RALs. As of March 16, IRS reported that 163,000 Free File returns accepted financial products. This represents 5.6 percent of all returns filed through the Free File program. The number of taxpayers using Free File to electronically file their individual income tax returns has increased steadily, from 2.8 million in 2003, to 3.5 million in 2004, to 5.1 million in 2005. The substantial growth between 2004 and 2005 was due, in part, to several Consortium members offering free filing to all taxpayers through the Free File program regardless of their income in 2005. However, according to IRS officials, the lack of an income limitation created conflict among Consortium members, as it put pressure on all Alliance members to offer free service, which may not have been economically feasible for some, threatening competition if members were to drop out of the Alliance. IRS projected that 6.1 million taxpayers would use Free File in 2006. However, this projection may be optimistic because, between January 1 and March 19, IRS reported receiving only 2.9 million Free File returns compared to 3.8 million during the same period last year, a decline of 23 percent. According to IRS officials, contributing factors to this decline include decreased press attention and advertising by the participating companies, as well as the income limitation. The income limitation provides coverage to 70 percent of the nation's taxpayers, or more than 92 million people.
This coverage includes taxpayers with an adjusted gross income of $50,000 or less. For fiscal year 2007, the Internal Revenue Service (IRS) has requested $10.7 billion in its appropriation accounts. This request consists of $10.6 billion in direct appropriations and $135 million in revenue from new user fees, which IRS will commit to taxpayer service activities in its Processing, Assistance, and Management (PAM), Tax Law Enforcement (TLE), and Information Systems (IS) accounts. In addition, IRS projects that it will collect and use $282 million from existing user fees and reimbursable agreements with states and other federal agencies. This brings IRS's proposed fiscal year 2007 budget to approximately $11 billion (a 1.6 percent increase over fiscal year 2006). After adjusting for expected inflation, IRS's $11 billion budget request reflects a slight decrease from last year's enacted budget. IRS's enacted budgets for its appropriation accounts from fiscal years 2002 through 2007 are shown in table 4. IRS's enacted budget has increased almost 8 percent since fiscal year 2002. By far, the biggest percentage increase has been in TLE—almost 21 percent—reflecting the shift in resources from PAM to TLE during this period. The biggest percentage decrease was in the Business Systems Modernization (BSM) program, down almost 58 percent. Tables 5 and 6 show IRS's enacted and actual full-time equivalents (FTEs) for fiscal years 2002 through 2007. Overall, actual FTEs tend to be lower than enacted FTEs due in part to the way IRS funds its unbudgeted requirements. When both enacted and actual FTEs are considered, FTEs for PAM have steadily decreased and, for the most part, FTEs for TLE have increased since fiscal year 2002. However, steady trends are not apparent when comparing enacted and actual FTEs in IRS's IS account.
For example, when enacted FTEs are considered, IS staffing appears to fluctuate up and down from fiscal years 2002 through 2007; yet, when actual FTEs are considered, IS staffing decreased from fiscal years 2002 through 2005 and increased from fiscal years 2005 to 2006. IRS officials attribute these fluctuations in FTEs to reorganizations and other factors. Tables 5 and 6 also show significant differences in percentage changes between enacted and actual FTEs in some of IRS's appropriation accounts from fiscal years 2006 to 2007. The enacted level of FTEs is the number IRS projected it could support given the level of funding the Congress enacted. According to IRS officials, enacted levels tend to be overstated compared to actual FTEs for several reasons. First, IRS, like most federal agencies, does not always receive its budget when expected and cannot fill all positions. Also, as the costs of maintaining current FTE levels increase annually, IRS is not able to realize all of the FTEs it projects to fund with the appropriations the Congress enacts. In its fiscal year 2006 budget request, IRS showed its budget distributed by taxpayer services and enforcement, including IS funding for those areas, because the agency's current appropriation accounts are not divided clearly between taxpayer service and enforcement. As table 7 shows, funding for enforcement increased 15 percent between fiscal years 2004 and 2007 to $6.96 billion, while funding for taxpayer service declined over 3 percent to almost $3.6 billion. In its 2007 budget request, the Internal Revenue Service (IRS) is proposing to save over $121 million and 1,424 full-time equivalents (FTEs) and reinvest over $12 million and 11 FTEs. IRS's record of achieving prior year savings and reinvestments, shown in table 8, gives us a basis to believe that IRS will achieve most, if not all, of these savings.
For example, IRS reported it realized 88 percent of its anticipated budget savings and 86 percent of its anticipated staff savings for savings identified in its fiscal year 2004 budget request, and IRS reported exceeding savings targets in fiscal year 2005. Of the 50 states, 12 have electronic filing mandates for tax practitioners in effect for the 2006 filing season (see fig. 4). The mandates differ in their implementation dates and schedules, thresholds for filing, and penalties. The differences between mandates may affect the magnitude of electronic filing increases in each state. We recently reported that state mandates encourage electronic filing of federal tax returns and recommended that IRS develop better information about the costs to tax practitioners and taxpayers of mandatory electronic filing of tax returns for certain categories of tax practitioners. These mandates require tax practitioners who meet certain criteria, such as filing 100 or more individual state tax returns, to file individual state returns electronically. Between tax years 2001 and 2004, electronic filing in the 9 states with mandates grew from an average of 36.7 percent to 56.8 percent, an increase of over 20 percentage points, compared to an increase of 14 percentage points for the 41 states without mandates over the same time period. We expect this trend to continue, as 3 additional states—New York, Utah, and Connecticut—implemented mandates in time for the 2006 filing season. Of these 3 states, New York may have the most to gain because it currently has the lowest electronic filing rate, with fewer than 38 percent of its nearly 9 million federal individual income tax returns electronically filed last year. Tax Administration: IRS Improved Some Filing Season Services, but Long-Term Goals Would Help Manage Strategic Trade-offs, GAO-06-51 (Washington, D.C.: November 14, 2005).
Tax Administration: IRS Improved Performance in the 2004 Filing Season, but Better Data on the Quality of Some Services Are Needed, GAO-05-67 (Washington, D.C.: November 10, 2004). Tax Administration: IRS's 2003 Filing Season Performance Showed Improvements, GAO-04-84 (Washington, D.C.: October 31, 2003). Internal Revenue Service: Assessment of Fiscal Year 2006 Budget Request, GAO-05-566 (Washington, D.C.: April 27, 2005). Internal Revenue Service: Assessment of Fiscal Year 2006 Budget Request and Interim Results of the 2005 Filing Season, GAO-05-416T (Washington, D.C.: April 14, 2005). Internal Revenue Service: Assessment of Fiscal Year 2005 Budget Request and 2004 Filing Season Performance, GAO-04-560T (Washington, D.C.: March 30, 2004). Tax Gap: Making Significant Progress in Improving Tax Compliance Rests on Enhancing Current IRS Techniques and Adopting New Legislative Actions, GAO-06-453T (Washington, D.C.: February 15, 2006). Tax Gap: Multiple Strategies, Better Compliance Data, and Long-Term Goals Are Needed to Improve Taxpayer Compliance, GAO-06-208T (Washington, D.C.: October 26, 2005). Tax Compliance: Reducing the Tax Gap Can Contribute to Fiscal Sustainability but Will Require a Variety of Strategies, GAO-05-527T (Washington, D.C.: April 14, 2005). Taxpayer Information: Data Sharing and Analysis May Enhance Tax Compliance and Improve Immigration Eligibility Decisions, GAO-04-972T (Washington, D.C.: July 21, 2004). Compliance and Collection: Challenges for IRS in Reversing Trends and Implementing New Initiatives, GAO-03-732T (Washington, D.C.: May 7, 2003). Financial Audit: IRS's Fiscal Years 2005 and 2004 Financial Statements, GAO-06-137 (Washington, D.C.: November 10, 2005). Internal Revenue Service: Status of Recommendations from Financial Audits and Related Financial Management Reports, GAO-05-393 (Washington, D.C.: April 29, 2005). Financial Audit: IRS's Fiscal Years 2004 and 2003 Financial Statements, GAO-05-103 (Washington, D.C.: November 10, 2004).
Internal Revenue Service: Status of Recommendations from Financial Audits and Related Financial Management Reports, GAO-04-523 (Washington, D.C.: April 28, 2004). Financial Audit: IRS's Fiscal Years 2003 and 2002 Financial Statements, GAO-04-126 (Washington, D.C.: November 13, 2003). Business Systems Modernization: Internal Revenue Service's Fiscal Year 2006 Expenditure Plan, GAO-06-360 (Washington, D.C.: February 21, 2006). Business Systems Modernization: Internal Revenue Service's Fiscal Year 2005 Expenditure Plan, GAO-05-774 (Washington, D.C.: July 22, 2005). IRS Modernization: Continued Progress Requires Addressing Resource Management Challenges, GAO-05-707T (Washington, D.C.: May 19, 2005). Business Systems Modernization: IRS's Fiscal Year 2004 Expenditure Plan, GAO-05-46 (Washington, D.C.: November 17, 2004). Business Systems Modernization: Internal Revenue Service Needs to Further Strengthen Program Management, GAO-04-438T (Washington, D.C.: February 12, 2004). IRS Modernization: Continued Progress Necessary for Improving Service to Taxpayers and Ensuring Compliance, GAO-03-796T (Washington, D.C.: May 20, 2003). Tax Administration: IRS Can Improve Its Productivity Measures by Using Alternative Methods, GAO-05-671 (Washington, D.C.: July 7, 2005). 21st Century Challenges: Reexamining the Base of the Federal Government, GAO-05-325SP (Washington, D.C.: February 2005). High Risk Series: An Update, GAO-05-207 (Washington, D.C.: January 21, 2005). Internal Revenue Service: Challenges Remain in Combating Abusive Tax Schemes, GAO-04-50 (Washington, D.C.: November 19, 2003). Tax Administration: IRS Is Implementing the National Research Program as Planned, GAO-03-614 (Washington, D.C.: June 16, 2003). Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures, GAO-03-143 (Washington, D.C.: November 22, 2002). This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Internal Revenue Service's (IRS) filing season performance affects tens of millions of taxpayers who expect timely refunds and accurate answers to their tax questions. IRS's budget request is a planning tool showing how it intends to provide taxpayer service and enforce the tax laws in 2007. It is also the first in a series of annual steps that will determine whether IRS meets its new long-term goals of increasing tax compliance and reducing taxpayers' acceptance of cheating on their taxes. Tax law enforcement remains on GAO's list of high-risk federal programs, in part, because of the persistence of a large tax gap. IRS recently estimated the gross tax gap, the difference between what taxpayers owe and what they voluntarily pay, to be $345 billion for 2001. GAO assessed (1) IRS's interim 2006 filing season performance; (2) the budget request; and (3) how the budget helps IRS achieve its long-term goals. GAO compared performance and the requested budget to previous years. IRS has improved its filing season performance so far in 2006, continuing a trend. More refunds were directly deposited, which is faster and more convenient. Electronic filing continued to grow, but at a slower rate than in previous years. IRS's two most commonly used services--telephone and Web site assistance--continued to improve. IRS estimates that the accuracy rate for its telephone answers is now 90 percent or more. Taxpayers continued the recent pattern of using IRS's walk-in sites less and community based volunteer sites more. The 2007 budget request of $11 billion, a small decrease after adjusting for inflation, sets performance goals for service and enforcement that are all equal to or higher than the 2006 goals. The budget reduces funding by 15 percent for Business Systems Modernization, the ongoing effort to replace IRS's aging information systems. The reduction could impede progress delivering improvements to taxpayers. 
The budget request identifies over $121 million in savings; however, opportunities exist for further savings. For example, IRS officials told us that IRS's 25 call centers have underutilized space. Those centers could be consolidated without affecting service to taxpayers. Achieving IRS's long-term compliance goals will be challenging because the tax gap has persisted for many years at about its current level. In addition, because the effect of taxpayer service and enforcement on compliance has never been quantified, IRS does not have a data-based plan demonstrating how it will achieve its goals. Nor does IRS have a plan for measuring compliance by 2009, the date for achieving the goals. Reducing the tax gap will likely require new and innovative solutions such as simplifying the tax code, increasing income subject to withholding, and increasing information reporting about income.
The nation’s federal surface transportation investment policy has become increasingly complex, changing from a narrow focus on completing the nation’s interstate highway system to a broader emphasis on maintaining and more efficiently operating our highways, supporting mass transit, protecting the environment, and encouraging innovative technologies. With the interstate system largely completed in the 1980s—and continuing with the passage of ISTEA in 1991 and TEA-21 in 1998—the federal government has shifted its focus toward preserving and enhancing the capacity of the transportation system by supporting a large network of highway, mass transit, and other surface transportation programs and projects. The funding for transportation plans and projects comes from a variety of sources, including federal, state, and local governments; special taxing authorities and assessment districts; and user fees and tolls. While metropolitan areas receive transportation funds from state and local sources, the federal government also is a significant funding source, using revenues from federal highway tax receipts, supplemented by general fund revenues. ISTEA and TEA-21 continued the use of the federal Highway Trust Fund—which is divided into a Highway Account and Mass Transit Account—as the mechanism to account for federal highway user tax receipts that fund various surface transportation programs. The Federal Highway Administration (FHWA) distributes highway program funds to state transportation departments that, in turn, allocate the funds to urban and rural areas on the basis of local priorities and needs. The Federal Transit Administration (FTA) sends most urban transit funds directly to local transit operators, while state transportation departments administer rural transit funds. In some cases, Congress may designate specific transportation projects for funding. For example, TEA-21 allocated $9.4 billion over 6 years to 1,850 congressionally designated projects.
Finally, ISTEA and TEA-21 also allowed the use of certain federal highway program funds for either highway or transit projects, referred to as flexible funding. Key issues—such as traffic congestion, air pollution, land use and sprawl, the economic viability of neighborhoods and commercial areas, and facilitating national economic growth—are significantly affected by decisions about how federal transportation funds are spent. These decisions grow out of an overall transportation planning and decision-making process involving states, metropolitan planning organizations (MPOs), local governments, and other stakeholders. Federal laws and requirements specify an overall approach for transportation planning agencies to use in planning and deciding on projects. State, regional, and local government agencies must operate within these requirements to receive federal funds. The laws and requirements—which include ISTEA, TEA-21, and their associated regulations—set out requirements governing the way states and local governments plan and decide upon transportation projects. In particular, the requirements describe various planning tasks that states and MPOs must perform, including (1) involving a wide range of stakeholders in the process; (2) identifying overall goals and objectives and data to support transportation investment choices; (3) developing long- and short-range transportation programs and plans; (4) specifying financing for the transportation programs and projects; and (5) ensuring that the transportation planning and decision-making process reflects a variety of planning factors, such as environmental concerns. States and MPOs must consider a wide range of planning factors laid out in federal statutes and regulations. However, federal planning requirements generally do not provide specific guidance on how transportation planners should evaluate these factors.
ISTEA and TEA-21 provided stakeholders with greater control over transportation decisions in their own regions than they had in the past and recognized that multiple agencies were responsible for planning, operating, and maintaining the entire transportation system. For this reason, the laws established a planning process that emphasizes cooperation and coordination among transportation stakeholders in the investment decision-making process. To achieve this goal, both ISTEA and TEA-21 sought to strengthen planning practices and coordination between states and metropolitan areas and between the private and public sectors and to improve connections between different forms of transportation. To foster involvement by all interested parties, states and MPOs are expected to provide opportunities for notice and public involvement throughout the planning and project selection process. For stakeholders and other interested parties (see table 1), federal regulations require a formal public involvement process that includes reasonable access to technical and policy information used in developing transportation plans as well as adequate periods for public comment. State departments of transportation—working with transportation organizations, local governments, and the public—develop state transportation goals and plans. Local governments, such as cities and counties, and regional entities, such as MPOs, carry out additional transportation planning and project implementation functions, especially for highway projects. Transit agencies, in addition to operating transit services such as bus, subway, light rail, commuter railroad, and other forms of mass transit, also plan and implement capital projects. Other organizations, such as nonprofit, environmental, and community organizations, are involved in transportation decision-making through the public participation process.
Finally, private sector firms also may participate as advisors in the planning and decision-making process, especially when public decisions directly affect their interests. MPOs, which are regional transportation policy bodies composed of representatives from various governmental and other organizations, are key players in the coordination of transportation plans and projects. MPOs are designed to provide a setting for impartial transportation decision-making by facilitating evaluation of alternatives, development of long- and short-range planning documents, and public involvement. In particular, MPOs provide the forum for the various providers of transportation facilities to come together to develop a more comprehensive approach to meeting regional transportation needs. In addition, DOT oversees state and metropolitan transportation planning and provides advice and training on transportation issues. In initiating the transportation planning process, states and MPOs are expected to have a long-term vision that articulates broad goals for the future of the transportation systems in the state or region. DOT guidance states that in developing the long-term vision, states and regions are to consider several factors, including projected population growth and economic changes, current and future transportation needs, maintenance and operation of existing transportation facilities, preservation of the human and natural environment, and projected land uses. States and MPOs may also conduct investment and planning studies to identify major transportation corridors in the state or region. In deciding which proposed transportation projects meet the needs and reflect the long-range vision of the state or region, states and MPOs are required to establish a process for collecting and analyzing data to evaluate different transportation alternatives and for using the resulting information to establish priorities for improving the area’s transportation assets.
As part of this process, transportation planners may develop performance measures and transportation models to evaluate existing or proposed projects. Performance measures are important indicators of how well the transportation system is operating. Some examples of user-oriented performance measures are average trip travel time, length of delay, and reliability of trip making. Transportation models are simulations of the “real world” that can be used to show the impact of changes in a metropolitan area on the transportation system (such as addition of a new road or transit line or increases in population or employment). Specific types of transportation models are not required by federal planning regulations. After articulating a vision of overall transportation goals and considering alternative ways of reaching those goals, federal laws and regulations require that states and metropolitan areas document their decisions about future transportation needs and their selection of federally funded surface transportation projects through long-range transportation plans and short-range programs. A metropolitan long-range transportation plan identifies transportation needs for at least the next 20 years, but does not necessarily identify specific projects. It is expected to include a description of congestion management strategies as well as capital investments and other measures necessary to (1) ensure the preservation of the existing transportation system and (2) make the most efficient use of existing transportation facilities to relieve congestion and enhance the mobility of people and goods. A state long-range plan is expected to be developed in cooperation with MPOs in the state and to be intermodal and statewide in scope. (See fig. 1.)
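The user-oriented performance measures mentioned above (average trip travel time, length of delay, and reliability of trip making) can be computed directly from observed trip times. A minimal sketch in Python; the trip data, the free-flow baseline, and the 95th-percentile reliability index are all hypothetical choices, not measures prescribed by federal planning regulations:

```python
# Illustrative computation of user-oriented performance measures from
# observed trip times. All data and the free-flow baseline are hypothetical.

def percentile(values, p):
    """Percentile via linear interpolation between closest ranks."""
    s = sorted(values)
    k = (len(s) - 1) * p
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

trip_minutes = [22, 25, 24, 31, 28, 45, 26, 23, 38, 27]  # observed trips
free_flow = 20.0  # minutes for the same trip with no congestion

avg_time = sum(trip_minutes) / len(trip_minutes)
avg_delay = avg_time - free_flow          # average delay per trip
p95 = percentile(trip_minutes, 0.95)      # near-worst-case trip time
reliability = p95 / avg_time              # buffer-style reliability index

print(f"average travel time: {avg_time:.1f} min")
print(f"average delay:       {avg_delay:.1f} min")
print(f"95th pct / mean:     {reliability:.2f}")
```

A reliability index well above 1 indicates that travelers must budget substantially more time than the average trip requires, which is the kind of signal such measures are meant to surface.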
In contrast to the long-range plan, a short-range program covers a more limited time frame—usually about 3 years—and describes specific transportation projects or phases of an included project, including the scope and estimated costs of those projects. In a metropolitan short-range Transportation Improvement Program (TIP), MPOs are required to identify the criteria and process for prioritizing proposed transportation projects, including the extent to which comparisons among modes were considered. In addition, all surface transportation projects must be included in the metropolitan and state programs to receive federal funds. At the state level, each state DOT is expected to work cooperatively with its MPOs to develop a single State Transportation Improvement Program (STIP), which is an intermodal program of projects encompassing all the areas of the state. The STIP incorporates TIPs developed by the MPOs within the state, and a project in a metropolitan region must be included in the TIP before it may be included in the state program. Once adopted by the state, the STIP is concurrently submitted to FHWA and FTA for approval at least once every 2 years. In addition to approving the STIP, FHWA and FTA are also responsible for certifying that the state planning processes are conducted in accordance with all applicable federal requirements. Under federal requirements, states and MPOs must specify funding amounts and sources for transportation programs and projects. States and MPOs must consider funding needs for both new projects and the maintenance and operation of the existing transportation system. Financial planning is part of both the short- and long-range planning processes and includes identification of resources that are reasonably expected to be available. Projects in the TIP and STIP are specifically linked to funding sources, and additional strategies for securing funds are included in the plan.
While federal requirements specify that all MPOs have an analytical process in place to help prioritize and select projects, how projects originate and are selected for inclusion in transportation plans and programs may vary in different regions. In some instances, state DOTs are heavily involved in the metropolitan planning process. For example, the Illinois DOT heavily influences the planning process in metropolitan Chicago. In contrast, by state law, California has chosen to give more planning and decision-making power to counties by directly allocating a greater share of transportation funds to the counties. Another defining characteristic of transportation project development in the sites we visited in California is direct citizen involvement in selecting transportation projects through local ballot initiatives. Federal legislation has identified many factors that states and metropolitan areas are to consider in planning and deciding on surface transportation investments. As shown in table 2, these factors include environmental compliance, safety, system maintenance and operations, and land use, among others. For example, transportation planners and decision-makers must develop alternatives and select projects that conform to the requirements of a variety of laws, such as the National Environmental Policy Act (NEPA) of 1969 and Title VI of the Civil Rights Act. Under NEPA, federal agencies must assess the impact of major federal actions significantly affecting environmental quality. Agencies document these analyses in environmental impact statements. This analysis serves two principal purposes: (1) to ensure that agencies have available detailed information concerning potentially significant environmental impacts to inform their decision-making, and (2) to ensure that the public has this information so that it may play a role in both the decision-making process and the implementation of the decision. 
In analyzing the effects of a proposed action and alternatives, agencies must assess a variety of effects—including ecological, economic, and social. Agencies may include or refer to benefit-cost analyses in environmental impact statements. However, for purposes of complying with NEPA, the weighing of the merits and the drawbacks of the various alternatives need not be displayed in a monetary benefit-cost analysis and should not be when there are important qualitative considerations. When it is uncertain whether the proposed action would have significant environmental effects, agencies use environmental assessments to determine whether the proposed action would have such effects and therefore whether an environmental impact statement is necessary. Environmental assessments are relatively brief documents that need not include detailed effects analyses. Most transportation projects do not require the preparation of the more detailed environmental impact statement. In addition to requirements for environmental assessments or environmental impact statements, in metropolitan regions that have significant air quality problems, transportation plans and programs must conform to the State Air Quality Plans, which outline strategies for reaching compliance with air quality standards established by the U.S. Environmental Protection Agency (EPA). To meet these standards, states and MPOs in these designated regions must identify transportation projects that will help reduce motor vehicle emissions. Title VI of the Civil Rights Act of 1964 prohibits discrimination on the basis of race, color, or national origin in programs and activities that receive federal financial assistance. To comply with Title VI, DOT issued regulations requiring recipients of federal transportation funds to provide assurances of compliance, periodic compliance reports, and access to relevant information about compliance. 
The regulations require that each MPO state that its planning process is in compliance with Title VI, as well as other statutory requirements. Both Title VI and NEPA require involvement and input from the public, interest groups, resource agencies, and local governments throughout the transportation planning and project development process. Other than the NEPA requirements for environmental analyses, federal requirements give states and MPOs considerable flexibility in selecting specific analytical tools and elements used to evaluate projects and make investment decisions. For most surface transportation projects, current planning regulations require only that states (in coordination with MPOs) establish a process to conduct data analyses and evaluate alternatives for transit and highway projects. In defining the factors to be included in such an analysis, the requirements specify in general terms that states should consider identified social, economic, energy, and environmental effects of transportation decisions. Federal planning requirements also state that the metropolitan planning process should consider the cost-effectiveness and financing of alternative investments to meet transportation demand, support efficient transportation system performance, and consider the related impacts on social and economic development, housing, and employment goals. However, the requirements do not provide guidance to the states and MPOs on the types of analyses that are required or how they are to be prepared. An exception to this approach applies to major transit system projects eligible for capital investment grants and loans under FTA’s New Starts program. Under this program, FTA identifies and funds fixed guideway transit projects, including heavy, light, and commuter rail, ferry, and certain bus projects (such as bus rapid transit). 
In contrast to other FHWA and FTA programs where funds are distributed through statutory formulas, funding commitments for the New Starts program are made for specific projects, and projects are evaluated at various stages in the development process. For New Starts projects, federal requirements are more specific in terms of the types of data to be collected, the criteria for conducting an analysis, and the factors involved in evaluating a proposed project. For example, to be considered for possible New Starts funding, local project sponsors must prepare an alternatives analysis on the benefits, costs, and impacts of alternative strategies to address a transportation problem in a given corridor. While FHWA and FTA guidance does provide some technical assistance on the use of various analytical tools and models, neither FHWA nor FTA advocates the use of any particular set of analytical tools, except for the New Starts program. In addition, according to a 1999 National Cooperative Highway Research Program report, decision-makers are often uncertain about the appropriate use of analytic tools, including their usefulness, reliability, and data requirements. Furthermore, FHWA officials note that there currently is no minimum set of elements that are required to be included in an analytical model. In fact, FHWA officials point out the difficulty of establishing a consensus on modeling standards, especially since the use of tools or models varies from one region to the next. As a result, states and MPOs have largely been responsible for identifying and performing their own analyses during the planning process. Although the federal framework does not require the use of any particular tool, federal guidance advocates using benefit-cost analysis to evaluate investments. Benefit-cost analysis facilitates sound transportation investment decisions by integrating the effects of a potential alternative into a common monetary measure for comparison with other alternatives. 
In assessing the relative benefits and costs of each alternative, the analyst attempts to integrate social, economic, energy, and environmental impacts, in accordance with federal guidance, directly into the benefit-cost analysis. Research and best practices indicate that key steps of the analysis include defining the project objectives, identifying all reasonable alternatives, and systematically evaluating and comparing the projected effects of each alternative. Upon completion of the analysis, the decision-maker can derive useful information about the trade-offs among alternatives and identify the alternative that results in the greatest estimated net social benefit to society. Researchers acknowledge several practical challenges of benefit-cost analysis, such as difficulties in quantifying some benefits and costs and defining the scope of the project. However, major transportation groups, such as DOT and the National Research Council’s Transportation Research Board (TRB), continue to work on guidance and provide resources to improve and simplify benefit-cost analysis and other analytic tools for practitioners. While federal planning regulations for transportation generally do not require the use of specific analytical models, several federal sources have identified benefit-cost analysis as a useful tool to help decision-makers determine trade-offs between alternatives and identify projects with the greatest estimated net social benefits. For example, Executive Order 12893 states that expected benefits and costs should be quantified and monetized to the maximum extent practicable when evaluating federal infrastructure investments in the areas of transportation, water resources, energy, and environmental protection. Moreover, guidance from OMB on the planning of federal capital assets suggests that the selection of alternatives should be based on a systematic analysis of expected benefits and costs.
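The systematic analysis this guidance calls for reduces, at its computational core, to discounting each alternative’s projected streams of benefits and costs to a common year and comparing the results. A minimal sketch in Python; the dollar figures, the five-year horizon, and the 7 percent discount rate are illustrative assumptions, not values drawn from this report:

```python
# Hypothetical streams of annual benefits and costs (millions of dollars)
# for one project alternative over a 5-year horizon; year 0 is construction.
benefits = [0, 30, 32, 34, 36]
costs    = [80, 5, 5, 5, 5]
rate = 0.07  # illustrative real discount rate

def present_value(stream, r):
    """Discount a stream of annual values back to year 0."""
    return sum(v / (1 + r) ** t for t, v in enumerate(stream))

pv_b = present_value(benefits, rate)
pv_c = present_value(costs, rate)
npv = pv_b - pv_c          # net present value (estimated net social benefit)
bc_ratio = pv_b / pv_c     # benefit-cost ratio

print(f"PV benefits: {pv_b:.1f}  PV costs: {pv_c:.1f}")
print(f"NPV: {npv:.1f}  B/C ratio: {bc_ratio:.2f}")
```

Repeating this computation for each alternative gives the common monetary measure that lets dissimilar alternatives be ranked side by side.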
DOT encourages and provides guidance on the use of benefit-cost analyses in decision-making for transportation planning. In addition, in the past, we have encouraged the use of benefit-cost analysis in other areas such as freight transportation. Unlike most other types of analysis, benefit-cost analysis allows analysts to integrate multiple effects into a common monetary measure for assessment of a wide variety of alternatives. As discussed earlier in this report, federal guidelines encourage decision-makers to consider the potential social, environmental, and safety effects of transportation projects. Many tools and methods exist to analyze these effects separately, including models that forecast travel demand, emissions measurement tools, and other types of analyses. (See app. II for a comparison of benefit-cost analysis to other economic analyses.) However, benefit-cost analysis integrates and monetizes the quantifiable benefits and costs of each alternative, including the results of some of these other models. Therefore, benefit-cost analysis provides a more thorough assessment of the alternatives than an analysis of any single impact area. Benefit-cost analysis is a systematic approach to evaluating alternative investments that attempts to quantify and monetize benefits and costs accruing to society from an investment. This analysis examines the immediate effects of the investment on the people using the investment and the effects that accrue to nonusers as a result of the investment. Examples of effects on users of transportation investments are reduced travel time and improved safety for drivers and transit passengers. An example of an effect on a nonuser is a change in pollution levels. From research and guidance on transportation investment analysis and our own previous work, we identified 10 steps that an analyst should perform for sound benefit-cost analysis, as shown in table 3. (See app.
III for a detailed discussion of each of the key elements of the analysis.) In addition to assigning a single monetary value to each potential project, benefit-cost analysis provides decision-makers with valuable information for comparing investment alternatives. Specifically, benefit-cost analysis informs decision-makers about the relative merit of alternatives by systematically assessing and placing monetary value on the favorable and unfavorable effects of various investment options. That is, researchers state that benefit-cost analysis can help decision-makers better understand the implications of each alternative and make trade-offs between investment options more transparent. This process encourages objective analysis and can expose possible biases in decision-making. The systematic process of benefit-cost analysis also helps decision-makers because it organizes information about the alternatives and converts dissimilar values, such as hours of travel time and number of accidents, to a comparable dollar measure. Researchers highlight benefit-cost analysis as a useful organizational tool because it aggregates key information relevant to the investment decision in a meaningful way. In addition, benefit-cost analysis offers a comparison of the benefits and costs that might accrue over time—including projected future operating costs and benefits to society that might not materialize immediately—and converts them to values in a single time period for more accurate comparison. In commenting on a draft of this report, FHWA noted that the discipline of going through the steps of benefit-cost analysis also could disclose important, timely information for public officials, planners, designers, and the public, even if the data and methods used in the analysis are imperfect. Such timely information can facilitate decision-making. During our site visit to Chicago, railroad officials noted the value of benefit-cost analysis in a practical application. 
The Chicago Regional Environmental and Transportation Efficiency project (CREATE) is a $1.5 billion plan that includes more than 70 infrastructure improvement projects to increase the efficiency and reliability of freight and passenger rail service, reduce highway congestion, and provide safety and environmental benefits in the Chicago area. Benefit-cost analysis was key in the decision to proceed with this public-private partnership, according to several railroad executives. Project sponsors used an extensive model of the Chicago regional railroad network to help determine the effects of various upgrades to the network. The model showed the extent to which CREATE would resolve freight rail congestion problems—rather than merely pushing them to another location in the regional railroad network. Using the results of this model, benefit-cost analysis was critical in identifying the highest return on investment for individual project segments across the Chicago rail system and helping to illustrate public and private benefits. Benefit-cost analysis also helped provide a calculation of the level of benefits that private railroads would receive from the project, thus providing an estimate of the level of financial contribution that the railroads should make. While the results of benefit-cost analysis aid decision-makers in selecting between alternatives, guidance on benefit-cost analysis advises decision-makers to augment the results of the analysis by considering other factors when weighing investment alternatives. Such other factors, like public participation and equitable distribution of benefits, are those that cannot be quantified or incorporated directly into the analysis due to practical challenges of benefit-cost analysis or limitations of the underlying information.
Although guidance from many federal agencies encourages the use of benefit-cost analysis as a useful tool for assessing the potential effects of transportation projects, such analysis has several practical challenges. One challenge is that while benefit-cost analysis evaluates the net benefits of projects, it does not usually consider the distribution of benefits across locations or populations or other equity concerns that may exist with transportation investment projects. Moreover, the outcome of benefit-cost analysis is a net value and therefore inherently eliminates any distinction between groups of citizens to whom benefits accrue. By summing the individual gains and losses to determine the effect on society as a whole, benefit-cost analysis assumes that each individual’s gains or losses should be valued equally with any other individual’s gains or losses. For example, FHWA guidance notes that benefit-cost results might disproportionately rank projects in urban areas over those in rural areas because of the higher level of benefits urban projects may generate. Another practical challenge of benefit-cost analysis is monetizing some impacts of transportation improvements, such as reductions in emissions, travel time savings, and improvements in safety that reduce fatalities. Although agency guidance exists, researchers do not always agree on the appropriate methods and assumptions for valuing these effects. For example, a report by the National Cooperative Highway Research Program (NCHRP) cites several outstanding issues in placing economic value on the time people spend traveling, such as (1) the fraction of the wage rate that should be used for work-related travel and personal travel, (2) whether to apply the same time value for very short periods of time saved as for longer periods, and (3) how to account for variation of travel time.
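The first of the NCHRP issues above, the fraction of the wage rate used to value travel time, can swing the monetized benefit substantially. A small illustration in Python; every figure here is hypothetical:

```python
# How the contested "fraction of the wage rate" assumption changes the
# monetized value of a project's travel time savings. All figures are
# hypothetical, not drawn from agency guidance.

hours_saved_per_year = 500_000   # aggregate hours saved by the project
avg_wage = 28.0                  # average hourly wage, dollars

# Candidate valuation assumptions for personal (non-work) travel time.
wage_fractions = [0.35, 0.50, 0.70]

for frac in wage_fractions:
    value = hours_saved_per_year * avg_wage * frac
    print(f"at {frac:.0%} of wage: ${value:,.0f} per year")
```

The spread between the lowest and highest assumption here is a factor of two, which is why the choice of valuation method can determine whether a marginal project appears to pass or fail.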
Furthermore, debate surrounds the appropriate value of saving one statistical life through an improvement in safety; some advocates assert that human life is priceless and cannot be measured in monetary terms, while some researchers state that monetizing the impact of a reduction in fatalities leads to more complete analysis. In commenting on a draft of this report, FHWA said that although there is some debate about the monetary value of some impacts of transportation improvements, there is also much about the valuation of impacts that economists can agree on. For example, FHWA noted that monetary values available in agency guidance can be assigned to the performance measures—such as travel time saved—that are already calculated by regional models in order to aid the evaluation of proposed transportation projects. Another challenge of implementing benefit-cost analysis is properly scoping the alternatives to analyze. Benefit-cost analysis is typically practiced as a way to compare one project against one or more individual projects rather than evaluating a system of projects. FHWA guidance cautions against evaluating a project that is actually a combination of two or more independent projects because an inefficient project might be hidden in the aggregate result. If multiple projects are aggregated and the net benefits of the group of projects are calculated, the result might indicate that the group of projects results in greater total benefits than the total costs incurred. However, one or more of the individual projects might not result in benefits greater than its costs if it were analyzed separately. Other research shows that analyzing each project independently and selecting projects without regard to the interrelation of the project outcomes can lead to selection of a combination of projects that do not maximize net benefits to society. 
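The scoping pitfall FHWA cautions against can be shown with two hypothetical projects: bundled together they appear worthwhile, yet one of them fails on its own. A sketch in Python; the project names and dollar values are invented purely for illustration:

```python
# Numeric illustration of the scoping pitfall: bundling two independent
# projects can hide one whose costs exceed its benefits. All values are
# hypothetical present values in millions of dollars.

projects = {
    "interchange upgrade": (60, 40),  # (benefits, costs)
    "frontage road":       (15, 25),
}

total_b = sum(b for b, c in projects.values())
total_c = sum(c for b, c in projects.values())
# The bundle shows positive net benefits, masking the weaker project.
print(f"bundle: B={total_b}, C={total_c}, net={total_b - total_c}")

for name, (b, c) in projects.items():
    verdict = "passes" if b > c else "fails"
    print(f"{name}: B={b}, C={c} -> {verdict} on its own")
```

Evaluated separately, the weaker project is exposed; evaluated only as a bundle, it rides on the stronger project's surplus.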
In other words, one project, such as traffic signal coordination, might complement another project, such as a dedicated bus lane. In such a case, independent assessment of each project would not reveal the full benefits of implementing both projects. According to FHWA, in cases where projects are significantly interrelated, but not dependent on each other to produce net benefits to society, the effects of one project on another (e.g., changes in traffic) should be included in the analysis. Finally, because benefit-cost analysis integrates the effects of many different impact areas, it carries with it the challenges of forecasting and measuring the effects in those areas. For example, travel demand models forecast future use of the transportation system; therefore, their outputs become inputs to benefit-cost analysis. According to a TRB report, though travel demand models have been commonly used for 4 decades, few universally accepted guidelines or standards of practice exist for these models or their application. Practitioners’ views on appropriate methods vary because each organization conducting analysis tailors the forecasting approach to its region’s characteristics, available data, and the preferences and knowledge of the staff doing the analysis. The resulting uncertainty over the best approach to forecasting is an important challenge because such uncertainty can lead to imprecise or inaccurate inputs, which can severely affect the outcome of the analysis. For example, research on an emissions model highlights uncertainties in the data used to estimate reductions in vehicle emissions from congestion mitigation and concludes that these uncertainties lead to large uncertainties in the model outputs. 
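The way forecast uncertainty carries through to the final result can be sketched with a simple simulation: if the demand forecast feeding a benefit-cost calculation is uncertain, the benefit-cost ratio inherits that uncertainty. The Python sketch below assumes a hypothetical project and an arbitrary plus-or-minus 20 percent demand band; none of the values come from an actual model:

```python
# Sketch of how uncertainty in a forecast input propagates into the
# benefit-cost result. The demand forecast is treated as uncertain
# (+/- 20 percent, uniform); everything else is hypothetical and fixed.
import random

random.seed(1)

BASE_TRIPS = 1_000_000       # forecast annual trips using the facility
BENEFIT_PER_TRIP = 3.0       # dollars of monetized user benefit per trip
ANNUALIZED_COST = 2_500_000  # dollars per year

ratios = []
for _ in range(10_000):
    trips = BASE_TRIPS * random.uniform(0.8, 1.2)  # uncertain demand draw
    ratios.append(trips * BENEFIT_PER_TRIP / ANNUALIZED_COST)

ratios.sort()
low, high = ratios[500], ratios[9500]  # middle 90 percent of outcomes
below_one = sum(r < 1 for r in ratios) / len(ratios)
print(f"B/C ratio ranges roughly from {low:.2f} to {high:.2f}")
print(f"share of draws below 1.0: {below_one:.1%}")
```

Even this crude exercise shows why imprecise inputs matter: a project with a point-estimate ratio above 1 can still have a meaningful probability of failing the test once input uncertainty is acknowledged.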
Several major transportation organizations—TRB, FHWA, FTA, the Association of Metropolitan Planning Organizations, the American Association of State Highway and Transportation Officials (AASHTO), and the American Public Transportation Association (APTA)—conduct research to help MPOs address some of the practical challenges of implementing benefit-cost analysis, as well as other analytic tools. For example, FHWA has developed a “Toolbox for Regional Policy Analysis” that offers guidance on a variety of techniques, including benefit-cost analysis, that MPOs can use to evaluate investment alternatives. MPOs also may adopt best practices developed by other MPOs or use consultants to assist with analysis and modeling. Initiatives such as the Transportation Planning Capacity Building Program—sponsored by FHWA and FTA—offer peer exchanges, roundtables, and workshops to facilitate such information sharing. In addition, many studies that are relevant to analysis and decision-making come from two major applied, user-oriented research programs—the NCHRP, which focuses on highway research, and the Transit Cooperative Research Program (TCRP). In both programs, practitioners and other potential users of research results are involved in identifying their research needs, participating in selecting projects, and helping guide projects. When research is complete, TRB publishes and widely disseminates the research findings. Several experts have indicated that while transportation researchers have devoted considerable attention to developing detailed guidance on analysis and modeling, they anticipate an increasing emphasis on this issue. They emphasized that TRB is likely to lead a major analysis to review and improve the state of the practice in modeling transportation impacts, benefit-cost analysis, and other tools.
While transportation decision-makers consider analyses, such as benefit-cost analyses, in investing resources to meet transportation needs, analyses often do not have a decisive impact on the final investment choices made by states and MPOs. According to transportation research, planning officials, and our prior work, other factors play a greater role in shaping decisions. For example, the federal funding structure for surface transportation and federal program incentives tend to focus decision-makers’ attention on highway and transit projects and stakeholders rather than on railroads or other freight concerns. Moreover, there are relatively few instances in which decisions involve trade-offs among the various transportation modes to meet passenger and freight mobility needs, according to local planning officials. Decision-makers also are required to seek public input and involve a wide range of public and private stakeholders in reaching a consensus on investments. Ensuring that investment choices will maintain the existing infrastructure or improve its operation, rather than expand the transportation system’s capacity, also appears to be an important priority for decision-makers. Finally, decision-makers are recognizing the importance of longer, multistate transportation corridors and the special challenges that they pose for investment decisions. MPOs, especially in major metropolitan areas, produce a substantial amount of analysis and modeling, according to transportation experts we interviewed. The results of such analyses can be a factor in transportation investment decision-making. For example, as noted previously in this report, transportation decision-makers in Chicago stated that the results of benefit-cost analysis had factored into their decision to implement the CREATE project. However, such analyses do not appear to play a decisive role in many investment decisions, although they may help rule out bad investments and point out serious problems. 
For example, planners in Los Angeles noted that the projects selected for the TIP were not necessarily the ones with the highest benefit-cost ratios, although their analysis showed that every project in the plan did generate more benefits than costs. In addition to the limitations of benefit-cost analysis we discussed previously in this report, decision-makers may not be relying upon analyses, in part, due to various concerns about the usefulness and reliability of the analyses, according to the transportation research literature and our interviews with experts and officials in Chicago, Los Angeles, and San Francisco. State DOTs and MPOs have expressed uncertainty about the usefulness of analytical tools in guiding their transportation planning and decision-making. For example, states and MPOs view existing analytical tools as having limited usefulness in comparing investment alternatives among transportation modes and between passenger and freight investments. TRB’s applied research programs are trying to address this need through development of specific tools to help in making multimodal trade-offs. In addition, understanding how and when to use analysis is challenging for decision-makers. During our site visits, we found few instances in which investment decisions involved direct cross-modal trade-offs, such as railroad versus highway. According to an NCHRP survey published in 2001, 88 percent of state DOT respondents and 85 percent of MPO respondents reported that more useful guidelines—such as a guidebook for agency use in applying methods and analytic techniques—were either badly needed or would help to enhance the agency’s ability to evaluate the social and economic effects of transportation system changes. Accordingly, the study concluded that decision-makers need to be able to better select when, how, and why to use particular analytic tools in investment decisions. There are also concerns about data used in the analyses. 
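To make the Los Angeles example concrete, the ranking logic behind benefit-cost ratios can be sketched as follows. The project names and dollar figures below are hypothetical, chosen only for illustration; the point is that every project can clear a ratio of 1.0 while the projects ultimately funded need not be those with the highest ratios.

```python
# Illustrative sketch only: hypothetical projects with assumed present-value
# benefits and costs. A benefit-cost ratio (BCR) above 1.0 means the project
# generates more benefits than costs.
projects = {
    "signal coordination": {"benefit": 18e6, "cost": 6e6},
    "dedicated bus lane":  {"benefit": 40e6, "cost": 25e6},
    "interchange rebuild": {"benefit": 55e6, "cost": 50e6},
}

for name, p in projects.items():
    p["bcr"] = p["benefit"] / p["cost"]

# Rank by BCR. All three exceed 1.0, yet decision-makers may still fund a
# lower-ratio project for reasons the analysis does not capture.
ranked = sorted(projects.items(), key=lambda kv: kv[1]["bcr"], reverse=True)
for name, p in ranked:
    print(f"{name}: BCR = {p['bcr']:.2f}")
```

A ratio is only a screening device: it can flag a project whose costs exceed its benefits, but it cannot weigh the public-input, equity, and political considerations discussed later in this section.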
Insufficient state and local data—particularly freight-related data—limits the quality and amount of analysis and modeling, according to NCHRP research and our December 2003 report. The lack of metropolitan level data, which is needed to analyze investment alternatives, has been a continuing concern in transportation research. For example, data needed to identify heavily traveled highways and freight bottlenecks, and to develop and evaluate alternative solutions for addressing such congestion (e.g., comparing the benefits of improving highway operations to the benefits of adding new road capacity), is not always available. Furthermore, data needed to apply a specific analytic tool may not be available or funds may not be sufficient to acquire or collect needed data. Compounding the problem, existing modeling software cannot always successfully accommodate the data limitations to yield results that are credible and usable. In the NCHRP survey of state DOTs and MPOs published in 2001, 82 percent of state DOT respondents and 97 percent of MPO respondents reported that better data to analyze social and economic effects either were needed badly or would help enhance the agency’s ability to evaluate the social and economic effects of transportation system changes. Freight data pose special challenges because shifting product mix, trade patterns, and consumer demands make freight a fast-changing area. The U.S. Bureau of Transportation Statistics reported in 2003 that there is a consensus that existing freight data often are too outdated to capture current freight status, many data elements are missing, and data often cannot be compared across modes. TRB and we have made recommendations to improve freight data. TRB recommended that resources be focused on developing a national freight data program that targets the needs of transportation analysts and planners. 
We recommended in our December 2003 report that DOT facilitate the collection of freight-relevant data, which would allow state and local planners to develop and use better evaluation methods such as demand forecasts, modal diversion forecasts, and estimates of the impacts of investments on congestion and pollution, thus providing a better basis for making transportation investment choices. FHWA has developed a Freight Analysis Framework (FAF) designed to estimate the flows of commodities and related freight transportation among states, substate regions, and major international gateways. The FAF also forecasts changes in the flows due to changes in economic conditions, transportation facilities, or other factors. FHWA is currently working to improve the FAF by improving the accuracy of freight flows, updating sources used in the model, and possibly incorporating new data sources and forecasting methods. Other considerations affect decision-makers’ use of analyses, such as how competently the analyses are interpreted and how well analyses are communicated, according to a transportation researcher. TRB and we have expressed a concern about impending shortages of skilled transportation professionals with expertise to choose and use analytic tools and communicate their results. Timing also can have an impact on the use of analysis. A local official observed that analyses that come later in the decision-making process may be viewed as the most relevant because they reflect the most current information available as projects are being considered. Concerns also have been raised about the ability of MPOs to produce and disseminate quality analyses that aid investment decision-making, given their broad scope of responsibilities and current funding levels. 
A recent study of metropolitan decision-making in transportation concluded that although MPOs have been given new planning responsibilities in areas such as environmental justice, job access, freight planning, and systems operations, highway program funding for metropolitan planning has not increased. DOT officials also told us that local budget constraints complicate the ability of MPOs to deliver quality data analysis because analysis is usually the first thing to be cut. During our Chicago site visit, a transportation consultant expressed concern that the MPO for that area is very thinly funded for the work that it is being asked to perform. In evaluating and deciding on investments, the structure of federal funding and the lack of freight stakeholder involvement are important factors that focus decision-making principally on highways and transit and on stakeholders associated with these modes. In addition, during our site visits, we found few instances where investment decisions considered direct trade-offs between modes or between passenger and freight issues. ISTEA, TEA-21, and federal planning guidance all emphasize the goal of establishing a systemwide, intermodal approach to addressing transportation needs. However, the reality of the federal funding structure—which directs most surface transportation spending to highways and transit, rather than railroad infrastructure—plays an important role in shaping MPO investment choices. In fiscal year 2001, for example, federal transportation grants to state and local governments totaled about $27.8 billion for highway programs, $7.0 billion for transit programs, and $37 million for railroad programs. The federal financial support for highways and transit systems comes mainly from federal highway user fees (i.e., fuel taxes deposited into the Highway Trust Fund), with the revenue generated from these fees generally targeted for highway or transit projects. 
While most federal funding sources and programs are linked to highway or transit uses, some funding flexibility between highway and transit is allowed under programs such as the National Highway System, Surface Transportation Program (STP), and Congestion Mitigation and Air Quality Improvement (CMAQ) programs. Federal programs provide limited support for investment in railroad infrastructure, with railroad investments largely financed by the private sector. In addition to the federal transportation grants to state and local governments discussed above, the federal government also provides some support to Amtrak for intercity passenger rail service. For example, in fiscal year 2003, the federal government appropriated about $1 billion to Amtrak to cover operating and capital expenses. However, the role of the federal government in providing financial support to Amtrak is currently under review amid concerns about the corporation’s financial viability and discussions about the future direction of federal policy toward intercity rail service. Regarding freight rail projects, the private sector owns, operates, and provides almost all of the financing for freight railroads, with the public sector providing the supporting infrastructure—such as highways, ports, and intermodal facilities. Innovations in ISTEA and TEA-21 allowed states more flexibility to use federal funds for freight projects, established public-private partnerships, and allowed the expenditure of federal aid on nonhighway freight projects in certain circumstances. A number of concerns have been raised about the availability of funding for railroad infrastructure, particularly for intermodal investments that could improve freight mobility. For example, AASHTO has reported that, although the railroad industry’s return on investment has improved, it still is below the cost of capital, a factor that might adversely affect future railroad infrastructure investment levels. 
In addition, we reported in December 2003 that access to funding sources for freight railroads—such as the National Corridor Planning and Development Program and the Coordinated Border Infrastructure Program—has been limited because, according to FHWA, these programs are oversubscribed and much of the funding for these programs has been allocated to congressionally designated projects. In addition, National Corridor Planning and Development Program funds may not be used for improvements on railroads’ heavy-use “mainline” tracks. Furthermore, given the intermodal nature of freight projects, the overall lack of flexibility for using federal transportation funding across modes limits the availability of funding for improving railroad and freight infrastructure. For example, the eligibility criteria under the Transportation Infrastructure Finance and Innovation Act do not allow assistance to privately owned facilities, such as privately owned rail infrastructure. Local planning officials we interviewed expressed concerns that limited public funding for freight railroad investments might limit regional options for addressing infrastructure requirements. For example, one local planning official told us that the lack of flexible funding limited that city’s ability to address freight-related problems. A regional planning official noted that while CMAQ money has some flexibility, the federal funding structure narrows the ability to make optimal intermodal choices. Our December 2003 report on freight transportation pointed to another concern about freight decision-making—that state and local transportation planning and financing is not well suited to addressing freight improvement projects. At the local level, planning is oriented to projects that clearly produce public benefits, such as passenger-oriented projects. 
While freight projects also may produce public benefits by reducing freight congestion, they often can have difficulty securing public funds because they may generate substantial private sector benefits. For example, in California, local planning officials told us that State Transportation Improvement Program (STIP) funds could not be used for freight railroad improvements unless there were distinct benefits for passenger movement. Compared with passenger projects, it may be more difficult to identify clear-cut public benefits associated with freight railroad projects and balance them with private benefits. In California, local planning officials said they consider railroad improvements to be at a disadvantage in public referenda on transportation improvements because public support for freight and railroads is lacking. Chicago officials acknowledged that the lack of federal funds for freight projects limits the region’s investment options and local governments’ interest in spending their own funds on freight projects, such as the CREATE project. Finally, railroad industry investment criteria are not always aligned with the goals of the states and MPOs. While freight railroad industry investments may meet the internal industry tests of providing revenues, profits, and financial feasibility, they may not deal adequately with national transportation concerns, such as improving mobility, reducing nationally significant chokepoints, and enhancing system capacity. Several other considerations limit freight stakeholder involvement in local investment decisions—potentially affecting the MPOs’ ability to take a systemwide, intermodal approach to addressing transportation needs. Although MPOs are required to consider freight needs, reflecting the concerns of freight stakeholders—such as freight railroads—in decision-making has proven challenging. 
For example, the Chicago region has been particularly active in involving freight railroads in the MPO’s Intermodal Advisory Task Force. But a railroad official, who described the railroad companies’ interaction with the MPO, nevertheless saw the need to modify the long-standing, local decision-making process so that freight railroads have a clearer role in investment decisions. Railroad officials in Chicago also cited the unfamiliarity of planners and decision-makers with freight operations as an obstacle to freight investments. They noted that many local officials and transportation agencies do not have a clear understanding of how freight operates, including the complexities of a consumer goods distribution system that typically starts in Asia or other areas of the world. However, several Chicago officials believed that the CREATE project may help change this situation by providing a plan to improve freight rail efficiency and freight rail’s interface with passenger transportation, and by giving freight more visibility with local officials. The freight industry may face other challenges in participating in transportation decision-making. For example, freight railroad companies operate in many states—each with numerous MPOs within its borders. A railroad executive noted that if all MPOs were serious about freight issues, companies could not handle the demands on their resources to participate. The freight industry also has long-standing concerns about working with the public sector. A railroad official we interviewed said that federal rail regulation left a lingering legacy of industry distrust of the government. In addition, freight railroads have long made their own investment decisions and supplied their own capital—with no public sector influence. 
As private entities that own most of the nation’s railroad infrastructure, freight railroads typically have not worked with the public sector because of concern about requirements and regulations that are tied to federal funds, unless a proposed infrastructure project will yield financial returns for the company. In addition, the lengthy planning and construction time associated with public infrastructure projects does not match the shorter planning and investment horizons of private companies. In addition to the focus on highways and transit over railroad investment choices, during our site visits we also found that cross-modal comparisons play a limited role in transportation investment decisions. We found limited instances in which investment decisions involved direct trade-offs in choices between modes or users—such as railroad versus highway or passenger versus freight. Officials in Chicago indicated that railroad and highway investments, and passenger and freight projects, rarely are in direct competition—perhaps because railroads and highways often serve different needs or markets. An official in Los Angeles commented that planners there avoid making modal comparisons because they view them as comparing “apples to oranges.” In Chicago, an official described only a few situations that posed modal choices and trade-offs for decision-makers, for example, deciding between a transit alternative versus adding lanes to an existing tollway. Several researchers told us that whether planners and decision-makers make cross-modal and passenger-freight comparisons may be a moot point because local conditions, such as the physical environment, often dictate modal choices. For example, metropolitan areas that are adjacent to a seaport may have few choices about whether to use highways or railroads to move products to and from the port. 
Space constraints and existing infrastructure, as well as the characteristics of freight (i.e., ports that handle bulk commodities such as coal or grain usually use railroads, while ports that handle computers usually use trucks), foreclose choices. Overall, moving freight usually offers fewer transportation choices than moving passengers, an expert noted. In addition, the demographic or other characteristics of specific transportation markets—such as a growing area with many transit commuters—also may determine modal choice. Metropolitan decision-making is designed to be a collaborative process that involves the public and its diverse concerns in identifying actions to improve transportation system performance. MPOs are required to seek public comment and have clear federal guidance on involving the public—it is integral to their mission and one of their core functions. Moreover, the definition of the public is wide-ranging—virtually all private and public individuals and organized groups that are potentially affected by transportation decisions in a given area. Federal regulations also state that MPOs must cooperate with the state and local transportation providers such as transit agencies, airport authorities, maritime operators, rail-freight operators, Amtrak, port operators, and others. MPOs are directed to provide the public with meaningful opportunities to provide input on transportation decisions and are expected to consider public input on the full range of financial, social, economic, and environmental consequences of their investment alternatives. Public participation can introduce considerations such as quality of life and other issues that are difficult to quantify in making transportation choices. It also puts decision-makers in the position of balancing different public agendas about funding and values, according to a transportation researcher. 
Funding conflicts may arise between modes or from concerns about spreading benefits across the metropolitan area. Value conflicts may result from public concern about a potential project’s impacts on a neighborhood or the environment. As we observed in our site visits, public participation can play an influential role in transportation investment decisions. In California, public views often are expressed in county-level ballot box initiatives on the sales taxes and municipal bonds that finance transportation projects. Whether voters approve these initiatives is a significant factor in the investment decision-making process because of the growing prominence of local sales taxes in funding transportation projects. Local sales taxes have surpassed user fees as the primary source of funding for new transportation project construction in California because fuel tax revenues have not kept pace with travel volume and systems costs. The need for voter support may result in a greater number of transportation investment proposals that clearly identify public benefits for local constituents. In Chicago, an official noted that when an expressway extension with a High Occupancy Vehicle lane was proposed, attendees at public meetings opposed the project and endorsed additional mass transit service instead. Besides public input, other political considerations also shape investment decisions. The metropolitan planning process emphasizes the importance of achieving stakeholder agreement on the set of projects that constitute the MPO’s plan. One researcher said that achieving consensus often is difficult—especially with regard to completing large-scale projects—even when decision-makers are like-minded professionals. Arriving at a consensus puts a premium on how well local elected and appointed officials negotiate and build coalitions to obtain support for projects. 
Several researchers noted that this need for consensus may elevate the importance of certain political considerations—such as ensuring a rough equity in use of local and state funds for the distribution of transportation projects throughout a metropolitan area—in selecting projects for funding. In addition, state and metropolitan transportation politics may make some organizations, such as state DOTs, large units of local government such as cities and counties, or large transit agencies more influential in planning and project selection than others. This uneven influence may mean that a project’s priority can be determined by which agency sponsors the project. Our site visits also suggest that the relative influence of decision-makers varies across locations. For example, officials in Chicago described the Illinois DOT as having strong influence on metropolitan planning. Furthermore, a recent study indicated that federal and state agency decisions can be very important in determining the scope and composition of key decisions in the Chicago area. By contrast, officials in Los Angeles and San Francisco described local planning agencies, especially county-level Congestion Management Agencies, as most influential. Finally, state decisions to distribute funds across the state may shape investment decisions. For example, California state law requires that 75 percent of State Transportation Improvement Program funds be directly allocated to counties, which work through the county Congestion Management Agencies. However, according to CALTRANS officials, the total funding allocated to the counties is first divided between the counties of northern and southern California, with the 13 southern counties receiving 60 percent of the funds and the balance of California counties receiving 40 percent of the funds. Thus, while modal choices are primarily made at the regional or county level, the choices are constrained by state funding splits, according to CALTRANS officials. 
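The California allocation rules described by CALTRANS officials can be worked through arithmetically. In the sketch below, the $2 billion statewide STIP total is an assumed figure used only for illustration; the 75 percent direct county allocation and the 60/40 split between the 13 southern counties and the rest of the state come from the passage above.

```python
# Illustrative arithmetic only; the statewide STIP total is an assumed
# figure, not a number reported by CALTRANS or in this report.
statewide_stip = 2_000_000_000

county_share = 0.75 * statewide_stip   # 75% allocated directly to counties
southern_13 = 0.60 * county_share      # 60% to the 13 southern counties
northern_rest = 0.40 * county_share    # 40% to the remaining counties

print(f"County share:      ${county_share:,.0f}")   # $1,500,000,000
print(f"Southern counties: ${southern_13:,.0f}")    # $900,000,000
print(f"Other counties:    ${northern_rest:,.0f}")  # $600,000,000
```

The arithmetic shows how little room regional discretion has: under these rules, the geographic split is fixed before any county weighs one mode against another.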
Due to infrastructure and space concerns, and time lags associated with new construction projects, state and regional transportation decision-makers are increasingly giving priority to highway investments that preserve, enhance, and maintain the existing infrastructure over investments in new construction. According to FHWA data, of the $64.6 billion spent nationally in 2000 on highway capital improvements, 52 percent ($33.6 billion) of all funds were spent on system preservation, 40 percent ($25.9 billion) on new roads and expansion of existing roads, and 8 percent ($5.1 billion) on the installation of system enhancements, such as safety enhancements. The amount spent on system preservation rose from 45 percent of capital improvements nationally in 1993 to 52 percent in 2000. In addition to the money spent on system preservation, all levels of government spent $24.2 billion on routine maintenance in 2000. In our site visits, we found that system preservation and operations and maintenance activities were high priorities for local transportation officials. For example, in Chicago, planners told us that in the space-constrained Chicago area, the primary strategy has been to periodically rebuild existing infrastructure rather than build new infrastructure. In California, both the Southern California Association of Governments (SCAG) and the Metropolitan Transportation Commission in Northern California spend approximately 80 percent of their regional budgets on maintenance and operations. SCAG officials pointed out that regions such as Los Angeles and San Francisco tend to focus less on capital improvements due to capacity and infrastructure limitations. Some situations offer few alternatives for expansion from the outset. Infrastructure that is old and inadequate, such as underpasses or tunnels with insufficient clearance, often has limited expansion potential. Further complicating new construction is the limited supply of available land. 
Densely populated urban areas, where space is at a premium, offer few alternatives for expansion due to geographic constraints on the surrounding development. In addition, land-use planning and zoning issues can be highly contested in a space-constrained real estate market. Capacity constraints and costs of new construction are forcing decision-makers to look at alternative solutions and place a premium on maintaining and improving the existing transportation system. System preservation and maintenance and operations improvements are also preferred because they offer quicker remedies than new capital projects, which can take almost 20 years to plan and build. A key reason for the length of time to complete projects is the set of federal and state requirements, which include clean air, water quality, historical preservation, New Starts reporting, and public input requirements that were discussed earlier. However, the length of time for project development is also influenced by the diffusion of authority over transportation decisions and the resulting complexity of the decision-making process. Changes in local priorities, lack of local matching funds, and locally driven changes in project scope are often associated with project longevity. Requirements for benefit-cost and other economic analyses could extend the length of time for project development. One local planning official noted that the long lag time for new projects acts as a disincentive for planners and officials when considering capacity expansion projects. Transportation decision-makers operate in an environment where they must consider preexisting factors and needs when making transportation investment decisions. Finally, corridors that extend across multiple state and local boundaries pose challenges for intermodal transportation decision-making due to coordination and cross-jurisdictional issues. 
A majority of investment decisions are made at the state and local levels, with local planners tending to focus on local and regional planning needs, as opposed to larger corridor needs. Getting the cooperation of and coordinating with multiple agencies, communities, and transportation modes—each with its own priorities—makes the planning and implementation of multistate and multiregion projects difficult. Further complicating this type of planning is the variety of approaches used by the local and regional agencies in analyzing projects. The type of transportation modeling used in one location may not be available or used in another. Particularly problematic are interstate corridors that do not provide clear-cut benefits for all states that the proposed corridor crosses, but require that the costs be borne by all states involved. Although state DOTs work to address freight mobility challenges on a statewide basis, many corridors cross state boundaries; and unless states are part of a multistate coalition, states may not address projects that involve multijurisdictional corridors. For example, an Illinois transportation official explained that developing high-speed rail service to the east of Illinois is contingent on whether other states will share the costs. To date, only one other state has been willing to contribute. Similarly, freight infrastructure needs may involve projects along a freight corridor that cuts across the jurisdictions of several transportation-planning agencies and, in some cases, states. For the most part, planning for longer multistate corridors is conducted by ad hoc state coalitions. In the past, the impetus for creating such multistate coalitions has come from state departments of transportation, and the federal government’s role in making these interstate decisions is limited. Generally, these ad hoc groups do not receive federal funding. 
However, two groups, the Interstate 95 Corridor Coalition and the Chicago-Gary-Milwaukee Coalition, did receive funding in TEA-21. The Interstate 95 Corridor Coalition, which runs from Maine to Florida, was initially created to examine ITS systems along the corridor but has now widened its focus to include intermodal issues. The coalition developed a railroad operations study for the region, which identified deteriorating transportation system performance in the mid-Atlantic region, noted that all modes of transportation needed to be improved to deal with the situation, and suggested that railroads could play a larger role in meeting the region’s transportation needs. Studies such as this one illustrate the opportunities for these multistate coalitions to analyze problems in a larger corridor. Other such state groupings exist. For example, state DOTs along Interstate 10 have organized an I-10 partnership to conduct research on managing freight movement along the corridor running from California to Florida. The I-10 partnership group developed a transportation planning study based on vehicle volume, traffic flow, and alternative scenario testing for freight movement. Rather than focusing on one particular mode, the study included highways, railroads, and barges in its analysis of freight traffic, and explicitly attempted to be mode neutral. While the partnership study projected the effects of different possible infrastructure improvements along the corridor, individual states are ultimately responsible for deciding whether to implement the study’s findings. In contrast to these multistate groupings, planning for intrastate projects fits more easily into the framework of state planning. For example, in the case of passenger rail corridor development in California, intrastate passenger rail is funded primarily by the state DOT and the localities and operated by state and local joint powers authorities. 
In some cases, Amtrak serves as the operator for these state-supported routes. Some of these routes are Amtrak’s most heavily traveled outside the Northeast Corridor, including the Capitol Route in Northern California, the San Joaquin Route in Central California, and the Pacific Surfliner Route in Southern California. Planning for proposed routes, such as high-speed and other passenger rail routes, is facilitated when the route remains within a single state because such projects fit readily into the existing state planning framework. However, many of the corridors that would benefit from such projects involve more than one state. ISTEA and TEA-21 both articulated a goal of moving from a traditional focus on single transportation modes to a more efficient, integrated system that draws upon each mode to enhance passenger and freight mobility. These key pieces of legislation also provided MPOs and states discretion in selecting projects to address local needs and conditions. In exchange, MPOs and states are expected to follow federal planning and program requirements to reflect the national public interest in their decisions. The approach for investment planning and decision-making that emerged from ISTEA and TEA-21 provides guidance on a systematic process for making transportation investment choices and a host of factors to consider, while generally allowing MPOs and states considerable discretion in choosing the analytical methods and tools that will be used to evaluate and select projects. Our work has shown that while much analysis is done by states and MPOs, the results of those analyses do not appear to play a decisive role in many investment decisions, except to rule out the most problematic projects. 
Instead, other factors play a major role in shaping investment choices, including the federal government’s funding structure that provides incentives for investing in highway or transit projects rather than railroad infrastructure or intermodal projects, public or political support for certain projects, and the practical realities of simply preserving the existing infrastructure. In addition, the data and other limitations associated with using analytical tools, such as benefit-cost analysis, may discourage their use by decision-makers. DOT, TRB, and other major transportation organizations are doing research to improve analytical tools and methods and to help states and MPOs use them to better evaluate investment alternatives. In a prior report, we also encouraged the use of benefit-cost analysis in freight transportation decision-making and recommended that DOT facilitate the collection of freight data that would allow state and local planners to develop better methods for evaluating investments. It is possible that overcoming the challenges of using analytical tools would make them more attractive to decision-makers, thus leading to improved investment decision-making. We provided copies of this report to the Department of Transportation for its review and comment. The department generally agreed with the report’s content and said that the report provided a useful overview of the literature and practice involving transportation investment decisions. The department also provided technical comments, which we incorporated into this report as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. 
At that time, we will send copies of this report to congressional committees with responsibilities for surface transportation programs; DOT officials, including the Secretary of Transportation and the administrators of FHWA, Federal Railroad Administration, and FTA; and the President of Amtrak. We will make copies available to others on request. This report will also be available on our home page at no charge at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or by telephone at (202) 512-2834. GAO contacts and acknowledgments are listed in appendix IV. Our scope of work included reviewing the processes that decision-makers at all levels of government use to analyze and select surface transportation infrastructure investments. Our overall approach was to review and synthesize federal requirements, Department of Transportation (DOT) guidance, and the economics literature and transportation planning studies; interview federal transportation officials, national association representatives, and transportation experts to obtain their perspectives; and conduct site visits in three major metropolitan regions to understand how investment decisions are actually made in those regions. To identify the key federal requirements for planning and transportation infrastructure decision-making, we reviewed federal laws and regulations relating to the metropolitan and state planning and funding process, as well as federal guidance provided by the Federal Highway Administration (FHWA) and Federal Transit Administration (FTA) to states and Metropolitan Planning Organizations (MPO) on the transportation planning process. We interviewed transportation officials in the following U.S. DOT offices: Federal Railroad Administration, FHWA, FTA, and the Office of Intermodalism. 
We also interviewed national stakeholders including Amtrak, the Association of American Railroads, the Association of Metropolitan Planning Organizations, the American Public Transportation Association, and the American Association of State Highway and Transportation Officials. To get regional perspectives on the federal requirements and guidance for transportation planning, we interviewed state and regional transportation officials in California and Illinois. To identify how benefit-cost analysis facilitates sound transportation investment decisions, we reviewed the economics literature, academic research, and transportation planning studies containing evaluations of various economic analytical tools, with an emphasis on benefit-cost analysis. A GAO economist read and reviewed these studies, which we identified by searching economics literature databases and consulting with researchers in the field, and found their methodology and economic reasoning to be sound and sufficiently reliable for our purposes. We interviewed researchers and consultants from the National Research Council’s Transportation Research Board (TRB), DOT, university research centers, national transportation organizations, and selected state DOTs to get their perspective on these analytical tools, the general applicability of benefit-cost analysis, and the feasibility of cross-modal comparisons. In addition, we reviewed our previous studies that had key findings relating to the use of analytical tools in investment decision-making and consulted with our Chief Economist regarding the value of benefit-cost analysis and its challenges. To identify other factors transportation decision-makers consider in evaluating and deciding on investments, we interviewed federal transportation officials and the other national stakeholders identified above. 
We interviewed transportation researchers from the TRB and, based on their input and that of federal transportation officials, interviewed additional researchers from university research centers—and other think tanks—as well as representatives from civic and private sector organizations who are knowledgeable about transportation investment issues. We also conducted site visits in three major metropolitan regions: Chicago, IL; Los Angeles, CA; and San Francisco, CA. These sites are major centers of passenger and freight traffic and contain a wide variety of planning agencies, transportation issues, and modes. During our site visits, we conducted semistructured interviews with officials from state, regional, and local transportation planning agencies, including state departments of transportation, MPOs, city or county transportation planning agencies, and organizations involved in railroad investment issues. From these interviews, we obtained information on each region’s planning and decision-making processes, the factors that drove decision-making in that region, the extent to which analytical tools were used, and other issues affecting the planning and decision-making processes. In addition, we analyzed planning documents and analytical tools used by these regional decision-makers. The information collected and analyzed from our site visits was intended to illustrate how investment decisions were made in those areas. To ensure the reliability of information presented in this report, we relied to a large extent on studies from the economics and transportation literature that were reviewed by peers prior to publication. A GAO economist reviewed these studies and found them methodologically sound. 
We also corroborated much of the testimonial information provided during our three site visits by obtaining documentation of investment decision-making processes and results, although we did not test the reliability of specific data contained in reports prepared by officials from those three sites. Additionally, we obtained statistics presented in the introduction of this report about passenger and freight travel growth from DOT; because this information is included as background only, we did not assess its reliability. We conducted our work from September 2003 through June 2004 in accordance with generally accepted government auditing standards. While benefit-cost analysis aims to monetize and compare all direct benefits and costs to identify the alternative that results in the greatest net social benefit, other types of analysis consider different types of impacts to yield different criteria for comparison. Two common types of analysis are economic impact analysis and cost-effectiveness analysis. Figure 2 illustrates the differences between benefit-cost analysis, economic impact analysis, and life-cycle cost analysis, a special case of cost-effectiveness analysis. Economic impact analysis assesses how some direct benefits and costs of investment alternatives convert to indirect effects on the local, regional, or national economy or on a particular sector of the economy. Examples of indirect impacts are changes in wages and employment, purchases of goods and services, land use, and changes in property values. These impacts result from increased or decreased levels of economic activity caused by the investment and can accrue within or outside of the immediate area of the investment. Economic impact analysis often includes a number of factors other than those that meet the stricter criteria for inclusion in a benefit-cost analysis. 
As a result, advocates or opponents of a project can use this type of analysis to illustrate implications of an investment other than the estimated net social benefit. However, economic impact analysis is not an appropriate technique for identifying which alternative provides society with the greatest net benefit because often the values of benefits to society are counted twice in different forms in this analysis. Guidance from both TRB and FHWA states that the net direct user benefits included in benefit-cost analysis have the same monetary value as the net indirect benefits and cautions that the two are not additive when analyzing an investment for economic efficiency. In other words, indirect impacts are not included in benefit-cost analysis because economists generally agree that they are market transformations of direct benefits. Thus, while economic impact analysis can provide interesting information for policy makers regarding the effects of potential investments on the local, regional, or national economy as well as on specific industries, researchers state that economic impact analysis can be considered complementary to, but different from, benefit-cost analysis. Cost-effectiveness analysis is similar to, but less comprehensive than, benefit-cost analysis. This type of analysis attempts to systematically quantify the costs of alternatives. However, cost-effectiveness analysis does not attempt to quantify the benefits of alternatives. Rather, it assumes that each alternative results in achieving the same stream of benefits. Thus, cost-effectiveness analysis identifies the lowest-cost option for achieving a given level of benefits rather than identifying the alternative that achieves the greatest benefit per dollar of cost to society. Life-cycle cost analysis, essentially a subset of benefit-cost analysis, is a specific example of cost-effectiveness analysis. 
Life-cycle cost analysis involves several of the same steps included in benefit-cost analysis, but excludes any assessment of benefits because each of the alternatives compared is expected to result in the same level of benefits. The key elements of life-cycle cost analysis are identifying alternatives, defining a time frame for analysis, identifying and quantifying the costs of each alternative, discounting costs to present values, assessing the sensitivity of the analysis to changes in assumptions, and identifying the alternative that results in the lowest cost over the life-cycle of the project. When identifying and quantifying the costs of each alternative for transportation projects, best practices indicate that analysts should consider construction, rehabilitation, and maintenance costs as well as costs to users associated with work zones during construction and maintenance. As in benefit-cost analysis, these user costs include travel time costs, costs associated with crashes, and vehicle operating costs. From our review of research and best practices on transportation investment analysis, we identified 10 elements integral to sound benefit-cost analysis. Analysts include these steps to ensure a thorough evaluation of the social benefits and costs of investment alternatives and to systematically assess the trade-offs between investment alternatives. Using benefit-cost analysis, as described below, analysts determine the project that will result in the greatest benefit to society for a given level of cost. Analysts first should identify the project objectives to ensure a clear understanding of the desired outcome and to aid in determining appropriate alternative projects to be considered. 
Reports from TRB and FHWA identify several possible surface transportation project objectives including addressing an existing congestion problem, investing to accommodate expected future demand, generating economic development, improving safety in an area, or increasing mobility for disadvantaged citizens. Identifying the intended outcome at the outset leads to analysis focused on alternative projects that can achieve the stated objectives. For example, if the primary objective were to ease congestion, adding a highway lane or new transit option might be reasonable alternatives to consider; however, if the objective were to improve safety in an area, perhaps other alternatives would be more appropriate. Federal Aviation Administration (FAA) guidance on benefit-cost analysis cautions that the analyst should be careful not to identify the objective in a way that prejudges the alternatives for achieving the objective. For example, an objective stated as construction to address an existing congestion problem ignores the possibility of nonbuild alternatives that might improve the use of the existing system. Establishing a realistic base case provides a reference point against which the incremental benefits and costs of alternatives will be measured. According to FAA guidance, the base case is the best course of action that would be pursued in the absence of a major initiative to meet the investment objectives identified. In other words, the base case should represent existing infrastructure, including improvements that are already planned, as well as on-going maintenance. FHWA guidance states that the base case should be realistically defined including, for example, allowances for changes in traffic patterns with congestion. Failure to allow for such changes in the base case can lead to overly pessimistic assessments of the base case in comparison to alternatives. 
Given the project objectives and the base case, analysts should identify the investment alternatives capable of achieving the stated objectives to define the scope of the analysis. In generating the list of possible alternatives, analysts should consider options across different transportation modes. For example, alternatives for a congested metropolitan route could include adding a lane to the existing highway, providing new or better bus service, or building a light rail line. Moreover, passenger alternatives for a congested intercity corridor could include high-speed rail, new or expanded air travel, or a new or expanded highway. In addition to evaluating multiple modes, low-cost, noncapital-intensive alternatives should be considered. These alternatives include Intelligent Transportation Systems (ITS) and demand management approaches. ITS solutions are designed to enhance the safety, efficiency, and effectiveness of the transportation network and are relatively low-cost options for maximizing the capacity of the existing infrastructure. ITS solutions include coordinating traffic signals to improve traffic flow, improving emergency management responses to crashes, and using electronic driver alert boards to notify drivers of congested routes. Similarly, demand management alternatives can relieve congestion without major infrastructure investments. Demand management alternatives are ways of reducing the number of vehicles traveling on a congested route during the most congested times or peak periods. Demand management alternatives encourage drivers to drive during less congested times, or on less congested routes, or to ride together in carpools or vanpools. Charging single-occupancy vehicles a toll during congested times on congested routes, providing free or discounted convenient parking for persons riding in carpools or vanpools, and subsidizing transit usage are possible demand management alternatives. 
Finally, both passenger and freight options for addressing congestion should be considered. Our past work on freight transportation shows that truck use significantly affects highway congestion. For example, officials at the Ports of Los Angeles and Long Beach estimate that truck traffic accounts for about 30 to 60 percent of the total traffic on two particularly congested major highways, which serve as connectors to the two ports. Moreover, independent studies report that shifting greater amounts of freight from highways to rail could relieve highway congestion. Following the identification of alternative projects, analysts should list the relevant impacts of each alternative to ensure that all aspects of a project are considered in the analysis. As previously stated, benefit-cost analysis considers all direct user impacts and externalities, but it does not consider indirect impacts because these are transfers of direct impacts and their inclusion would constitute double counting. Transportation economics research and government agency guidance we reviewed identified the following list of direct user impacts that should be considered for transportation investment decisions: construction, operations and maintenance costs; travel time savings and construction travel time cost; vehicle operating costs; safety improvements; and environmental impacts, such as noise pollution and air pollution. Tolls, fares, or any other user fees should not be included as impacts of the projects, because these are payments made by consumers to receive the benefits already counted in the list above. After identifying the user impacts for each alternative, the analyst must define a single time frame or life cycle for all alternatives over which the benefits and costs will be compared. This element of the analysis is necessary for equal comparison of projects with differing expected future streams of benefits and costs from current investment. 
Typically, a region constructing major infrastructure investments incurs a majority of the costs of the project within the first years of the life cycle and reaps the majority of the benefits later in the life cycle of the project; therefore, the analyst should choose a time frame that allows for the measurement of benefits and costs expected to materialize throughout the useful life of the investment. The impacts of each alternative should be quantified and monetized as benefits and costs to the greatest extent possible to enable the analyst to compare the value of each project to the alternatives. In addition to compiling the obviously quantitative impacts, like construction and operations costs, the analyst must quantify other identified impacts of alternatives, like emissions reduction. The analyst must then convert those values to dollars so the impacts are expressed in common units. Forecasting tools and benefit-cost analysis models facilitate the process of quantifying and monetizing benefits and costs. Forecasting tools predict future behavior of system users, like travel demand and ridership, for the investment alternatives. Values from the forecasts are used as inputs into a larger model that quantifies and monetizes direct user impacts and quantifiable externalities. Therefore, the accuracy of the forecasts directly affects the accuracy of the analysis. Several widely accessible models of highly varying complexities measure and quantify predicted benefits and costs. These models rely on some assumptions, but also require users to enter location and project specific data to generate estimates, which are used to assess the overall net benefit of alternatives. Therefore, the outcome of the analysis depends, in part, on the quality of the model used for calculations of benefits and costs. 
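As a concrete illustration of the monetizing step, the short sketch below converts forecast physical impacts into an annual dollar benefit. All figures (hours saved, value of travel time, vehicle-miles, per-mile operating cost) are hypothetical and are not drawn from this report; an actual analysis would take these values from forecasting models and established valuation guidance.

```python
def monetize_impacts(annual_hours_saved, value_of_time_per_hour,
                     vehicle_miles_reduced, operating_cost_per_mile):
    """Convert forecast physical impacts into an annual dollar benefit.

    Covers two of the direct user impacts named in the text: travel
    time savings and vehicle operating costs. All inputs are
    hypothetical illustrations.
    """
    time_benefit = annual_hours_saved * value_of_time_per_hour
    operating_benefit = vehicle_miles_reduced * operating_cost_per_mile
    return time_benefit + operating_benefit

# Hypothetical forecast: 200,000 hours saved valued at $15 per hour,
# plus 1,000,000 vehicle-miles avoided at $0.50 per mile.
annual_benefit = monetize_impacts(200_000, 15.0, 1_000_000, 0.50)
print(annual_benefit)  # 3500000.0 dollars per year
```

Each impact stream would be monetized this way for every year of the life cycle before being discounted to present values.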
After monetizing the direct user benefits and costs, the analyst converts all values to present dollar values to allow an accurate comparison of projects with different levels of future benefits and costs. The dollar values of the benefits and costs of each alternative cannot simply be summed over the life of the project to calculate the total. Benefits and costs incurred in the future have lower values than those incurred in the present because, in the case of benefits, the benefits cannot be enjoyed now and, in the case of costs, the resources do not need to be expended now. In other words, benefits and costs are worth more if they are experienced sooner because of the time value of money. Therefore, analysts must convert future values into their present equivalents to compare benefits and costs expected in the future with benefits and costs incurred in the present. This conversion requires the use of a discount rate, which represents the interest rate that could be earned on alternative uses of the resources. Researchers explain that the discount rate can have a strong influence on the outcome of the analysis and note that higher discount rates tend to favor short-term projects and lower rates favor long-term projects. Thus, analysts should use care in choosing a discount rate that will not bias the outcome of the analysis and will accurately account for the benefits and costs expected in the future. The Office of Management and Budget (OMB) provides guidance on choosing appropriate discount rates for different types of investments. After all benefits and costs have been discounted to present values, the analyst should evaluate the benefits and costs of each project using a common measure to allow for comparison across different alternatives. Net present value and benefit-cost ratio are two useful measures for project comparison. 
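The discounting step described above can be sketched in a few lines. The 7 percent rate and the $1 million year-10 benefit are hypothetical choices for illustration only; an actual analysis would select the rate under applicable OMB guidance.

```python
def present_value(amount, year, rate):
    """Discount a single future amount back to year-0 dollars."""
    return amount / (1 + rate) ** year

# Hypothetical: a $1,000,000 benefit expected in year 10, discounted at 7%.
pv = present_value(1_000_000, 10, 0.07)
print(round(pv))  # roughly 508,000 in present-value terms
```

Raising the rate shrinks distant values faster, which is why, as the text notes, higher discount rates tend to favor projects whose benefits arrive sooner.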
Net present value is the discounted sum of all benefits less the discounted sum of all costs associated with an alternative and is generally the preferred measure. If the net present value is positive, then the project is economically efficient in that the gainers from the project could potentially compensate those who incur costs and still benefit from the project. That is, the benefits throughout the life cycle of the project exceed the costs incurred in the same time frame. A benefit-cost ratio is the discounted sum of benefits divided by the discounted sum of costs. If the benefit-cost ratio is greater than one, benefits outweigh costs and the project is economically efficient. In essence, the benefit-cost ratio indicates whether $1 invested in one project earns a higher rate of return than $1 invested in a different project. Researchers and government agency guidance caution analysts to assign costs and benefits consistently when calculating benefit-cost ratios because inconsistency can result in incorrect comparisons between alternatives. For example, if maintenance costs are included in the cost component, the denominator of the fraction, for one project, but are netted out of the benefits, the numerator of the fraction, for a different project, the two benefit-cost ratios will not be comparable. Due to the inherent uncertainty in calculating the inputs to benefit-cost analysis, a critical element of investment analysis is assessing the sensitivity of the analysis to changes in the assumptions and forecasts. In addition, uncertainty can also affect the economically suggested choice of the project resulting in the greatest net benefit to society. Several methods, which vary in their complexity, exist for conducting sensitivity analysis including simple sensitivity analysis and Monte Carlo simulation. 
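The caution about assigning costs consistently can be made concrete with a hypothetical project: moving maintenance costs from the denominator of the ratio into netted benefits changes the benefit-cost ratio but leaves net present value unchanged. All dollar figures below are invented for illustration.

```python
# Hypothetical discounted totals for one project, in present-value dollars.
benefits = 12_000_000
capital_costs = 8_000_000
maintenance = 2_000_000

# Net present value is the same regardless of where maintenance sits.
npv = benefits - (capital_costs + maintenance)  # 2,000,000 either way

# Treatment 1: maintenance counted on the cost side of the ratio.
bcr_cost_side = benefits / (capital_costs + maintenance)   # 1.2
# Treatment 2: maintenance netted out of benefits instead.
bcr_netted = (benefits - maintenance) / capital_costs      # 1.25

print(npv, round(bcr_cost_side, 2), round(bcr_netted, 2))
```

Comparing one project's ratio under treatment 1 against another project's ratio under treatment 2 would therefore bias the ranking, which is the inconsistency the guidance warns against.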
Simple sensitivity analysis involves recalculating the net present values or benefit-cost ratios after adjusting uncertain inputs to reflect alternative values, as well as the expected value typically used in the original analysis. Using this approach, the analyst can determine whether or not the alternative would still be economically efficient if the actual values were different from their predicted values. For example, transportation researchers widely accept that ridership forecasts for transit projects can be very uncertain. An analyst using simple sensitivity analysis can determine if the net present value of a transit alternative would still be positive even if ridership in the future were lower than predicted. Monte Carlo simulation or probabilistic-based risk assessment is a more comprehensive and preferred approach to sensitivity analysis. With Monte Carlo simulation, the analyst assesses the probability distribution of each uncertain input and recalculates the benefit-cost analysis multiple times while drawing values that fall within the probability distribution for each of the uncertain inputs. The results are examined in the context of their probability distribution, covering all potential outcomes of the analysis, in addition to reporting the average or other values. This approach allows the analyst to judge alternatives not only on their average net present value, given multiple possible input value combinations, but also on the likelihood that the project will achieve outcomes such as a positive net present value. Real options analysis incorporates uncertainty directly into benefit-cost valuation. It acknowledges and internalizes both the cost of making irreversible investments under uncertain conditions and the value of option-creating actions. This type of analysis incorporates timing of the decision as a factor rather than assuming investments are now or never decisions that cannot be delayed. 
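The Monte Carlo approach described above can be sketched as follows. The project's cost, per-rider benefit, time frame, discount rate, and the assumed ridership distribution are all hypothetical; a real analysis would derive them from forecasting models and assessed probability distributions for each uncertain input.

```python
import random

def npv_given_ridership(annual_riders, benefit_per_rider=5.0,
                        discounted_cost=40_000_000, years=20, rate=0.07):
    """NPV of a hypothetical transit project as a function of ridership."""
    pv_benefits = sum(annual_riders * benefit_per_rider / (1 + rate) ** t
                      for t in range(1, years + 1))
    return pv_benefits - discounted_cost

# Draw ridership from an assumed distribution, recompute NPV each time,
# then report how often the project still clears a positive NPV.
random.seed(0)
draws = [npv_given_ridership(random.gauss(900_000, 200_000))
         for _ in range(10_000)]
share_positive = sum(npv > 0 for npv in draws) / len(draws)
print(f"{share_positive:.0%} of simulated outcomes have a positive NPV")
```

A single recalculation at, say, a pessimistic ridership figure in place of the 10,000 random draws would correspond to the simple sensitivity analysis the text describes first.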
In addition, real options analysis recognizes that a cost is associated with making decisions when the information that decision-makers use as a basis for the decision is uncertain and may change in the future. The analysis attempts to quantify the inherent opportunity cost of making an investment decision. In other words, real options analysis accounts for the lost opportunity to make a different decision at a later time when more or better information is available. For an investment to be advisable under real options analysis, the net present value of the investment must exceed the value of keeping the investment option alive until more certain information is available. While the real options approach is becoming more common in private sector investment decision-making, research suggests that this approach is not widely used in the public sector. Researchers have highlighted several ways that public sector transportation investment decision-makers could use real options analysis. First, decision-makers can use incremental planning and staged implementation of phases of projects to maintain the option to defer a decision and wait for new information or to terminate a partially completed project if new information reveals that the investment is no longer beneficial to society. Decision-makers can also actively create flexible options by taking steps like acquiring a right-of-way but not building until more is known about the potential project, including demand conditions, potential costs, and expected benefits of alternatives. Finally, planners can use options to take incremental actions that increase learning. One study uses the case of San Diego’s conversion of a high-occupancy vehicle (HOV) lane to a high-occupancy toll (HOT) lane as an example of taking incremental action that increases learning. 
By using existing infrastructure and adding a pricing component, decision-makers tested users’ reactions to optional congestion pricing before implementing a congestion-pricing model that would affect all drivers. Finally, after the analysis has been completed and the results have been checked for sensitivity to uncertain inputs, analysts should use the results of the analysis to compare alternatives and identify the project that results in the greatest estimated net social benefit. As stated above, any project that has a positive net present value or benefit-cost ratio greater than one is expected to provide net benefits to society. However, transportation decision-makers have budget constraints and typically cannot implement all projects resulting in net benefits. Rather, they must rank alternatives and identify the best project that can be implemented given the budget constraint. In general, projects with higher net present values or benefit-cost ratios should be chosen over projects with lower net present values or benefit-cost ratios. If projects are not mutually exclusive, then a combination of projects, the total cost of which does not exceed the budget constraint, might lead to the greatest net social benefit. In this case, the decision-maker should examine all feasible combinations of projects, sum the net present values for each combination, and identify the combination that yields the highest total net present value. In addition, according to Executive Order 12893, OMB guidance, and our past research, in the likely event that not all benefits and costs could be quantified and monetized when developing the benefit-cost analysis, the decision-maker should consider the nonquantifiable factors in addition to the numeric results of the analysis when evaluating alternatives. 
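The budget-constrained combination search described above can be sketched as an exhaustive enumeration. The project names, discounted costs, and net present values (in millions of present-value dollars) below are hypothetical.

```python
from itertools import combinations

# Hypothetical candidates: (name, discounted cost, net present value),
# both in millions of present-value dollars.
projects = [("highway lane", 50, 20), ("bus service", 20, 12),
            ("signal timing", 10, 8), ("light rail", 60, 25)]
budget = 80

# Enumerate every feasible combination and keep the highest total NPV.
best_combo, best_npv = (), 0
for r in range(1, len(projects) + 1):
    for combo in combinations(projects, r):
        cost = sum(p[1] for p in combo)
        npv = sum(p[2] for p in combo)
        if cost <= budget and npv > best_npv:
            best_combo, best_npv = combo, npv

print([p[0] for p in best_combo], best_npv)
```

In this hypothetical case, three smaller projects together beat the single highest-NPV project within the budget; with many candidates, a knapsack-style algorithm would replace the brute-force enumeration.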
In addition to those named above, Christine Bonham, Jay Cherlow, Robert Ciszewski, Lindy Coe-Juell, Sarah Eckenrod, Colin Fallon, Scott Farrow, Peter Guerrero, Libby Halperin, Hiroshi Ishikawa, Sara Ann Moessbauer, Stacey Thompson, and Dorothy Yee made key contributions to this report.
Passenger and freight traffic are expected to grow substantially in the future, generating additional congestion and requiring continued investment in the nation's surface transportation system. Over the past 12 years, the federal government has provided hundreds of billions of dollars for investment in surface transportation projects through the Intermodal Surface Transportation Efficiency Act of 1991 and its successor legislation, the Transportation Equity Act for the 21st Century. Reauthorization of this legislation is expected to provide hundreds of billions of dollars more in federal funding for surface transportation projects. For this investment to have the greatest positive effect, agencies at all levels of government need to select investments that yield the greatest benefits for a given level of cost. This report provides information about the processes that state and regional transportation decision-makers use to analyze and select transportation infrastructure investments. GAO identified (1) key federal requirements for planning and deciding on such investments, (2) how benefit-cost analysis facilitates sound decision-making, and (3) other factors that decision-makers consider in evaluating and deciding on investments. Federal requirements specify the overall approach that states and regional organizations should use in planning transportation infrastructure projects, but generally do not specify what analytical tools planners should use to evaluate projects. These key requirements include developing strategic goals and objectives; considering a wide range of environmental and economic factors; preparing long- and short-range plans; and ensuring an inclusive process that involves many stakeholders. The Office of Management and Budget, the Department of Transportation (DOT), and GAO have identified benefit-cost analysis as a tool to help decision-makers identify projects with the greatest net benefits. 
The systematic process of benefit-cost analysis helps decision-makers organize information about, and determine trade-offs between, alternatives. Researchers also acknowledged challenges in applying benefit-cost analysis, including quantifying some benefits and costs, defining the scope of the project, and ensuring the precision of estimates used in the analysis. Ongoing research by DOT and others is aimed at improving and expanding state and regional decision-makers' application of benefit-cost analysis. Many of the transportation planners we interviewed said that factors other than the analyses developed during the planning process often influenced final investment decisions. For example, state and regional decision-makers must consider the structure of federal funding sources. Since federal funding often is tied to a single transportation mode, it may be difficult to finance projects that do not have dedicated funding, such as railroad improvement projects. In addition, decision-makers must ensure that wide-ranging public participation is reflected in their deliberations and that their choices take into account numerous views. In some cases, voter support through referenda is required before a project may proceed or financing can be secured. The physical constraints of an area may also affect investment choices. Difficulties in expanding capacity and limits on existing infrastructure may direct investments to preserving and maintaining existing facilities or improving operations. Finally, multistate transportation corridors present special challenges in coordinating investment decisions.
Before addressing these issues in detail, we would like to review two primary reasons why effective and efficient supply chain management is important for DOD. First, supply support to the warfighter affects readiness and military operations. In fact, the supply chain is a critical link in determining outcomes on the battlefield and can affect the military’s ability to meet national security goals. We previously reported on problems with supply distribution support in Iraq, including shortages of critical supply items and weaknesses in requirements forecasting, asset visibility, and distribution. DOD took steps to address such issues, for example, by establishing a joint deployment and distribution operations center to coordinate the flow of materiel into the theater. Second, given the high demand for goods and services to support ongoing U.S. military operations, the investment of resources in the supply chain is substantial. DOD spends billions of dollars to purchase, manage, store, track, and deliver supplies. It is particularly important that these substantial resources are effectively and efficiently invested in light of the nation’s current fiscal environment. In fact, the Secretary of Defense has recently stated that given the nation’s difficult economic circumstances and fiscal condition, DOD will need to reduce overhead costs and transfer those savings to force structure and modernization priorities. Congressional interest has likewise focused attention on areas within DOD’s logistics portfolio that are in need of improvement. One such area is inventory management. The Fiscal Year 2010 National Defense Authorization Act requires DOD to prepare a comprehensive plan for improving the inventory management systems of the military departments and the Defense Logistics Agency (DLA), with the objective of reducing the acquisition and storage of secondary inventory that is excess to requirements. 
We understand that DOD is finalizing the development of its comprehensive plan and expects to release that plan later this year. As noted earlier, DOD supply chain management has been designated by GAO as a high-risk area. GAO’s high-risk designation is intended to place special focus on programs and functions that need sustained management attention in order to resolve identified problems. We have reported that in order to successfully resolve supply chain management problems, DOD needs to sustain top leadership commitment and long-term institutional support for its strategic planning efforts for supply chain management, obtain necessary commitments for its initiatives from the military services and other DOD components, make substantial progress in implementing improvement initiatives and programs across the department, and demonstrate progress in achieving the objectives identified in supply chain management-related strategic planning documents. We have also encouraged DOD to develop an integrated, comprehensive plan for improving logistics. While we have previously noted progress DOD has made toward improving some aspects of supply chain management, demonstrating sustained improvement has been a continuing challenge due in part to a lack of outcome-oriented performance measures that are consistent across the department and that are linked to focus areas, such as requirements forecasting, asset visibility, and materiel distribution, and related initiatives. In addition, successful resolution of weaknesses in supply chain management depends on improvements in some of DOD’s other high-risk areas. For example, modernized business systems and the related investments in needed information technology are essential to the department’s effort to achieve total asset visibility, an important supply chain management issue. 
Regarding financial management, we have repeatedly reported that weaknesses in business management systems, processes, and internal controls adversely affect not only the reliability of reported financial data but also the management of DOD operations. Such weaknesses have adversely affected the ability of DOD to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, and prevent fraud. DOD’s new Logistics Strategic Plan is intended to support other recent strategic planning efforts in the department, including the completion of the 2010 Quadrennial Defense Review and the publication of the 2009 Strategic Management Plan. The Quadrennial Defense Review is a congressionally mandated report that provides a comprehensive examination of the national defense strategy, force structure, force modernization plans, infrastructure, budget plan, and other elements of defense programs and policies. The review is to occur every 4 years, with a view toward determining and expressing the nation’s defense strategy and establishing a defense program for the next 20 years. Also in response to legislative requirements, DOD issued the Strategic Management Plan in 2008 and updated it in 2009. The Strategic Management Plan serves as DOD’s strategy for improving its business operations, and describes the steps DOD will take to better integrate business with the department’s strategic planning and decision processes in order to manage performance. A key starting point in developing and implementing an effective results- oriented management framework is an agency’s strategic planning effort. Our prior work has shown that strategic planning is the foundation for defining what the agency seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in reaching results-oriented goals and achieving objectives. 
Developing a strategic plan can help clarify organizational priorities and unify the agency’s staff in the pursuit of shared goals. If done well, strategic planning is continuous, provides the foundation for the most important things the organization does each day, and fosters informed communication between the organization and its stakeholders. Combined with effective leadership, strategic planning provides decision makers with a framework to guide program efforts and the means to determine if these efforts are achieving the desired results. The Government Performance and Results Act (GPRA) and associated guidance from the Office of Management and Budget (OMB) require, among other things, that government agencies periodically develop agencywide strategic plans that contain certain necessary elements to be used by the agency and external stakeholders in decision making. Furthermore, recent OMB guidance concerning GPRA-related strategic plans stated that such a strategic plan should also provide sufficient context to explain why specific goals and strategies were chosen. The strategic planning requirements of GPRA and its implementation guidance generally only apply to agencywide strategic plans. While GPRA does not apply to DOD’s Logistics Strategic Plan, our prior work has identified many of GPRA’s requirements as the foundation for effective strategic planning. Our prior work has shown that organizations conducting strategic planning need to develop a comprehensive, results- oriented management framework to provide an approach whereby program effectiveness is measured in terms of outcomes or impact, rather than outputs, such as activities or processes. 
Such a framework includes critical elements such as a comprehensive mission statement, long-term goals, strategies to achieve the goals, use of measures to gauge progress, identification of key external factors that could affect the achievement of goals, a description of how program evaluations will be used, and stakeholder involvement in developing the plan. DOD internally has recognized the importance of these critical elements. For example, the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness directed each of the services to conduct strategic planning for depot maintenance and to submit plans that focus on achieving DOD’s strategy. The services were directed to include in their depot maintenance plans many of the same strategic planning elements just mentioned. In addition, we have reported that a strategic planning process should align lower-level goals and measures with departmentwide goals and measures, assign accountability for achieving results, be able to demonstrate results and provide a comprehensive view of performance, and link resource needs to performance. Further, such a strategic planning process and the resulting plan should set strategic direction, prioritize initiatives and resources, establish investment priorities and guide key resource decisions, and monitor progress through the establishment of performance goals and measures. Finally, we found in previous work that DOD’s prior strategic planning efforts for logistics lacked information necessary to be more useful tools for senior leaders, such as the inclusion of identified logistics problems, performance measures, and a method for integrating plans into existing decision-making processes. Over a number of years prior to the publication of its Logistics Strategic Plan, DOD issued a series of strategic planning documents for logistics and the management of its supply chain. 
These plans have differed in scope and focus, although they have typically included a number of high- level goals and related initiatives. For example, for a period of several years beginning in the mid-1990s, DOD issued a series of strategic plans for logistics. Later, the 2004 DOD Logistics Transformation Strategy attempted to reconcile several of DOD’s ongoing logistics approaches, namely focused logistics, force-centric logistics enterprise, and sense and respond logistics. In 2005, DOD issued the first iteration of its Supply Chain Management Improvement Plan to address some of the systemic weaknesses that were highlighted in our reports. That same year, DOD produced its Focused Logistics Roadmap, which catalogued current (“as is”) efforts and initiatives. Building on the “as is” Focused Logistics Roadmap, DOD recognized the need for a comprehensive, integrated strategy for transforming logistics and released its Logistics Roadmap in July 2008 to provide a more coherent and authoritative framework for logistics improvement efforts, including supply chain management. DOD indicated that the roadmap would be a “living” document and that future updates would incorporate new initiatives and programs, report progress toward achieving logistics capability performance targets, and help connect capability performance targets to current and planned logistics investment for an overarching view of DOD’s progress toward transforming logistics. The roadmap documented numerous initiatives and programs that were then under way and organized these around goals, joint capabilities, and objectives. However, we found that the roadmap was missing information that would make it more useful for DOD’s senior leaders. First, it did not identify the scope of DOD’s logistics problems or gaps in logistics capabilities. Second, it lacked outcome-based performance measures that would enable DOD to assess and track progress toward meeting stated goals and objectives. 
Third, DOD had not clearly stated how it intended to integrate the roadmap into DOD’s logistics decision-making processes or who within the department was responsible for this integration. A comprehensive, integrated strategy that includes these three elements is critical, in part, because of the diffuse organization of DOD logistics, which is spread across multiple DOD components with separate funding and management of logistics resources and systems. Moreover, we stated that without these elements, the roadmap would likely be of limited use to senior DOD decision makers as they sought to improve supply chain management and that DOD would have difficulty fully tracking progress toward meeting its goals. To address these weaknesses, we recommended that DOD include in future updates of its Logistics Roadmap the elements necessary to have a comprehensive, integrated strategy for improving logistics and to clearly state how this strategy would be used within existing decision-making processes. Specifically, we recommended that DOD identify the scope of logistics problems and capability gaps to be addressed through the roadmap and associated efforts; develop, implement, and monitor outcome-focused performance measures to assess progress toward achieving the roadmap’s objectives and goals; and document specifically how the roadmap will be used within the department’s decision-making processes used to govern and fund logistics and who will be responsible for its implementation. DOD officials concurred with our recommendations and stated that they planned to remedy some of these weaknesses in their follow-on efforts to the roadmap. DOD officials subsequently stated that they had begun a series of assessments of the objectives included in the roadmap in order to identify capability gaps, shortfalls, and redundancies and to recommend solutions. 
As part of this assessment process, DOD officials stated that supply, maintenance, deployment, and distribution managers had been tasked with determining which specific outcome-oriented performance metrics could be linked to each of the objectives and goals within the roadmap in order to assess progress toward achieving desired results. DOD officials said that the results of these assessments would be included in the next version of the roadmap, which was scheduled for release in 2009. DOD further stated that a joint Executive Advisory Committee made up of senior leaders responsible for implementing logistics programs and initiatives had been established to guide the roadmap process to ensure that it is a useful tool in decision making. The 2010 Logistics Strategic Plan is DOD’s most recent effort to provide high-level strategic direction for future logistics improvement efforts, including those in the area of supply chain management. According to DOD officials, the plan serves as an update to the 2008 Logistics Roadmap. They further explained that the plan is an effort to identify the enduring and ongoing logistics efforts within the department and provide a good balance between the need for specificity and generality, without the level of detail included in the prior roadmap and with a broader scope than that provided in the Supply Chain Management Improvement Plan. The Logistics Strategic Plan articulates the department’s logistics mission and vision. The plan further states that to continue improving logistics support to the warfighter, it is essential that all elements of DOD’s logistics community take steps to better integrate logistics with strategic planning and decision processes and to manage logistics performance. To drive the department’s logistics enterprise toward that end, the plan includes goals, key initiatives, and some information on how DOD plans to track progress, including general performance measures. 
Through the inclusion of these elements, the plan provides unifying themes for improvement efforts. The Logistics Strategic Plan reiterates high-level department goals drawn from both the Quadrennial Defense Review and the Strategic Management Plan. For example, the Logistics Strategic Plan incorporates two of the Strategic Management Plan’s business priorities: support contingency business operations to enhance support to the deployed warfighter and reform the department’s acquisition and support processes. In addition, the Logistics Strategic Plan contains four logistics goals:

Goal 1: Provide logistics support in accordance with warfighter requirements.

Goal 2: Institutionalize operational contract support.

Goal 3: Ensure supportability, maintainability, and costs are considered throughout the acquisition cycle.

Goal 4: Improve supply chain processes, synchronizing from end-to-end and adopting challenging but achievable standards for each element of the supply chain.

The plan lists 30 key initiatives related to the four logistics goals. According to a senior DOD official, the initiatives were selected based on the determination of officials within the Office of the Assistant Secretary of Defense for Logistics and Materiel Readiness and were subsequently provided to the military services for review. In our review of the plan, we noted that key initiatives appear to focus on issues that we have identified as needing management attention. For example, our prior work on warfighter and logistics support in Iraq and Afghanistan has identified issues that directly relate to initiatives that support Goal 1—provide logistics support in accordance with warfighter requirements. 
We recently testified that DOD has taken steps to improve distribution of materiel to deployed forces in Afghanistan; however, we found several challenges that included difficulties with transporting cargo through neighboring countries and around Afghanistan, limited airfield infrastructure, and lack of full visibility over cargo movements. The Logistics Strategic Plan contains an initiative to facilitate logistics support for Afghanistan, including interagency coordination and development of transportation and distribution alternatives, as needed. In addition, our work has also raised concerns about the lack of risk assessments conducted for DOD’s Civil Reserve Air Fleet program, and DOD’s management of the program has not provided air carrier participants with a clear understanding of some critical areas of the program. DOD’s Logistics Strategic Plan includes a related initiative. With regard to Goal 2—institutionalize operational contract support—we have issued reports over a period of many years on progress and problems with contract support during contingency operations. We testified in March 2010 that DOD had taken steps to institutionalize operational contract support by appointing a focal point to lead efforts, issuing guidance, and beginning to determine its reliance on contractors; but we also identified ongoing challenges associated with contractor support. These challenges include providing adequate oversight and management of contractors, training personnel on how to work effectively with contractors during operations, ensuring proper screening of local and third-country nationals, compiling reliable data on the number of contractors supporting U.S. forces in contingencies, and identifying contractor requirements. Our prior work related to Goal 3—ensure supportability, maintainability, and costs are considered throughout the acquisition cycle—includes reviews of weapon system life cycle management, depot maintenance, and sustainment costs. 
For example, while we have noted that DOD has placed increased emphasis on life cycle management, we reported recently that DOD lacks key information on weapon system operating and support costs and therefore may not be well-equipped to analyze, manage, and ultimately reduce these costs. Although all four goals of the Logistics Strategic Plan have aspects relating to supply chain management, Goal 4 explicitly addresses the need to improve supply chain processes. DOD identifies four success indicators and three performance measures for this goal. The success indicators address both the efficiency and effectiveness of DOD’s supply chain management. For example, one success indicator states that enterprisewide solutions for the management of inventories and services will optimize total supply chain costs, and another states that effective demand planning will increase forecast accuracy and reduce costs. The performance measures, which are listed separately from the success indicators, include the percent of negotiated time definite delivery standards met globally (by combatant command), the percent of actual demand compared to forecasted demand, and number of days of customer wait time (time from submission of order to receipt of order) by lift area. The Logistics Strategic Plan lists 12 key initiatives that support Goal 4. The key initiatives focus on, among other issues, life cycle forecasting, the distribution process, automatic identification technology, and the department’s human capital strategy for logistics personnel. We have reported on some of these issues. For example, we reported in 2009 that DOD has taken steps to implement automatic identification technologies, such as item unique identification and passive radio frequency identification, to identify and track equipment and supplies, but has experienced difficulty in fully demonstrating return on investment to the military services responsible for implementation. 
The Logistics Strategic Plan also includes some information on how DOD plans to track progress. The plan lists success indicators and performance measures under each goal, and it states that the plan will be implemented by following the performance management framework found in the Strategic Management Plan. This framework contains six steps: plan, set targets, cascade measures, align processes, assess and report, and correct. By modeling the performance management framework of the Logistics Strategic Plan after that of the broader Strategic Management Plan, DOD officials expect that this alignment will naturally have a complementary, behavior-shaping influence on organizations subject to both plans.

Although the Logistics Strategic Plan contains some key elements of an effective strategic plan and provides unifying themes for improvement efforts, it lacks detailed information regarding strategies and time frames that would help to specify how and when goals will be achieved. In our review of Goal 4, which focuses on supply chain processes, we found that detailed information was lacking in several areas, which may limit the plan’s usefulness as a tool for decision makers:

Performance measurement information. While the plan presents three performance measures associated with Goal 4, it lacks baseline or trend data for past performance, measurable target-level information, or time frames for the achievement of goals or completion of initiatives. These are among the characteristics of successful performance measures that we have identified in our prior work. Such elements are needed to monitor the progress of implementation efforts and to determine how far DOD and its components must go to achieve success. In addition, there is no clear linkage between the three measures and the success indicators or key initiatives under Goal 4. A senior DOD official stated that the performance measures in Goal 4 were included to present information about the overall functioning of the supply chain rather than specific improvement efforts.

Key concepts. Some concepts in the plan express broad, positive ideas but are not fully defined. For example, Goal 4 states that processes should be “synchronized end-to-end,” and a success indicator states that supply chain costs should be “optimized.” The plan, however, does not define what aspects of the supply chain need further synchronization, how costs should be further optimized, or how DOD will gauge progress in these efforts.

Problems and capability gaps. The plan does not include a discussion about overall departmentwide or DOD component-specific logistics problems or challenges, nor does it indicate the extent or severity of any identified capability gaps. Such information is necessary to establish a clear and common understanding of what problems and gaps the plan is trying to address. For example, the plan does not discuss logistics problems encountered during operations in Iraq and Afghanistan. We raised a similar concern about the 2008 Logistics Roadmap.

Resource needs. The plan does not discuss resources needed to implement improvement efforts. As noted, an effective strategic planning process should be able to link resource needs to performance, prioritize initiatives and resources, establish investment priorities, and guide key resource decisions.

In the absence of more detailed information in these areas, the usefulness of the Logistics Strategic Plan for decision making may be limited. Measuring performance, for example, allows for tracking progress toward goals and gives managers crucial information on which to base their decisions. 
In addition, if the plan included information on problems, capability gaps, and resource needs, managers could use the plan as a basis for establishing priorities for formulating, funding, and implementing corrective actions. DOD has recognized the need to include some of this information, and the plan states DOD’s intent to establish baseline performance and then measure that performance against interim targets through an annual assessment process. Although the Logistics Strategic Plan is linked to some broader strategic plans, it does not show explicit links with other strategic plans of supply chain or logistics defense components, and the link between that plan and some major logistics activities is not clear. These plans and activities could have a major role in shaping future logistics capabilities and functions. Some DOD components have issued their own strategic plans, but the linkages between the logistics-related issues in those plans and the Logistics Strategic Plan are not transparent. DOD states in the Logistics Strategic Plan that the combatant commands, military departments, and defense agencies should review and revise their respective strategic plans and associated goals, objectives, measures, and targets to reflect the Logistics Strategic Plan’s broader priorities. Moreover, DOD indicates that logistics leaders at the component level may find it necessary to realign operations and organizational structures to better integrate functional activities with larger end-to-end processes. However, the mechanism for ensuring that needed changes are made is not specified. Further, the plan does not reflect some activities and information that could affect supply chain management. 
For example, the military services have ongoing supply chain management improvement efforts under way; however, there is no explicit mention of these service-level efforts or goals, initiatives, or measures, even though the services have important responsibilities for carrying out logistics and supply chain functions. In addition, officials from various components stated that the Joint Supply Joint Integrating Concept, co-led by the Joint Staff and DLA, is a major ongoing effort. However, this concept is not discussed in the Logistics Strategic Plan. The purpose of this concept is to guide development and employment of future joint supply capabilities. It is not clear how the Logistics Strategic Plan will be used within the existing logistics governance framework to assist decision makers and influence resource decisions and priorities. For example, the plan states that the Joint Logistics Board and executive-level functional logistics governance bodies play critical roles in providing oversight and guidance to implementation of the Logistics Strategic Plan. While the Joint Logistics Board and other bodies may play critical roles in DOD’s supply chain management improvement efforts, their roles are not defined in the plan. In addition, the organizations responsible for key initiatives included in the plan are not identified. Similarly, the plan does not clearly define how oversight of plan implementation will occur. The plan briefly mentions the development of a Logistics Strategic Management Report that, along with a management dashboard of measures maintained by the Under Secretary of Defense for Acquisition, Technology and Logistics, will be used to report progress. However, the specific process or responsibilities for ensuring that corrective actions are taken in response to underperformance are not detailed in the plan. 
DOD officials stated that corrective actions are the responsibility of process or activity owners, although the Logistics Strategic Plan itself assigns the responsibility to “implement corrective actions” to the Assistant Secretary of Defense for Logistics and Materiel Readiness. In its description of performance management, the plan states that accountable individuals will identify and implement corrections. Lastly, budget development is an important aspect of the existing governance framework, yet DOD has not shown how the plan will be used to help shape logistics budgets developed departmentwide or by individual components.

In conclusion, strategic plans need to remain at a high enough level to provide a clear vision and direction for improvement, but without more specific information in the Logistics Strategic Plan, it will be difficult for DOD to demonstrate progress in addressing supply chain management problems and provide Congress with assurance that the DOD supply chain is fulfilling the department’s goal of providing cost-effective joint logistics support for the warfighter. Mr. Chairman, this concludes our prepared remarks. We would be happy to answer any questions you or other Members of the Subcommittee may have at this time.

For further information regarding this testimony, please contact Jack E. Edwards at (202) 512-8246 or [email protected] or William M. Solis at (202) 512-8365 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making contributions to this testimony include Tom Gosling, Assistant Director; Jeffrey Heit; Suzanne Perkins; Pauline Reaves; and William Varettoni.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense's (DOD) management of its supply chain network is critical to supporting military forces in Iraq, Afghanistan, and elsewhere and also represents a substantial investment of resources. As a result of weaknesses in DOD's management of supply inventories and responsiveness to warfighter requirements, supply chain management is on GAO's list of high-risk federal government programs and operations. In July 2010, DOD issued a new Logistics Strategic Plan that represents the department's current vision and direction for supply chain management and other logistics areas. Today's testimony draws from GAO's prior related work and observations from an ongoing review of DOD supply chain management, and, as requested, will (1) describe DOD's prior strategic planning efforts in the area of logistics, (2) highlight key elements in the new Logistics Strategic Plan, and (3) discuss opportunities for improvement in future iterations of this plan. In conducting its ongoing audit work, GAO reviewed the Logistics Strategic Plan, compared elements in the plan with effective strategic planning practices, and met with cognizant officials from DOD, the military services, and other DOD components as appropriate. Prior to the publication of its new Logistics Strategic Plan, DOD issued a series of strategic planning documents for logistics over a period of several years. In 2008, DOD released its Logistics Roadmap to provide a more coherent and authoritative framework for logistics improvement efforts, including supply chain management. While the roadmap discussed numerous ongoing initiatives and programs that were organized around goals and joint capabilities, it fell short of providing a comprehensive, integrated strategy for logistics. GAO found, for example, that the roadmap did not identify gaps in logistics capabilities and that DOD had not clearly stated how the roadmap was integrated into DOD's logistics decision-making processes. 
GAO's prior work has shown that strategic planning is the foundation for defining what an agency seeks to accomplish, identifying the strategies it will use to achieve desired results, and then determining how well it succeeds in reaching results-oriented goals and achieving objectives. DOD said that it would remedy some of the weaknesses GAO identified in the roadmap. The July 2010 Logistics Strategic Plan, which updates the roadmap, is DOD's most recent effort to provide high-level strategic direction for future logistics improvement efforts, including those in the area of supply chain management. The plan provides unifying themes for improvement efforts, for example, by including a logistics mission statement and vision for the department, and it presents four goals for improvement efforts with supporting success indicators, key initiatives, and general performance measures. One goal focuses specifically on supply chain processes. The plan is aligned to and reiterates high-level departmentwide goals drawn from both the 2010 Quadrennial Defense Review and the 2009 Strategic Management Plan for business operations. Key initiatives in the plan appear to focus on issues that GAO has identified as needing management attention. While the Logistics Strategic Plan contains some of the elements necessary for strategic planning, it lacks some detailed information that would benefit decision makers and guide DOD's logistics and supply chain improvement efforts. The plan lacks specific and clear performance measurement information (such as baseline or trend data for past performance, measurable target-level information, or time frames for the achievement of goals or completion of initiatives), definition of key concepts, identification of problems and capability gaps, and discussion of resources needed to achieve goals. 
Further, linkages to other plans and some key related activities under way within logistics are unclear, and it is similarly unclear how the plan will be used within the existing governance framework for logistics. Without more specific information in the Logistics Strategic Plan, it will be difficult for DOD to demonstrate progress in addressing supply chain management problems and provide Congress with assurance that the DOD supply chain is fulfilling the department's goal of providing cost-effective joint logistics support for the warfighter.
CBP’s U.S. Border Patrol is the uniformed enforcement division responsible for border security between designated official ports of entry into the country. The Border Patrol reports that its priority mission is to prevent terrorists and terrorist weapons, including weapons of mass destruction, from entering the United States. In addition, the Border Patrol has a traditional mission of preventing illegal aliens, smugglers, narcotics, and other contraband from crossing the border between the ports of entry. To carry out its missions, the Border Patrol had a budget of $3.5 billion in fiscal year 2009 to establish and maintain operational control of the U.S. border. As of June 2009, the Border Patrol had 19,354 agents nationwide, an increase of 57 percent since September 2006. Of these agents, about 88 percent (17,011) were located in the nine Border Patrol sectors along the southwest border. About 4 percent of the Border Patrol’s agents in these sectors were assigned to traffic checkpoints, according to the Border Patrol. Despite efforts to enhance border security in recent years, DHS reports show that significant illegal activity continues to cross the border undetected. At the ports of entry, CBP has both increased training for agents and enhanced technology. However, the DHS Annual Performance Report for fiscal years 2008-2010 sets a goal for detecting and apprehending about 30 percent of major illegal activity at ports of entry in 2009, indicating that 70 percent of criminals and contraband may pass through the ports and continue on interstates and major roads to the interior of the United States. Between the ports of entry, CBP is implementing the Secure Border Initiative (SBI), a multiyear, multibillion- dollar program aimed at securing U.S. borders and reducing illegal immigration through a comprehensive border protection system. 
Along the southwest border, overall Border Patrol apprehensions of illegal aliens have declined over the past 3 years, from nearly 1.1 million in fiscal year 2006 to 705,000 in fiscal year 2008. This decreasing pattern was reflected in all sectors except San Diego, which showed a steady increase across these years, as shown in figure 1. As figure 1 also shows, the Tucson sector continues to have the largest number of apprehensions compared to other sectors along the southwest border. Border Patrol officials stated that targeted enforcement efforts in other Border Patrol sectors in previous years caused a shift in illegal cross-border activity to the Tucson sector. Checkpoints are the third layer in the Border Patrol’s three-tiered border enforcement strategy. The other two layers are located at or near the border and consist of line watch and roving patrol. According to the Border Patrol, the majority of Border Patrol agents are assigned to line watch operations at the border, where they maintain a high profile and are responsible for deterring, turning back, or arresting anyone they encounter attempting to illegally cross the border into the United States. Roving patrol operations consist of smaller contingents of agents deployed behind the line watch to detect and arrest those making it past the first layer of defense in areas away from the immediate border. Traffic checkpoints are located on major U.S. highways and secondary roads, usually 25 to 100 miles inland from the border. This places them far enough inland to detect and apprehend illegal aliens, smugglers, and potential terrorists attempting to travel farther into the interior of the United States after evading detection at the border, yet close enough to the border to potentially control access to major population centers. The Border Patrol operates two types of checkpoints—permanent and tactical—that differ in terms of size, infrastructure, and location.
While both types of checkpoints are generally operated at fixed locations, permanent checkpoints—as their name suggests—are characterized by their brick-and-mortar structure, which may include off-highway covered lanes for vehicle inspection and several buildings, including those for administration, detention of persons suspected of smuggling or other illegal activity, and kennels for canines used in the inspection process (see fig. 2). Permanent checkpoints are equipped with technology and computers connected to national law enforcement databases to enhance the ability of agents to identify suspects, research criminal histories, and cross-check terrorist watch lists. Permanent checkpoints generally have electricity, communication towers, and permanent lighting to enhance operations at night and in poor weather conditions. These facilities also offer greater physical safety to agents and the public—particularly when they are located off-highway—by virtue of protective concrete barriers separating agents from vehicle traffic, and better signage and lighting. Permanent checkpoints also have assets to help lessen the chance that illegal aliens and smugglers will be able to successfully bypass the checkpoint to avoid detection. These assets include remote video surveillance, electronic sensors, and agent patrols in the vicinity of the checkpoints, which may also include horse patrols and all-terrain vehicles. There are 32 permanent checkpoints along the southwest border, in eight of the nine Border Patrol sectors, as shown in figure 3. Of the nine sectors, only the Tucson sector does not have permanent checkpoints, instead operating tactical checkpoints. Tactical checkpoints are also operated at a fixed location but do not have permanent buildings or facilities, as shown in figure 4.
One of the intents of tactical checkpoints is to support permanent checkpoints by monitoring and inspecting traffic on secondary roads that the Border Patrol determined are likely to be used by illegal aliens or smugglers to evade apprehension at permanent checkpoints. Tactical checkpoint infrastructure may consist of a few Border Patrol vehicles, used by agents to drive to the location; orange cones to slow down and direct traffic; portable water supply; a cage for canines (if deployed at the checkpoint); portable rest facilities; and warning signs. In general, tactical checkpoints are intended to be set up for short-term or intermittent use, and open and close based on intelligence on changing patterns of smuggling and routes used by illegal aliens. As a result, the number of tactical checkpoints in operation can change on a daily basis. Thirty-nine tactical checkpoints were operational at some point in fiscal year 2008 on the southwest border. Border Patrol agents at checkpoints have legal authority that agents do not have when patrolling areas away from the border. The United States Supreme Court ruled that Border Patrol agents may stop a vehicle at fixed checkpoints for brief questioning of its occupants even if there is no reason to believe that the particular vehicle contains illegal aliens. The Court further held that Border Patrol agents “have wide discretion” to refer motorists selectively to a secondary inspection area for additional brief questioning. In contrast, the Supreme Court held that Border Patrol agents on roving patrol may stop a vehicle only if they have reasonable suspicion that the vehicle contains aliens who may be illegally in the United States—a higher threshold for stopping and questioning motorists than at checkpoints. The constitutional threshold for searching a vehicle is the same, however, and must be supported by either consent or probable cause, whether in the context of a roving patrol or a checkpoint search. 
The Tucson sector is the only sector along the southwest border without permanent checkpoints. Although other sectors along the southwest border deploy a combination of permanent and tactical checkpoints, the Tucson sector has only tactical checkpoints that operate from fixed locations. Legislation effectively prohibited the construction of permanent checkpoints in the Tucson sector, beginning in fiscal year 1999. Specifically, the Omnibus Consolidated and Emergency Supplemental Appropriations Act, 1999, stated that “no funds shall be available for the site acquisition, design, or construction of any Border Patrol checkpoint in the Tucson sector.” The effect of this legislative language was that no permanent checkpoints could be planned or constructed in this sector, which had no permanent checkpoints when the prohibition took effect. Subsequent appropriations acts carried this construction prohibition forward through fiscal year 2006. Furthermore, during fiscal years 2003 through 2006, the Border Patrol was subject to an additional appropriations restriction that required it to relocate checkpoints in the Tucson sector on a regular basis. Beginning in fiscal year 2007, the appropriations restrictions that applied to checkpoints in the Tucson sector did not appear in DHS’s annual appropriations acts. In response, the Border Patrol fixed the position of the I-19 checkpoint at kilometer post (KP) 42, near Amado, Arizona. Although the I-19 checkpoint has been operating since November 2006 at this fixed location, the checkpoint lacks permanent infrastructure and the associated benefits. For example, the Border Patrol does not have the facilities to detain apprehended illegal aliens at or near the checkpoint or the access to national databases to determine whether apprehended individuals are wanted criminals or potential terrorists. 
The facility also lacks protective concrete barriers separating agents from vehicle traffic and a canopy to protect agents and canines from exposure to the elements while conducting inspections, as shown in figure 5. The Border Patrol has developed plans to construct a permanent checkpoint on I-19, but the House Committee on Appropriations instructed the Border Patrol to first take some interim steps. Specifically, in the House report accompanying DHS’s appropriations bill for fiscal year 2009, the committee instructed the Border Patrol not to finalize planning for the design and location of a permanent checkpoint on I-19 until it first establishes and evaluates the effectiveness of an upgraded interim checkpoint. According to Border Patrol officials, the upgraded interim checkpoint will have a canopy, a third inspection lane, and an expanded secondary inspection area, among other improvements. In addition, the committee also told the Border Patrol to consider the findings from this GAO study in its planning efforts. The Border Patrol expects the upgraded interim checkpoint to be completed by May 2010. Tucson sector officials estimate that constructing the upgraded interim checkpoint will cost approximately $1.5 million and constructing the permanent I-19 checkpoint will cost approximately $25 million. Checkpoint operations have contributed to furthering the Border Patrol’s mission to protect the border, and have also contributed to protection efforts of other federal, state, and local law enforcement agencies. However, Border Patrol officials have stated that additional canines, non-intrusive inspection technology, and staff are needed to increase checkpoint effectiveness. Border Patrol officials stated that they are taking steps to increase these resources at checkpoints across the southwest border.
Checkpoints contribute to the Border Patrol’s mission to protect the nation from the impact of contraband illegally transported across the border, as well as the impact of illegal aliens, some of whom may have ties to organized crime or countries at higher risk of having groups that sponsor terrorism. Border Patrol data show that checkpoints assisted federal efforts to disrupt the supply of illegal drugs. In fiscal year 2008, over 3,500 of the almost 10,100 drug seizures by the Border Patrol along the southwest border occurred at checkpoints. With a relatively small allocation of agents—about 4 percent, according to Border Patrol officials—checkpoints accounted for about 35 percent of Border Patrol drug seizures along the southwest border. Checkpoint seizures included various types of illegal drugs. For example, the Tucson sector checkpoint on I-19 seized 3,200 pounds of marijuana, with an estimated street value of $2.6 million, in a single event in June 2009. Additionally, the Laredo sector checkpoint on I-35 seized almost 240 pounds of cocaine with an estimated street value of $7.6 million in a single event in March 2009. Overall, the number of drug seizures at southwest border checkpoints increased slightly from 3,460 in fiscal year 2007 to 3,540 in fiscal year 2008 (an increase of about 2 percent), while total Border Patrol seizures decreased slightly, from 10,285 to 10,065 (a decrease of about 2 percent). In two sectors, however, seizures at checkpoints increased substantially, as shown in figure 6. Specifically, drug seizures at San Diego sector checkpoints increased by 93 percent from fiscal year 2007 to 2008, while drug seizures at Yuma sector checkpoints increased by 73 percent. Yuma sector checkpoints also had more than twice the number of seizures compared to other individual sectors.
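The percentage figures cited above follow directly from the raw seizure counts. As a purely illustrative check of the arithmetic (the helper function below is ours, not part of any Border Patrol system):

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new, rounded to one decimal place."""
    return round((new - old) / old * 100, 1)

# Drug seizures at southwest border checkpoints, fiscal year 2007 -> 2008
checkpoint_change = pct_change(3460, 3540)

# Checkpoint share of all southwest border Border Patrol seizures in fiscal year 2008
checkpoint_share = round(3540 / 10065 * 100)

print(checkpoint_change)  # 2.3 -> the "about 2 percent" increase
print(checkpoint_share)   # 35  -> the "about 35 percent" share
```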
According to San Diego sector officials, the increase in seizures at San Diego sector checkpoints can be attributed to a number of factors, including a 78 percent increase in the operational hours of sector checkpoints; a 123 percent increase in sector manpower; the use of an additional inspection lane during peak traffic times at the checkpoint on I-8, rather than allowing traffic to pass without inspection; and increased infrastructure (fencing, light poles, remote video surveillance system) in the western corridor of the sector, which may have pushed traffic east toward the sector checkpoints. Yuma sector officials attributed the increase in Yuma sector checkpoint seizures to factors including increases in tactical infrastructure and technology at the border, which have allowed the sector to move more agents and canines to sector checkpoints. Checkpoints have also contributed to apprehensions of illegal aliens. Nearly 17,000 illegal aliens were apprehended at checkpoints, or 2 percent of the more than 705,000 total Border Patrol apprehensions along the southwest border in fiscal year 2008. Checkpoint apprehensions ranged from single individuals to large parties of illegal aliens led by “coyotes.” For example, during our visit to the San Diego sector in October 2008, we observed the apprehension of an illegal alien at a sector checkpoint who was hidden beneath the trunk floor of a passenger vehicle. More recently, in a single event in April 2009, agents at the Laredo sector checkpoint on I-35 found 13 illegal aliens concealed in a tractor-trailer attempting to pass through the checkpoint. The illegal aliens and the driver of the tractor-trailer were processed for prosecution. Overall, apprehensions at checkpoints decreased from fiscal year 2007 to 2008, and at a greater rate than for other Border Patrol activities.
During this time frame, the number of apprehensions at all southwest Border Patrol checkpoints decreased by 26 percent (from 22,792 to 16,959), while apprehensions for other Border Patrol activities along the southwest border decreased by 18 percent (from 858,638 to 705,005). In one sector, however, checkpoint apprehensions increased from fiscal year 2007 to 2008, as shown in figure 7. Tucson sector checkpoint apprehensions increased by 28 percent from fiscal year 2007 to 2008, although the total number of checkpoint apprehensions remained higher in the San Diego, Laredo, and Rio Grande Valley sectors. Border Patrol officials stated that Tucson sector checkpoint apprehensions increased because the sector maintained nearly full-time operations at all sector checkpoints during fiscal year 2008. Additionally, the Border Patrol increased the number of operational checkpoints in the sector from 10 in fiscal year 2007 to 13 in fiscal year 2008. Border Patrol officials said that apprehensions decreased in other sectors in part due to the deterrent effect of increased Border Patrol presence and infrastructure, and initiatives to criminally prosecute illegal aliens. For example, Laredo sector officials said that checkpoint apprehensions decreased by nearly half from fiscal year 2007 to 2008 due to the following contributing factors: Increased staff. The number of Border Patrol agents in the Laredo sector increased from approximately 1,200 agents in fiscal year 2007 to approximately 1,636 agents in fiscal year 2008. In addition, Operation Jump Start, which ended in July 2008, provided 286 National Guard soldiers to support Border Patrol operations in the sector, with approximately 36 deployed to support checkpoint operations. These soldiers were placed in areas highly visible to the checkpoints which, along with increased Border Patrol agents, created a deterrent to illegal activity. Improved infrastructure and technology. 
Deterrence and detection capabilities increased in the Laredo sector through improved traffic checkpoint technology, cameras, license plate readers, and vehicle and cargo inspection systems (VACIS). In addition, fiscal year 2007 was the first full fiscal year in which the new state-of-the-art checkpoint on I-35 was operational. Border Patrol officials believe that human and narcotics smugglers rerouted their cargo to other locations due to the deterrent effect of the new checkpoint. Increased prosecutions. At the beginning of fiscal year 2008, the Laredo sector implemented a prosecution initiative—known as Operation Streamline—to prosecute and remove all violators charged with illegal entry in targeted areas in the sector. Although sector checkpoints were not in these targeted areas, sector officials reported that this zero tolerance policy resulted in a higher prosecution rate in fiscal year 2008, providing a deterrent to illegal aliens across the sector. Checkpoints also help screen for individuals who may have ties to terrorism. CBP reported that in fiscal year 2008, three individuals encountered by the Border Patrol at southwest border checkpoints were identified as persons linked to terrorism. In addition, the Border Patrol reported that in fiscal year 2008 checkpoints encountered 530 aliens from special interest countries, which are countries the Department of State has determined to represent a potential terrorist threat to the United States. While people from these countries may not have any ties to illegal or terrorist activities, Border Patrol agents detain aliens from special interest countries if they are in the United States illegally, and agents report these encounters to the local Sector Intelligence Agent, the Federal Bureau of Investigation (FBI) Joint Terrorism Task Force, U.S. Immigration and Customs Enforcement (ICE) Office of Investigations, and the CBP National Targeting Center.
For example, according to a Border Patrol official in the El Paso sector, a checkpoint stopped a vehicle and questioned its three Iranian occupants, determining that one of those occupants was in the United States illegally. The individual was detained and turned over to U.S. Immigration and Customs Enforcement for further questioning. Federal, state, and local law enforcement officials from the five sectors we visited told us that Border Patrol checkpoints enhance their operations and mission achievement. For example, federal Drug Enforcement Administration (DEA) officials stated that in addition to individual drug seizures, checkpoints supported DEA goals to disrupt and dismantle drug smuggling operations by gathering intelligence from captured drug smugglers turned over to DEA, helping to identify patterns in smugglers’ routes of ingress to the United States, and increasing smuggling costs by forcing the use of increasingly sophisticated methods of concealment to evade detection. Checkpoints provided benefits to state and local law enforcement officials, including the identification and detention of criminals who were attempting to evade arrest by state highway patrol, city police, or county sheriffs, and providing other services in rural areas with sparse law enforcement presence. For example, Border Patrol agents at the I-5 checkpoint in San Clemente, California, referred a vehicle with two men to secondary inspection because the men were acting suspiciously. Upon inspection, agents found a small quantity of marijuana and methamphetamine, a large quantity of cash, and a handwritten demand note. The men and evidence were turned over to the local sheriff who determined that the men had robbed a local pharmacy and were primary suspects in another armed robbery. 
In terms of other services, several state and local law enforcement officials we met with said that checkpoint personnel could respond more quickly to highway accidents and provide access to detention facilities for transfer of illegal aliens captured by local authorities. For example, a sheriff responsible for law enforcement near the U.S. Route 77 checkpoint in Border Patrol’s Rio Grande Valley sector reported that the Border Patrol regularly provides assistance and backup to his office, such as responding to highway accidents or other incidents, because he often has only one deputy on duty to cover a large geographic area. Additionally, this same sheriff reported that if he apprehends an illegal alien, he turns the person over to the Border Patrol agents at the nearby checkpoint for processing and detention. Border Patrol guidance and officials from five sectors we visited identified operational requirements and resources that are important for effective and efficient checkpoint performance, including (1) continuous operation, (2) full-time canine inspection capability, (3) non-intrusive inspection technology, and (4) number and experience of checkpoint staff. While most permanent checkpoints were operational nearly 24 hours per day in fiscal year 2008, Border Patrol officials have stated that additional canines, non-intrusive inspection technology, and staff are needed to increase checkpoint effectiveness. According to the Border Patrol, operating checkpoints continuously—that is, 24 hours a day, 7 days a week—is key to effective and efficient checkpoint performance. Keeping checkpoints operational is important because smugglers and illegal aliens closely monitor potential transit routes and adjust their plans to ensure the greatest chance of success. For example, a 1995 study of checkpoint operations in the San Diego sector by the former U.S. 
Immigration and Naturalization Service showed that when the checkpoint on I-5 was closed, apprehensions at the nearby and operational I-15 checkpoint fell sharply—there was a 50 percent decline in 1 month. According to the study, this decline resulted from illegal aliens choosing to travel through the closed checkpoint on I-5 instead of the operational checkpoint on I-15. Recent testimony before Congress by the Arizona Attorney General discussed the sophisticated surveillance and communication technology currently used by smugglers. Such technology could allow for immediate notification of security vulnerabilities, such as a checkpoint closure. Tucson sector Border Patrol officials and the Assistant Special Agent in Charge from DEA’s Tucson District Office explained that smugglers of humans and drugs, often sponsored by organized crime, store loads of people or drugs in “stash houses” after illegally crossing the border until transit routes are clear. As soon as a checkpoint is closed, the people or drugs in the stash houses are moved through the checkpoint. Border Patrol data showed that in fiscal year 2008 most of the 32 permanent checkpoints were near continuous operation, with 25 having operated 22 hours or more, and 3 having operated between 20 and 22 hours per day, on average. Those operated most frequently include permanent checkpoints located off highway with enhanced weather infrastructure in place. For example, the U.S. Route 77 checkpoint in Border Patrol’s Rio Grande Valley sector was operational almost 24 hours per day on average in fiscal year 2008, closing only for a total of 22 hours because of inclement weather related to Hurricane Dolly. The remaining four permanent checkpoints were operational less than 7 hours per day on average in fiscal year 2008. These included two checkpoints with on-highway inspection lanes that were located in high traffic areas and two checkpoints that were no longer used because they were relocated to other locations. 
For example, the I-5 and I-15 checkpoints in the San Diego sector have on-highway inspection lanes, as shown in figure 8, and the high traffic volume passing through these checkpoints overwhelms the capability to perform checkpoint inspections more than 2 hours per day, on average, without causing significant traffic congestion and safety concerns. The I-8 checkpoint in Yuma sector was relocated as a new tactical checkpoint 60 miles east of the location where the former permanent checkpoint was located, due to encroachment of developers and increasing freeway traffic. Finally, the Oak Grove checkpoint in the San Diego sector was operational for only 26 hours in fiscal year 2008 because checkpoint operations were shifted from the Oak Grove checkpoint to other checkpoints farther east, as well as roving patrols, to increase enforcement in those targeted areas, according to sector officials. Border Patrol data also showed that in general tactical checkpoints are operated much less frequently than permanent checkpoints, a median of less than 2 hours per day for tactical checkpoints compared to a median of over 23 hours per day for permanent checkpoints. Border Patrol officials said that safety conditions and staff shortages were the primary reasons for closure. Tactical checkpoints, which generally consist of trailers and generators, are more vulnerable to adverse weather conditions than permanent structures, and may be lower in priority for staffing during times of low traffic volume. In addition, Border Patrol headquarters officials said that differences in operational hours for tactical checkpoints across sectors can occur because of the operational decisions of each sector’s Chief Patrol Agent based on information on smuggling trends and available staffing to address those trends. 
Border Patrol checkpoint policy states that full-time canine presence at checkpoints is important for the effective and efficient inspection of vehicles and cargo for illegal drugs and persons, but the manager of Border Patrol’s canine program noted that in general there is not a sufficient level of canines at checkpoints. According to Border Patrol officials, smugglers have become increasingly sophisticated in the design of concealed compartments that agents would find difficult or impossible to detect without canine assistance. Often, canines alerting to the presence of illegal drugs or hidden persons may provide Border Patrol agents the only source of probable cause to search a vehicle or its occupants, according to Border Patrol officials (see fig. 9). Border Patrol officials said there were not enough canines for full-time checkpoint coverage, even in sectors with the most heavily used smuggling corridors. In the Tucson sector, for example, sector officials said that as of July 15, 2009, they had 99 canine teams, but 120 teams would be needed to ensure availability when officers are not available for duty because of leave, training, or assignments supporting other law enforcement agencies. Border Patrol’s canine program manager said that the Border Patrol expected to train 180 canines in fiscal year 2009 and will send a majority of these canines to southwest border sectors to address gaps in canine coverage at checkpoints. In fiscal year 2010, the Border Patrol plans to expand its canine facility to facilitate training and hopes to train an additional 250-300 canines. However, the program manager noted that additional trained canines will not alleviate the Border Patrol’s immediate need for these assets as many of the trained canines will replace older canines that will be retiring.
The program manager stated that while the Border Patrol does not have the resources to address the need for canines in the near term, the agency plans to train 1,500 canines by fiscal year 2014 which, including canine retirement and replacement, will result in 1,300 deployed canines across all Border Patrol activities, including checkpoints. The Border Patrol has identified the deployment of non-intrusive inspection technologies that allow the inspection of hidden or closed compartments—in particular, the ability to find contraband and other security threats—as one of its high-priority needs to improve checkpoint performance. Non-intrusive inspection technologies, such as a VACIS or backscatter X-ray machine, as shown in figure 10, use imaging to help trained operators see the contents of closed vehicles and containers, which helps them to intercept a broad array of drugs, other contraband, illegal aliens, or other items of interest without having to search physically. Border Patrol officials told us that they have seen smugglers using increasingly complex concealment methods at checkpoints, emphasizing the importance of deploying new detection technologies to counter these threats. For example, Tucson sector officials reported that within 1 month of deployment of a backscatter machine at a sector checkpoint, they identified 30 hidden compartments in vehicles being used to smuggle illegal drugs. Border Patrol officials said that backscatter machines have been of great value to checkpoint officials for discovering hidden compartments. As of May 2009, the Border Patrol reported that it had eight mobile non-intrusive inspection technologies, such as a VACIS or backscatter machine, deployed to support Border Patrol operations in the nine southwest border sectors.
Of these eight non-intrusive inspection technologies, four were dedicated to specific checkpoints and four were deployed to sectors and were moved among checkpoints or other locations as deemed necessary by the sector’s Chief Patrol Agent. The Border Patrol reported that the agency is in the process of acquiring additional mobile non-intrusive inspection equipment for southwest border checkpoints. Once these units are acquired, the Border Patrol intends to develop a plan to prioritize the deployment of these units among checkpoints. Border Patrol officials believe that mobile backscatter units are cheaper to obtain and maintain than VACIS units, require fewer dedicated staff, produce images that are easier for Border Patrol agents to interpret, and do not require an environmental assessment to be completed prior to deployment. Despite tentative plans to deploy additional non-intrusive inspection technologies at checkpoints, resource constraints may preclude or delay acquisition and deployment. Both VACIS and backscatter units require a large concrete apron and trained operators for effective operation, and some checkpoints lack adequate space or available staff. For example, at one checkpoint that has a VACIS unit, reportedly only 4 of the 12 agents originally trained to operate the VACIS remain because of attrition, decreasing the amount of time the VACIS can be used to screen vehicles. Border Patrol sector officials said that it can be difficult to get agents to volunteer for VACIS training, as other Border Patrol duties are preferable. Furthermore, officials responsible for the current checkpoint on I-19 south of Tucson, Arizona, reported that more space is needed to improve the effectiveness of the backscatter unit, as the unit requires an off-road area sufficient to permit its safe operation without interfering with traffic flow. Checkpoint performance can also be hindered by limited staffing at checkpoints. 
Border Patrol policy recommends the minimum number of agents for checkpoint operation, but sector managers may have other priorities for staff placement. Despite the rapid increase in overall staffing numbers on the southwest border, the number of agents remains insufficient to fully staff all areas of need, according to Border Patrol officials. As a result, sector chiefs have developed strategies that prioritize areas within the sector for achieving operational control. Priority areas differ among sectors, but generally include the immediate border area and urban centers, rather than checkpoints. For example, in the Tucson sector, the Border Patrol deploys about 8 percent of sector operational agents to sector checkpoints on an average day, according to sector officials. Tucson officials we met with stated that they would like to deploy additional staff to the checkpoint, but no additional agents were available, as the majority of agents are staffed to border areas, which are sector priority areas. According to Border Patrol officials, checkpoint staffing numbers should increase as the Border Patrol continues to hire new agents. Checkpoint performance can also be hindered when assigned staff are new and do not have experience gained by continuous on-the-job training or do not have the desire to work at checkpoints. Border Patrol officials stated that nearly half of all agents have less than 2 years of experience, and Border Patrol officials in some sectors stated that agents generally do not consider checkpoint duty to be a desirable assignment. As such, checkpoints may be staffed on a rotational basis. These problems are minimized in locations where Border Patrol stations have operational responsibilities for checkpoints only. For example, agents at five checkpoints in the El Paso sector are generally staffed to the checkpoint or checkpoint circumvention routes on a fairly continuous basis. 
In contrast, Tucson sector agents rotate checkpoint duty with roving patrol and other enforcement activities, such as line watch, and may serve at the checkpoint at least once every 14 days, according to sector officials. The Border Patrol established a number of measures for checkpoint performance to inform the public on program results and provide management oversight; however, information gaps and reporting issues have hindered public accountability, and inconsistent data collection and entry have hindered management’s ability to monitor the need for program improvement. The Border Patrol chose 3 of 21 performance measures identified by a working group in 2006 to begin reporting the results of checkpoint operations under the Government Performance and Results Act of 1993 (GPRA). Under GPRA, agencies are required to hold programs accountable to Congress and the public by establishing performance goals, identifying performance measures used to indicate progress toward meeting the goals, and using the results to improve performance as necessary. Agencies report their program goals, measures, results, and corrective actions to the public each year in their Performance and Accountability Report (PAR). The Border Patrol first reported the checkpoint performance results for these three measures in CBP’s fiscal year 2007 PAR. The three GPRA measures used for public reporting relate to (1) checkpoint drug seizures as a percentage of all Border Patrol seizures, (2) checkpoint apprehensions as a percentage of all Border Patrol apprehensions, and (3) the percentage of checkpoint apprehensions that are referred to a U.S. Attorney for criminal prosecution. These measures were chosen as contributing directly to the DHS goals to protect the nation from dangerous persons and contraband, and were recommended as GPRA measures in a 2007 study commissioned by CBP. 
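As an illustration, the arithmetic behind these three ratio measures is straightforward; the sketch below computes them from invented placeholder counts, not actual Border Patrol figures, which are not given in this section.

```python
# Sketch of the three GPRA checkpoint measures as simple ratios.
# All counts are hypothetical placeholders for illustration only.

def pct(part: int, whole: int) -> float:
    """Express part as a percentage of whole, rounded to one decimal place."""
    return round(100.0 * part / whole, 1)

checkpoint_drug_seizures = 1_200    # hypothetical
all_drug_seizures = 4_000           # hypothetical
checkpoint_apprehensions = 30_000   # hypothetical
all_apprehensions = 700_000         # hypothetical
us_attorney_referrals = 1_500       # hypothetical

measures = {
    "checkpoint drug seizures (% of all Border Patrol seizures)":
        pct(checkpoint_drug_seizures, all_drug_seizures),
    "checkpoint apprehensions (% of all Border Patrol apprehensions)":
        pct(checkpoint_apprehensions, all_apprehensions),
    "checkpoint apprehensions referred to a U.S. Attorney (%)":
        pct(us_attorney_referrals, checkpoint_apprehensions),
}

for name, value in measures.items():
    print(f"{name}: {value}%")
```

Note that each measure is a share of a larger total, which is why, as discussed below, the measures say nothing about the volume of illegal activity that passes through undetected.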
The remaining 18 measures identified by the working group collectively provide some indication of checkpoint performance, but individually provide more indirect support of border security goals. For example, the working group identified separate measures for comparing the number of apprehensions and seizures at checkpoints to those on circumvention routes and the number of seizures or apprehensions at checkpoints that involved methods of concealment to smuggle persons or contraband. Information gaps preclude using the performance measures to determine the full extent of a checkpoint’s effectiveness relative to other checkpoints and Border Patrol strategies for protecting the nation from illegal aliens and contraband. According to GPRA guidance, measures should reflect program outcomes and provide information to assess accomplishments, make decisions, realign processes, and assign accountability. Studies commissioned by CBP, however, have documented that measures of the number of seizures or apprehensions bear little relationship to effectiveness because they do not compare these numbers to the amount of illegal activity that passes through undetected. In the absence of this information, the Border Patrol does not know whether seizure and apprehension rates at checkpoints are low or high, and if lower rates are due to ineffective performance, effective deterrence, or a low volume of illegal drugs or aliens passing through a checkpoint. As a result, the Border Patrol is unable to use these measures to determine if one checkpoint is performing more effectively or efficiently than another checkpoint, or how effective the checkpoint strategy is compared to strategies placing agents at the border or other locations. Border Patrol headquarters officials said that they do not use the measures as management indicators of checkpoint performance specifically, although officials do use the results along with other information for oversight of overall border strategy. 
CBP has not developed models to address these information gaps for checkpoints, but has done so for other aspects of its border security strategy. Identifying the extent of illegal activity that occurs is a challenge faced by law enforcement agencies, but in some cases CBP uses programs and models specific to certain operations that estimate illegal activity levels based on various factors. For example, CBP uses a program, known as Compliance Examination (COMPEX), which estimates the total amount of illegal activity passing undetected through official U.S. ports of entry. Developed under the former U.S. Customs Service, COMPEX randomly selects travelers entering the country for more detailed inspections. On the basis of the extent to which violations are found in the in-depth inspections, CBP estimates the total number of inadmissible aliens and other violators who seek to enter the country. CBP then calculates an apprehension rate by comparing the number of violators it actually apprehends with the estimated number of violators that attempted entry, and reports these results in DHS’s annual performance report to provide program accountability. Other efforts included models to estimate the probability of apprehension by sector, the number of illegal border crossings across the southwest border, and the amount of undetected illegal activity passing through smaller geographic zones. Border Patrol officials reported that they are exploring the feasibility of developing a checkpoint performance model to address checkpoint operational effectiveness and checkpoint impact on overall border security. Although standard practices in program management call for documenting milestones to ensure results are achieved, the Border Patrol did not identify timelines or milestones for completing this effort. 
Doing so could help provide the Border Patrol with reasonable assurance that its personnel will determine the feasibility of developing a checkpoint performance model within a time frame authorized by management. Reporting issues at Border Patrol headquarters also hindered using the performance measure results to inform Congress and the public on checkpoint performance. The Border Patrol began annual reporting on the three GPRA measures of checkpoint performance in the CBP fiscal year 2007 PAR, but the information reported was inaccurate, resulting in an overstatement of checkpoint performance for both fiscal years 2007 and 2008, as shown in table 1. Annual Performance and Accountability Reports are to document the results agencies have achieved compared to the goals they established, which, as we have previously reported, is key to improving accountability for results as Congress intended under GPRA. We used Border Patrol data to calculate results for the three checkpoint measures for fiscal years 2007 and 2008 and compared these numbers to results the Border Patrol reported in the PARs. Our analysis showed that the actual checkpoint performance results were incorrectly reported for two of the three measures in fiscal year 2007 and for one measure in fiscal year 2008. As a result, the Border Patrol incorrectly reported that it met its checkpoint performance targets for these two measures. The results of our analysis differed from those reported in the PARs for several reasons. In regard to errors in reporting apprehensions, the Border Patrol reported that Tucson sector data were excluded because including such data would unfairly reflect on overall checkpoint performance, as the Tucson sector has a substantially higher volume of illegal aliens compared to other sectors. 
According to the Border Patrol, disclosure statements explaining the exclusion of Tucson sector data were inadvertently omitted from the fiscal year 2007 PAR, and that full disclosure would be presented in future reports. In regard to errors in reporting the number of checkpoint cases referred to a U.S. Attorney for criminal prosecution, reported data were overstated because they included referrals to all prosecuting authorities—federal, state, and local. Including only those referrals to a U.S. Attorney, as defined in the PAR, would reduce reported performance results by nearly one-third in 2007 and nearly two-thirds in 2008. The Border Patrol indicated that including referrals to all prosecuting authorities is more representative of checkpoint performance because prosecutions in general are a deterrent to crime. Department of Justice (DOJ) officials agreed, noting that there are a variety of cases generated at checkpoints which are referred to state and local law enforcement agencies and prosecutors. For example, due to the volume of cases and limited resources, many U.S. Attorneys’ Offices have “intake” or “prosecution thresholds” by which narcotics cases below certain quantities are routinely referred to state authorities for arrest and prosecution, according to DOJ officials. In addition, there are other state offenses, such as individuals arrested on outstanding warrants, stolen vehicles or merchandise, or some weapons violations, that are also intercepted at Border Patrol checkpoints. DOJ officials stated that a measurement that did not include these types of cases referred to state authorities would miss a substantial number of criminal cases which were generated by the checkpoints and thus neglect a valuable indicator of their effectiveness. For these reasons, Border Patrol plans to revise the performance measure definition for future PARs to include referrals to any prosecuting authority. 
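The COMPEX sampling approach described earlier offers one way to close such information gaps: in-depth inspection of a random sample yields an estimate of the underlying violation rate, from which an apprehension rate can be computed. The sketch below illustrates the idea; the traffic volume, violation rate, and catch rate are all invented assumptions, not COMPEX parameters.

```python
import random

random.seed(7)  # fixed seed so the simulation is repeatable

# Illustrative assumptions (not actual COMPEX figures):
TRAVELERS = 100_000          # total travelers entering during the period
TRUE_VIOLATOR_RATE = 0.02    # underlying share of travelers who are violators
ROUTINE_CATCH_RATE = 0.6     # share of violators caught by routine inspection
SAMPLE_SIZE = 2_000          # travelers randomly selected for in-depth inspection

# Violators actually apprehended by routine inspection across all traffic.
routine_apprehensions = sum(
    random.random() < TRUE_VIOLATOR_RATE and random.random() < ROUTINE_CATCH_RATE
    for _ in range(TRAVELERS)
)

# In-depth inspection of a random sample is assumed to find every violator in
# the sample, giving an unbiased estimate of the underlying violation rate.
sample_violators = sum(
    random.random() < TRUE_VIOLATOR_RATE for _ in range(SAMPLE_SIZE)
)
estimated_violation_rate = sample_violators / SAMPLE_SIZE
estimated_total_violators = estimated_violation_rate * TRAVELERS

# Apprehension rate: actual apprehensions over estimated attempted entries.
apprehension_rate = routine_apprehensions / estimated_total_violators
print(f"estimated violators attempting entry: {estimated_total_violators:,.0f}")
print(f"estimated apprehension rate: {apprehension_rate:.0%}")
```

The same logic is what the checkpoint measures lack: without a sampled estimate of total attempted smuggling, a raw apprehension count cannot be converted into a rate.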
In addition to these reporting issues, data collection issues across Border Patrol checkpoints also contributed to inconsistent data reported in the Performance and Accountability Report. Standards for Internal Control in the Federal Government call for pertinent information to be recorded and communicated to management in a form and within a time frame that enables them to carry out internal control and other responsibilities. This includes the accurate recording and reporting of data necessary to demonstrate agency operations. To implement this requirement, the Border Patrol developed a checkpoint activity report (CAR) in 2006 as a means for field agents to report daily summaries of checkpoint performance, and provided relevant guidance. Supervisory agents at each station and sector had oversight responsibility for ensuring that data entry complied with agency guidance, and headquarters officials had responsibility for conducting a final review and reliability check. Information we collected from stations responsible for checkpoint data entry showed that data collection practices were inconsistent and incomplete for the apprehension and referral measures included in the PAR. We provided a data collection instrument to the Border Patrol seeking information on how checkpoint agents input data into the CAR for data fields related to apprehensions and seizures at and around checkpoints. Border Patrol headquarters officials forwarded this data collection instrument to stations responsible for operating checkpoints along the southwest border. The responses we received from stations responsible for 60 checkpoints operating along the southwest border in fiscal year 2008 showed inconsistencies in data reporting. Apprehension measure. Officials responsible for data entry at two checkpoints in the Rio Grande Valley sector did not follow guidance in recording apprehensions at the checkpoint. 
CAR guidance defines “at checkpoint” as an apprehension or seizure that occurs within the pre-primary, primary, or secondary inspection area of the checkpoint. Instead, officials at these two checkpoints attributed all apprehensions within a 2.5-mile radius to the checkpoint, overstating actual checkpoint apprehensions. Officials said they instituted this practice in August 2008 because it more accurately represented checkpoint performance in forcing illegal activity to use longer circumvention routes to get around the checkpoint. However, the CAR contains other data fields to capture apprehensions on checkpoint circumvention routes, and results are reflected in a separate performance measure. Referral measure. Officials responsible for 26 checkpoints reported that they did not regularly or accurately enter data for the number of checkpoint apprehensions referred to a U.S. Attorney, understating checkpoint performance in apprehending criminals who may pose a threat to public safety. In some cases, Border Patrol sector officials said this occurred because at the end of the day when checkpoint data are submitted, supervisors did not know whether cases would be referred, and the CAR may not have been updated to reflect any subsequent referrals. Border Patrol headquarters officials said that they were unaware of these data inconsistencies, and that headquarters officials had generally provided limited oversight of checkpoint performance data, relying instead on checkpoint and sector officials to ensure data reliability. According to the Standards for Internal Control in the Federal Government, activities need to be established to monitor performance measures and indicators. Such controls should be aimed at validating the propriety and integrity of performance measures and indicators. 
Establishing controls for headquarters oversight of checkpoint performance data could provide the Border Patrol with additional assurance related to the accuracy, consistency, and completeness of its checkpoint performance data used to report on the checkpoint performance measures in the annual PAR. Border Patrol officials said that they have formed a workgroup to examine these data integrity issues with respect to checkpoint activity reporting, and would take action to address the identified issues. For example, regarding the referral measure, Border Patrol headquarters officials said that they plan to modify the CAR so that information, such as a referral to a U.S. Attorney, will be extracted from the databases that agents use to process the aliens administratively and criminally. Because the data are to be extracted from these systems, agents should no longer have to enter the information in two places and errors should be eliminated in checkpoint reporting. In addition to the measures used for public reporting in the annual PAR, the Border Patrol identified other measures for checkpoints that taken together can provide indicators of performance for internal management of the program (see appendix II). According to the Senate report accompanying GPRA, performance indicators should, wherever possible, include those that correlate the level of program activity with program costs, such as costs per unit of result or output. The Border Patrol checkpoint performance working group established 21 performance indicators of checkpoint operations that were divided into four main groups, including indicators of program costs in terms of operations and maintenance and man-hours: At the checkpoint. These eight measures examine the extent that checkpoint resources are operational and effective. 
They include the percentage of time checkpoints are operational or closed for various reasons; number of seizures or apprehensions due to canine detection, sensors, or other technology; number of smuggling events using a method of concealment; number of aliens per smuggling load; and cost effectiveness of checkpoints considering operations and maintenance costs. Immediate impact areas. These six measures compare checkpoint apprehensions and seizures to those on checkpoint circumvention routes, in geographic areas adjacent to the checkpoint, and at transportation centers (i.e., bus terminals, train stations) and staging areas (such as stash houses). At the border. These three measures compare checkpoint operations to other Border Patrol enforcement operations. Two of these three measures—a comparison of checkpoint apprehensions and drug seizures to all apprehensions and seizures—were used as GPRA reporting measures in the annual PAR. The third measure related to cost effectiveness in terms of comparing man-hours dedicated to checkpoint operations to man-hours dedicated to other enforcement activities. Quality of life. These four measures examine how checkpoint operations help address major crime across communities and assist other federal, state, local and tribal agencies. One of these four measures—referral of smugglers for prosecution to a U.S. Attorney— was included as a GPRA reporting measure in the annual PAR. The remaining three measures examined the reduction of major crimes in areas affected by checkpoint operations, the number of cases referred to other agencies identified by checkpoint operations, and the number of apprehensions turned over to the Border Patrol by other agencies during times the checkpoint is operational or non-operational. Inconsistent data entry practices by field agents preclude using many of the measures as indicators of performance or cost effectiveness. 
Responses received from station officials responsible for operating 60 checkpoints on our data collection instrument showed that data reported in the CAR were often incomplete, inconsistent across stations, or missing altogether. These officials reported that checkpoint data entry issues were caused by unclear definitions in checkpoint performance data guidance, differences between data fields and operations, and perceived duplication of effort for information available in E-3, which is the primary information system used by CBP for tracking all enforcement activities conducted by its components. Unclear definitions in guidance. Data entry personnel differed in how they interpreted guidance related to checkpoint data fields, resulting in inconsistent data reporting across checkpoints and across different shifts at individual checkpoints. Attributes of successful performance measures include that the measure is clearly stated, the name and definition are consistent with the methodology used to calculate it, and the measure produces the same result under similar conditions. In reporting the number of apprehensions or seizures on circumvention routes, however, officials at one checkpoint we visited considered all activity within the station’s area of responsibility to be circumventions, while officials at other checkpoints considered only the activity on defined circumvention routes. Border Patrol guidance for the CAR defined circumventions as “to avoid, or get around by artful maneuvering,” but did not specify how this definition should be applied by checkpoint officials. One Border Patrol field official said that at one location, supervisors used different definitions for entering information in the same data fields because of unclear definitions in CAR guidance, resulting in inconsistencies in data entry. 
Specifically, this Border Patrol field official noted that there was confusion among agents responsible for inputting data into fields related to concealment methods and cases turned over to other agencies, because neither field is defined in the CAR guidance. Officials responsible for 16 of 47 checkpoints responding to an open-ended question reported that agents need additional instruction, training, or clearer guidance in using the CAR. Differences between data fields and operations. Some data fields in the CAR are inconsistent with operations, resulting in an understatement of some activities, including indicators for one of the cost effectiveness measures. For example, checkpoint officials are required to track the number of agents staffed per shift in the CAR, but at least 20 permanent checkpoints operate using an overlapping four-shift schedule, while the CAR provides for a three-shift format. As a result, agent hours may be understated at the majority of permanent checkpoints along the southwest border because checkpoint officials could not record all of the hours worked in a four-shift schedule. Duplication with other information systems. Field agents considered CAR data entry time consuming and somewhat duplicative of other information systems. Manual efforts by field agents to go through all arrest reports daily to identify those that are pertinent to checkpoints for summary in the CAR can be a labor-intensive effort. Detailed information on the arrest or activity summarized in the CAR is already reported in E-3, which tracks enforcement efforts from the initial arrest to final disposition. Officials responsible for 15 of 47 checkpoints responding to an open-ended question in our data collection instrument recommended that reporting requirements among information systems should be integrated to reduce duplication of effort. 
Overall, Border Patrol officials said that they were unaware of the extent of these data entry and reporting issues, and that headquarters officials had generally provided limited oversight of checkpoint performance data, relying instead on checkpoint and sector officials to ensure data reliability. Internal control standards require that agencies monitor their activities, through management and supervisory personnel, to assess the quality of performance over time. Consistent with these standards, we have previously reported that an agency’s management should have a strategy to ensure that ongoing monitoring is effective and will trigger separate evaluations where problems are identified or systems are critical to measuring performance. Border Patrol headquarters officials stated that the workgroup formed to address data integrity issues would take steps to resolve these data entry issues, but officials did not identify how they would ensure proper oversight of checkpoint data collection. Specifically, to address unclear definitions in the CAR, Border Patrol officials reported that they plan to provide updated directives to field staff regarding definitions, and would provide associated guidance regarding data input in the CAR. To address differences between data fields and operations, Border Patrol officials said they would update the CAR to reflect the current operation of checkpoints. Border Patrol officials noted that the time frames for completing these actions are unknown at this point because guidance and systems need to be developed and then approved by Border Patrol leadership. Until the Border Patrol fully addresses these data entry and oversight issues, it will not be able to ensure that data entered into the CAR accurately reflect checkpoint operations. 
Finally, in regard to system duplication, Border Patrol officials stated that the recent rollout of E-3 does provide the means to report some performance data for checkpoints that are common to all components, such as seizures and apprehensions, but that the CAR is still necessary to track data for some performance indicators that are unique to checkpoints, such as hours checkpoints are in operation and staff assigned to operate those checkpoints. Other data limitations preclude the Border Patrol from implementing a measure comparing the cost effectiveness of checkpoint operations with other Border Patrol strategies, such as line watch and roving patrol operations. We previously recommended that the Border Patrol implement such a measure to determine whether it was efficiently utilizing resources among checkpoints and among its three-tiered border enforcement strategy, and to assist in allocating additional resources within sectors or between sectors so that those resources would have the greatest impact. While the GPRA measures do compare checkpoint apprehensions and seizures to other Border Patrol activities, the Border Patrol indicated that data are not available on the number of agents staffed to line watch and roving patrol operations. Without accurate data on the number of agents staffed to line watch and roving patrol operations, it will not be possible to compare the cost effectiveness of checkpoints with these other Border Patrol activities. According to Border Patrol officials, the agency discontinued tracking agent hours by assignment in 2004, when it became cost prohibitive to maintain the information system capturing these data, and a comparable system to the CAR was not implemented for operations other than checkpoints. Officials stated that they plan to address this limitation by developing a new data system to track agent hours and assignments for border enforcement operations. 
The Border Patrol plans to initially deploy this new data system by the end of fiscal year 2009, and add updates as needed to accurately track agent hours by assignment. Among other factors, the Border Patrol considered community safety and convenience in recent checkpoint placement and design decisions, in accordance with Border Patrol guidelines and requirements of other federal, state, and local agencies. The placement and design process was completed for three new permanent checkpoints since 2006, and no public comments were received about their design or placement in fairly remote areas of Texas. Some members of the public have raised concerns about the placement and size of a proposed permanent checkpoint for I-19 in Arizona, which is to be located closer to nearby communities. Draft plans we reviewed for the I-19 checkpoint were consistent with Border Patrol guidelines to locate checkpoints in less populated areas away from schools and hospitals and also considered current and future traffic volumes in accordance with Department of Transportation goals to facilitate highway travel and reduce congestion. The Border Patrol finalized three placement decisions for new permanent checkpoints in the last 3 years in accordance with its Design Guide and policy documents. These checkpoints, all located in Texas, were placed on I-35, U.S. Route 83, and U.S. Route 62/180. In regard to checkpoint location, Border Patrol guidance includes factors intended to maximize operational effectiveness and minimize adverse impact on the public and surrounding communities. Specifically, the guidance states that to provide strategic advantage, checkpoints should be placed in locations that provide good visibility of the surrounding area, near the confluence of two or more significant roads leading away from the border, and have minimal routes that could be used by illegal aliens to circumvent the checkpoint. 
The guidelines discuss community impact in terms of public safety issues and traffic considerations. Specifically, preferred checkpoint locations are at least a half mile from businesses, residences, schools and hospitals, or other inhabited locations. In addition, the Border Patrol guidelines suggest that checkpoints be located on a stretch of highway providing sufficient visibility for traffic compatible with safe operations, for both the traveling public, as well as agents working at the checkpoint. We mapped the locations of the three permanent checkpoints placed by the Border Patrol since 2006 along with relevant population data, schools, and hospitals, and the results were consistent with Border Patrol guidance. Specifically, the mapping analysis results, shown in table 2, indicated that the three checkpoints were located in sparsely populated areas and at least 9 miles from the nearest hospital or school. Border Patrol placement decisions for these checkpoints also passed through federal, state, and local government review, as well as public review during the environmental assessment process. Our review of documentation showed that the Border Patrol conducted environmental assessments for the three checkpoint locations that included potential community impacts due to noise, air quality, and water resources, as well as potential socioeconomic impacts on local income, housing or businesses, child protection, and increased traffic congestion. The results of the assessments were documented along with relevant correspondence with federal, state, and local agencies showing compliance with relevant laws and requirements. Results of the environmental assessment conducted for the three checkpoints showed no adverse impact on communities that would require an environmental impact statement, and no public comments were received. 
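The mapping analysis described above amounts to computing great-circle distances from each checkpoint to nearby schools, hospitals, and population centers and comparing them against the guidance thresholds. A minimal sketch of that computation, using the standard haversine formula and invented coordinates (the report does not publish checkpoint or facility coordinates):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in statute miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # mean Earth radius ~3,958.8 miles

# Hypothetical checkpoint and nearby facilities (illustrative coordinates only).
checkpoint = (31.45, -111.05)
facilities = {
    "school": (31.47, -111.20),
    "hospital": (31.60, -110.95),
}

HALF_MILE = 0.5  # guidance: preferred sites at least a half mile from such locations
for name, (lat, lon) in facilities.items():
    d = haversine_miles(*checkpoint, lat, lon)
    status = "meets" if d >= HALF_MILE else "fails"
    print(f"distance to nearest {name}: {d:.1f} miles ({status} guidance)")
```

In practice such checks would be run in a GIS tool against census and facility data layers, but the underlying distance test is the same.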
The placement process for a proposed checkpoint on I-19 in Arizona has not yet reached the stage of soliciting formal public comment, but some citizens living in nearby communities have expressed concerns about its proposed location south of Tucson at KP 41. While some citizens expressed support for the checkpoint, others said that the checkpoint would negatively impact local communities and should be located elsewhere or removed altogether. Community members with this latter view stated that the Border Patrol should devote checkpoint resources to deter illegal entry at the border. Tucson sector officials said they chose KP 41 as the best site for a permanent checkpoint on I-19 over three other candidate locations: KP 42 (the location of the current tactical checkpoint), KP 25, and KP 50. According to Tucson sector officials, while the KP 50 site provided certain strategic advantages, the KP 41 site was selected because it was furthest from populated areas while also providing strategic advantage. Officials also noted that when determining the checkpoint’s location, they consulted with developers regarding expected population growth and plans for development along the I-19 corridor, but officials stated that it is difficult to know what development will or will not take place in the future, as plans can change. According to officials, these discussions indicated that development was expected along I-19, but more densely around the KP 25 and KP 50 sites than the KP 41 site. In addition, officials from the Arizona Department of Transportation said that the KP 41 location would likely meet state requirements for highway traffic safety, but could not make a final determination until the final plans were submitted for review and approval. We mapped the four proposed locations for the I-19 checkpoint along with relevant population data, schools and hospitals, and the results were consistent with Border Patrol guidance, as shown in table 3. 
For example, the data showed that the KP 41 and KP 42 sites were in areas with fewer people than the other two locations. We also reviewed county planning documents and zoning maps to determine how the proposed checkpoint locations compared with plans for future development. These documents showed that areas around KP 41 were zoned for lower population density than the KP 25 and KP 50 proposed checkpoint locations. Our mapping analysis also showed that the KP 41 and KP 42 sites were farther away from schools than the other locations, as shown in figure 11. Proximity to the Rio Rico high school was a reason cited by the Border Patrol for not choosing the KP 25 location. We also traveled to the four proposed locations on I-19 with Border Patrol officials who showed us differences among the sites and factors they considered in choosing KP 41, including proximity to populated areas, tactical advantage, and costs of construction. (See table 4.) Officials acknowledged that the KP 41 site had certain disadvantages, such as the highway access road parallel to the interstate (known as a frontage road) and the proximity to the community of Tubac, but pointed out that KP 41 was furthest from populated areas and was the only site that did not have outlying roads near the interstate that would allow illegal aliens to circumvent the checkpoint. We also observed that the terrain around KP 41 was relatively flat, which Border Patrol officials explained would allow for surveillance of the surrounding area. In contrast, the KP 25 location was near both elevated areas and canyons where Border Patrol officials said it would be more difficult to identify illegal activity and make apprehensions around the checkpoint. With respect to the KP 42 site, Border Patrol officials stated that a substantial amount of earthwork would be needed to level the land, which would increase the construction costs. (See appendix III for photographs of the various sites.) We also traveled along I-19 from the U.S. 
border at Nogales to the city of Tucson and Border Patrol officials showed us why other sites would not be suitable alternatives for a checkpoint location. Border Patrol officials stated that areas south of KP 25 are considered too close to the border to provide strategic value, a factor listed in Border Patrol guidance. Areas between KP 25 and KP 41, between KP 42 and KP 50, and north of KP 50 were not considered suitable for a checkpoint for reasons including topography, proximity to communities, availability of circumvention routes, or highway characteristics—such as curves in the road—that were not compatible with safe operations. The Border Patrol’s three permanent checkpoints constructed since 2006 were generally designed in accordance with its checkpoint design guidelines. Factors of consideration included in the design guidelines related to operational effectiveness, the safety and comfort of agents and canines working the checkpoint, the safety and convenience of the public traveling through the checkpoint as well as detainees held at the checkpoint, and aesthetics for blending checkpoint architecture with the surrounding community. According to CBP facilities management officials, checkpoint size is largely determined by the number of inspection lanes at the checkpoint, and primary and secondary inspection areas account for the majority of a checkpoint’s size. CBP officials stated that checkpoint buildings, such as the main building housing administration and detention, generally account for a relatively small percentage of the checkpoint size. Regarding inspection lane criteria, checkpoint design guidelines recommend sufficient capacity to quickly and safely move traffic through the checkpoint. 
Specifically, the design should consider current and projected traffic volume traveling through the checkpoint, as well as the preference to locate inspection lanes off-highway, consistent with national and state initiatives to reduce traffic congestion and improve highway safety. The guidelines also recommend a minimum of two primary inspection lanes to separate commercial and passenger vehicles, and a canopy to cover all inspection areas. We reviewed the inspection lanes for the three new permanent checkpoints—which were all located in Texas—and results were partially consistent with checkpoint design guidance. In accordance with checkpoint design guidelines, the design for all three checkpoints included off-highway inspection lanes that separated commercial and passenger traffic, canopy covers protecting agents and the public, and at least the minimum number of primary inspection lanes. However, we could not determine if the Border Patrol complied with its checkpoint design guidelines to consider current and future traffic volumes when determining the number of inspection lanes at each checkpoint, because it did not conduct traffic studies when designing the three checkpoints. Although not explicitly required, senior CBP and Border Patrol facilities officials stated that the number of inspection lanes at a checkpoint should be based to a large extent on current and projected traffic volume over the next 20 years to ensure that checkpoint capacity will be sufficient in the near future, and this should be documented in a traffic study. Traffic design engineering principles discuss the importance of considering current and expected traffic volumes over a given period when designing a project, to ensure sufficient capacity. According to CBP facilities officials, however, traffic studies were not conducted for the U.S. Route 62/180 checkpoint or the U.S. 
Route 83 checkpoint, and officials said they have no record of a traffic study being conducted for the I-35 checkpoint. Officials stated that traffic studies may not have been conducted because it is not an explicit requirement in checkpoint design guidelines, but agreed that they should have been done to inform decisions regarding checkpoint design and the number of inspection lanes. In the absence of documented traffic studies, the Border Patrol cannot determine if the number of inspection lanes at each of these checkpoints is consistent with current and projected traffic volumes, or if a different number of lanes would have been more appropriate. To provide some information on traffic volumes for these three checkpoints, we obtained available data on 2007 traffic volumes for areas near the location of each of the three checkpoints from the Texas Department of Transportation. As shown in table 5, the relative number of inspection lanes at each checkpoint appears consistent with 2007 traffic volumes, in that the I-35 checkpoint has a higher traffic volume and more inspection lanes than the other two checkpoints. Regarding criteria for facilities and other resources, Border Patrol design guidance lists the buildings and features that are recommended for inclusion at new permanent checkpoints. According to Border Patrol officials, this listing of facilities and resources was based on existing checkpoint design, as well as the professional judgment of Border Patrol officials regarding the facilities and resources that enhance checkpoint operations, and should be adjusted to the circumstances of each checkpoint to maximize checkpoint effectiveness and efficiency and also facilitate the safety and convenience of agents, the public, and detainees. 
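The kind of analysis a traffic study would document, relating traffic volume to the number of inspection lanes, can be sketched with a simple capacity heuristic. This is an illustrative sketch only; the per-lane throughput figure and the peak volumes below are hypothetical values chosen for demonstration, not figures from the report or from Border Patrol design guidance:

```python
import math

def lanes_needed(peak_hourly_vehicles, per_lane_throughput):
    """Minimum number of primary inspection lanes required to keep
    pace with peak arrivals, given a sustainable per-lane inspection
    rate in vehicles per hour."""
    return max(1, math.ceil(peak_hourly_vehicles / per_lane_throughput))

# Hypothetical peak-hour volumes for illustration only.
PER_LANE_RATE = 300  # assumed vehicles/hour one lane can inspect
for checkpoint, peak_volume in [("I-35", 1700), ("US 83", 250), ("US 62/180", 400)]:
    print(checkpoint, lanes_needed(peak_volume, PER_LANE_RATE))
```

A real traffic study would of course account for directional splits, commercial versus passenger mix, and projected growth over the planning horizon rather than a single peak-hour figure.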
For example, design guidance provides for detention facilities at checkpoints to reduce the amount of time agents have to leave the checkpoint to transport illegal aliens to other locations, and also provides separate areas for men, women, and children who are detained to facilitate their safety. We reviewed Border Patrol design documents for the three Texas checkpoints and results showed that two of the three checkpoints had all but one of the recommended resources; however, one checkpoint did not have several resources, as shown in table 6. The one resource not included at the new I-35 checkpoint in the Laredo sector and the new U.S. Route 62/180 checkpoint in the El Paso sector was commercial truck scales, which can improve checkpoint operations by giving agents another tool for detecting contraband. According to Border Patrol officials, truck scales allow agents to compare the weight of cargo on the truck’s manifest to the current weight of cargo at the checkpoint. A disparity between the two measurements could indicate that the amount or type of cargo has changed. The U.S. Route 83 checkpoint was also lacking many other recommended resources, such as canine facilities, due to space constraints at the site, according to sector officials. Officials stated that there was limited space to accommodate all of the resources, because the land is not owned by the Border Patrol but provided through a multiuse agreement between DHS and the Texas Department of Transportation. These officials added that additional funding would be needed to expand the checkpoint site to accommodate these resources. However, sector officials stated that the resources currently available at the checkpoint are sufficient for basic operations, considering the relatively low volume of traffic at the checkpoint. Border Patrol guidelines also include criteria to use aesthetics in the architecture and design of checkpoints. 
These criteria state that checkpoints should be designed in a manner that complements the indigenous architecture of the surrounding area, including building scale and proportion. The environmental assessments for the three Texas checkpoints showed no significant aesthetic impact because of the remote locations of the checkpoints and lack of community concern over the design of existing checkpoints. No public comments were received during the 30-day comment period raising concerns about the lack of aesthetics in the three checkpoints’ final designs. The design process for the proposed permanent checkpoint on I-19 in Arizona has not yet been completed as of July 2009, but some citizens living in nearby communities have expressed concerns about its potential size and appearance. Border Patrol officials stated that in general, the I-19 and other new permanent checkpoints are to be larger than existing checkpoints because many of the latter are outdated and undersized to address current traffic volume and changes in operation. As these older checkpoints are replaced, the Border Patrol plans to enlarge and redesign them to reflect new technology and to incorporate lessons learned from experiences with more recently built checkpoints, according to officials. CBP and Border Patrol officials stated that plans for the permanent I-19 checkpoint are based on the recently constructed I-35 checkpoint near Laredo, which they identified as a model checkpoint in terms of layout, resources, and size. (See figure 12.) Tucson sector officials said that the I-19 checkpoint design also incorporated lessons learned from the I-35 checkpoint design. For example, officials stated that the design of the I-35 checkpoint was found to be too small and had to be expanded to accommodate a VACIS unit, and that operations at the I-35 checkpoint showed that more space was needed in the inspection areas for safe truck maneuvering. 
One key difference between the I-19 checkpoint design and that of the three new checkpoints in Texas is that the Border Patrol plans to incorporate aesthetics into the I-19 checkpoint design, in response to community concerns. Some community members who visited the I-35 checkpoint were concerned that the I-19 checkpoint would disrupt the beauty of the local landscape in that it would be too large and visually unappealing. Although not reflected in the current draft design, Border Patrol officials said the final design issued for public comment would reflect input from the community on options for blending the checkpoint in with the surrounding community and landscape. Border Patrol officials from the Tucson sector and the community have coordinated on other aspects of the I-19 checkpoint design. Tucson sector officials have met with community members at least 45 times from 2006 to 2009 to address community questions or concerns. In addition, a community workgroup was established in April 2007 to allow direct community involvement in discussions about the proposed permanent checkpoint. In June 2007, this workgroup split into two subcommittees. One subcommittee issued a report to the Border Patrol with recommendations to reduce the impact of the checkpoint on surrounding communities and to improve its effectiveness and public convenience. The other subcommittee issued a report expressing opposition to a permanent checkpoint on I-19, recommending that resources be placed on the border instead. We met with Border Patrol officials and reviewed documents showing how the Border Patrol has modified the design of the checkpoint in response to community input. To address concerns about the size of the checkpoint, for example, Border Patrol officials said they removed certain structures from the design plans, such as a station house, helipad, and fueling island. 
In addition, to ensure checkpoint lighting did not adversely impact a local observatory, officials stated that they plan to comply with the local dark sky ordinance by covering checkpoint lighting with a canopy, among other things. Border Patrol officials stated that other recommendations made by the workgroup to increase the safety and convenience for travelers through the checkpoint—such as clearly posted signage—will be included in the checkpoint design, as shown in table 7. Our review of the draft plans for the I-19 permanent checkpoint showed that it is planned to surpass the I-35 checkpoint as the largest checkpoint on the southwest border in terms of total acreage and acreage used for checkpoint operations, including primary and secondary inspection lanes, as shown in table 8. Overall, the I-19 checkpoint is about 20 percent larger than the I-35 checkpoint in terms of total acreage and about 69 percent larger in terms of the acreage to be used for checkpoint operations. Border Patrol officials estimate that 11 of the 18 total acres at the I-19 checkpoint site are not planned to be dedicated to checkpoint operations, but are expected to be used for graded slope area (4.0 acres), storm water retention areas and septic water filtration areas (3.5 acres), and freeway on and off ramps (3.7 acres), which the Arizona Department of Transportation requires. According to the CBP project manager for the I-19 checkpoint, the checkpoint’s size is driven primarily by the number of inspection lanes that are planned to meet current and future traffic volume, per design guidelines. The guidelines indicate that a sufficient number of primary and secondary inspection lanes are needed to ensure that current traffic volume can be processed through the checkpoint with minimal traffic backups and vehicle wait times, as longer wait times create safety concerns and inconvenience the traveling public. 
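The acreage breakdown reported for the I-19 site can be checked with simple arithmetic. The component figures below come from the report; the operational acreage is derived by subtraction (the report rounds the non-operational total to 11 of 18 acres):

```python
# Acreage figures for the proposed I-19 checkpoint site (from the report).
total_acres = 18.0
non_operational = {
    "graded slope area": 4.0,
    "storm water retention / septic filtration": 3.5,
    "freeway on/off ramps": 3.7,
}

non_op_total = sum(non_operational.values())  # ~11 acres, as reported
operational = total_acres - non_op_total      # acreage left for operations

print(f"non-operational: {non_op_total:.1f} acres")
print(f"operational:     {operational:.1f} acres")
```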
When traffic backups reach a certain distance from the checkpoint, sector officials said that they allow traffic to pass through the checkpoint uninspected, which decreases checkpoint effectiveness. Smugglers and illegal aliens use these opportunities to pass through the checkpoint undetected, according to sector officials. Of the eight primary inspection lanes included in the draft design plan for the I-19 permanent checkpoint, five lanes are required to address current traffic volume, according to sector officials. The lanes for processing the current traffic volume include two lanes for commercial traffic and three lanes for passenger traffic. The design is consistent with guidance and the community workgroup recommendations to include off-highway inspection lanes that separate commercial and passenger vehicles, dedicated truck and bus lanes, and canopy coverage for all inspection areas. The remaining three primary inspection lanes in the I-19 checkpoint design plan are to ensure sufficient capacity for processing future traffic volume. Border Patrol budget documents state that the checkpoint construction process alone is estimated to take 5 years, and design guidelines recommend that construction projects consider capacity needs over the next 10 years, which can reduce overall construction costs and maintain longer periods of operational efficiency. The Arizona Department of Transportation projects that traffic on the I-19 corridor will increase by 23 percent from 2007 to 2017, and 35 percent from 2007 to 2027. Using traffic projections for the year 2017, the site engineer for the proposed I-19 checkpoint estimated that the five lanes for passenger vehicles will result in wait times averaging less than 2 minutes, except for three one-hour periods per day when wait times may increase to 8 to 10 minutes. 
According to the engineer, if the number of passenger lanes were reduced to four, for example, wait times are estimated to exceed 20 minutes three times per day during peak traffic periods; this would require suspending inspection activities, which the Border Patrol considers unacceptable. Border Patrol officials stated that six of the eight lanes will be able to convert between screening passenger vehicles and commercial traffic, which will give the I-19 checkpoint flexibility during operation to adapt to changing traffic patterns. In regard to the secondary inspection lanes, the proposed nine lanes were found to be insufficient to meet the Border Patrol’s targeted rates of inspection, according to reports by an engineering firm commissioned to provide an advisory review for the I-19 checkpoint design. The engineer reported that to meet target inspection rates during peak periods, the Border Patrol would need to increase the number of secondary lanes for non-commercial traffic from 7 to 22 lanes. Tucson sector officials said that they will not build the additional secondary lanes because they do not have the resources and staff to use them at this time. As a result, the number of referrals of non-commercial traffic from primary to secondary inspection will be decreased as needed to preclude traffic congestion. Plans for the size of the I-19 checkpoint facilities are also consistent with relevant guidelines. Space allocation guidelines are based on many factors, including a functional evaluation of individual space, group consensus of Border Patrol staff, comparison to existing structures, and use of standard formulae. Border Patrol checkpoint design guidelines include general processes for determining the size of these resources or the space required—such as how large the main checkpoint building should be—but do not impose a one-size-fits-all approach on checkpoints. 
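The lane-count trade-off the site engineer describes, where removing a single lane sharply lengthens peak-period waits, follows from basic queueing behavior: once arrivals exceed total inspection capacity, the queue grows until traffic slackens. A minimal deterministic sketch (all rates below are hypothetical and this is not the engineer's actual model):

```python
def utilization(arrivals_per_hour, lanes, per_lane_rate):
    """Fraction of total inspection capacity consumed by arriving
    traffic. Values above 1.0 mean the queue grows without bound
    until arrivals slacken."""
    return arrivals_per_hour / (lanes * per_lane_rate)

# Hypothetical peak-hour figures for illustration only.
PEAK_ARRIVALS = 1400  # passenger vehicles/hour at peak
RATE = 300            # assumed vehicles/hour one lane can inspect

for lanes in (5, 4):
    u = utilization(PEAK_ARRIVALS, lanes, RATE)
    status = "queue grows during peak" if u > 1 else "keeps pace"
    print(f"{lanes} lanes: utilization {u:.2f} ({status})")
```

With these illustrative numbers, five lanes keep pace with peak arrivals while four lanes do not, which is the qualitative pattern behind the engineer's estimate that dropping from five to four passenger lanes would push waits from minutes to over 20 minutes.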
As a result, the sizes of each of these areas may vary at different checkpoints based on the unique circumstances and operational needs of each checkpoint. For example, the size of the main checkpoint building, which includes administration, processing, and detention facilities, is larger at the planned I-19 checkpoint than the I-35 checkpoint by approximately 3,400 square feet, reflecting a greater estimated need at the I-19 checkpoint for processing and detention of illegal aliens. Sector officials stated that having sufficient processing and detention capability at the I-19 checkpoint increases operational efficiency and effectiveness, as agents will no longer have to frequently transport apprehended individuals to the Tucson or Nogales stations for processing and detention. In comparison, the canine kennel building at the I-35 checkpoint is nearly 2,900 square feet larger than the planned kennel at the I-19 checkpoint. According to CBP data, the canine kennel building at the I-35 checkpoint is approximately 3,200 square feet, while the I-19 checkpoint kennel is planned for approximately 290 square feet. Laredo sector officials said that the I-35 checkpoint kennel was large because the building includes an office, storage room, bathing room for the canines, bathroom, mechanical room, and a quarantine area. Tucson sector officials stated that the smaller size is because the I-19 checkpoint kennel will be only used as a rest area for the canines. Plans for the types of resources to be placed at the I-19 checkpoint for conducting effective operations are also consistent with relevant guidelines. For example, at the I-19 checkpoint, the Border Patrol plans to include canine facilities, non-intrusive inspection technology, vehicle lifts, and loading docks, among other resources, as shown in figure 13. 
Community members living near checkpoints we visited across the four southwest border states told us they generally supported checkpoints operating near them because of the law enforcement presence they provide, but remained most concerned about the property damage that occurs when illegal aliens trespass on private property to avoid the checkpoints. Border Patrol policy highlights the need to detect and respond to this circumvention activity; however, officials stated that other priorities sometimes precluded positioning more than a minimum number of agents and resources on checkpoint circumvention routes. Tucson sector officials stated that when a permanent checkpoint on I-19 is constructed, it will provide additional technological enhancements to monitor activity in the surrounding areas, but they have not documented the number of agents that would need to be deployed to address this activity. Despite concerns regarding property-damage incidents, community members we spoke with generally said that checkpoint operations had not adversely impacted their communities in terms of violent crime, business, or property values, except for those around the I-19 checkpoint in Arizona. Although the Border Patrol has identified performance measures that could be used to monitor the quality of life in areas affected by checkpoint operations, these measures have not been implemented. Data were not available to determine any causal relationship between checkpoint operations and community well-being; however, some data were available showing overall trends in real estate values, tourism, and crime without controlling for checkpoint operation or other factors. 
Members of local governments, state and local law enforcement, business groups, ranchers, and residents responding to our request for input generally supported the Border Patrol and checkpoint operations because of the law enforcement presence they provide, but generally agreed that checkpoint operations cause illegal aliens and smugglers to attempt to circumvent the checkpoint—resulting in adverse impacts to nearby residents and communities, such as private property damage, theft, and littering. These concerns were cited most often by ranchers and residents in areas around checkpoints. Ranchers in Texas, California, and Arizona said that they experienced cut fences that allowed cattle or other livestock to escape; drained water tanks or water wastage from irrigation lines that were left open; theft of water, food, clothing, or vehicles; and trash including plastic water jugs and food containers that are either left on the property as trespassers move through the area or that wash down rivers or streams from other areas. Local law enforcement officials near two checkpoints in Texas we visited said that they frequently respond to calls from ranchers for these reasons, and ranchers said that these impacts have increased their ranch security expenses. The level of concern was lower in areas where checkpoints operated infrequently. For example, the San Diego sector’s checkpoints on I-5 and I-15 are rarely operational, resulting in little need for circumvention and fewer concerns expressed by community members. The greatest level of concern about trespassing and property damage was expressed in the Tucson sector, which has experienced higher levels of illegal alien apprehensions than other sectors across the southwest border. 
In fiscal year 2008, for example, just under half of the 705,000 total Border Patrol apprehensions of illegal aliens across the southwest border occurred in the Tucson sector, and sector officials cited a high level of interaction with the community in responding to citizen concerns. However, these apprehensions occurred all across the sector, making it difficult to determine the extent that trespassing on private property was due to attempts to circumvent the checkpoint or use of other transit routes. Our review of Border Patrol data for the Tucson sector showed that significantly more illegal aliens were apprehended in the area around the I-19 checkpoint than at the checkpoint itself, although the reverse was true for drug seizures, as shown in table 9. Specifically, data show that in fiscal year 2008 about 94 percent of apprehensions occurred in the areas surrounding the I-19 checkpoint compared to 27 percent of drug seizures. These data also show that increases in the number of apprehensions and drug seizures were greater in the areas surrounding the I-19 checkpoint than at the checkpoint itself between 2007 and 2008, suggesting that community impact may have also increased. Specifically, from 2007 to 2008 there was a 72 percent increase in the number of apprehensions in the surrounding area, compared to a 7 percent increase at the checkpoint. Data show that the number of drug seizures for these areas increased by 27 percent from 2007 to 2008, while declining by 8 percent at the checkpoint. 
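The location-share and year-over-year figures cited here are straightforward to compute from raw counts. Since table 9 is not reproduced in this excerpt, the counts below are hypothetical values chosen only so the helper functions reproduce the reported percentages:

```python
def share(part, all_parts):
    """Percentage of total activity accounted for by one location."""
    return 100.0 * part / sum(all_parts)

def pct_change(old, new):
    """Year-over-year percentage change."""
    return 100.0 * (new - old) / old

# Hypothetical FY2007 -> FY2008 apprehension counts for illustration only
# (chosen to reproduce the reported 94%, 72%, and 7% figures).
apprehensions = {"surrounding area": (5000, 8600), "checkpoint": (500, 535)}

fy08_counts = [counts[1] for counts in apprehensions.values()]
print(f"surrounding-area share, FY08: {share(8600, fy08_counts):.0f}%")
for location, (fy07, fy08) in apprehensions.items():
    print(f"{location}: {pct_change(fy07, fy08):+.0f}% from FY07 to FY08")
```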
Tucson sector Border Patrol officials stated that illegal activity on circumvention routes generally occurs in remote locations, but the Tucson sector has not yet implemented global positioning technology sector-wide, as used by some other sectors, to pinpoint the location of apprehensions and drug seizures. Instead, this information is tracked within geographic grids, each covering 7.4 square miles. In addition, while the CAR contains data fields to capture apprehensions of those attempting to circumvent checkpoints, definitions for these fields were not used consistently across all checkpoints, based on an analysis of checkpoint officials’ responses to our data collection instrument. Border Patrol officials stated that the checkpoint strategy is intended to push illegal aliens and smugglers off-highway into rural areas where they can be more easily apprehended, and the extent that these persons attempt to avoid the checkpoint is an indicator that checkpoints are an effective deterrent. Border Patrol officials said that when a new checkpoint is put in place, or an enhancement is made at an existing checkpoint, apprehensions commonly increase, followed by a decrease as smugglers and illegal aliens search for less rigorously defended transit routes that provide a greater chance of success. In terms of the I-19 checkpoint, for example, Border Patrol officials attributed increasing rates of checkpoint circumvention apprehensions to fixing the checkpoint at its permanent location at KP 42 in November 2006. Over time, officials said that the fixed location for the checkpoint resulted in more continuous operation and greater ability to deploy sensors and other resources that enhance checkpoint effectiveness. Border Patrol officials acknowledged that the checkpoint strategy can adversely impact private property owners, and said that sometimes there were not enough agents in place to deter illegal activity or apprehend trespassers in surrounding areas. 
According to Tucson sector officials, for example, eight agents per shift are assigned to work the checkpoint lanes and two to four agents per shift are generally assigned in proximity to the I-19 checkpoint to address activity in the surrounding areas, but that number varies from shift to shift and depends on the activity levels during a given time of year. Border Patrol policy highlights the need to detect and respond to checkpoint circumvention, stating that it is just as critical to checkpoint effectiveness as the inspection process, and should be addressed with appropriate staff. However, despite the rapid increase in overall staffing numbers on the southwest border, Border Patrol sector managers may have other priorities for staff placement and stations may only staff checkpoints—and circumvention routes—with the minimum required manpower. In the Tucson sector, for example, checkpoints and other interior locations had lower priority for staffing than border locations, especially border towns such as Nogales, which are major transit routes for illegal activity and had experienced higher levels of violent crime. As the Border Patrol has gained better control of these priority areas at the border, planning documents show that emphasis will shift to other areas, including the I-19 checkpoint. Checkpoint guidance also identifies other resources, such as technology, that can assist Border Patrol agents in detecting and responding to circumvention activity, but checkpoints do not always have these resources available on a continuous basis. This guidance states that a combination of resources, including ground sensors and video surveillance cameras, should be used by each sector and station as needed to monitor and address local circumvention activities. 
According to Border Patrol officials, the placement and use of these resources can depend on the proximity of checkpoints to populated areas, the extent of illegal activity in the area, and the availability of circumvention routes around the checkpoint. However, officials said that checkpoints may have lower priority than other Border Patrol activities to receive new technology, and older equipment may be less reliable and less available for continuous operation, particularly at tactical checkpoints. For example, the four cameras being used at the I-19 checkpoint are not connected to commercial power and are therefore vulnerable to generator and microwave transmitter issues, according to sector officials. We also noted during our visit to the Tucson sector that staff were not available to monitor the remote surveillance cameras, limiting their effectiveness. A sector official stated that these cameras were continuously monitored only when there was a sufficient number of staff operating the checkpoint lanes and back-up patrols. Having these technology resources available—and monitored—on a continuous basis is important because Border Patrol officials said that circumvention routes were more likely to be patrolled in response to a sensor alert or camera indicating that a response is needed to address activity in these areas. Tucson sector officials stated that when a permanent checkpoint on I-19 is constructed, it will include a wider range of sensors and technology improvements, such as SBInet towers, that will provide a better view of the surrounding areas than the towers at the current checkpoint site and that will enhance agents’ ability to monitor the circumvention areas around the checkpoint. However, checkpoint design and planning documents do not include an estimate of the number of agents that would be deployed to address circumvention activity at the new checkpoint. 
Our prior work on strategic workforce planning stated that staffing decisions, including needs assessments and deployment decisions, should be based on valid and reliable data. Per Border Patrol checkpoint design guidelines, sector officials are expected to determine the number of staff they will need for checkpoint operations, such as inspections and processing, as part of the design process for constructing new checkpoints. For example, the anticipated staffing level for the new permanent I-19 checkpoint would be 39 agents on the peak shift, according to Border Patrol officials. However, the anticipated deployments of these agents are not included in official design or operational documents, and sector officials are not required to conduct a workforce planning needs assessment to determine how to best address impacts on surrounding areas from illegal aliens and smugglers attempting to avoid the checkpoint. Sector officials stated that technology improvements would enable fewer agents to monitor illegal traffic in these areas, and that a sufficient number of agents will be deployed as necessary in response to the level of illegal activity. However, given the limited resources currently deployed to address circumvention activity at the I-19 checkpoint and community concerns regarding the extent of illegal activity in the circumvention areas, conducting a workforce planning needs assessment at the checkpoint design stage could help the Border Patrol ensure that sufficient resources are planned for and deployed at the new checkpoint to address circumvention activity. Citizen reports are also important sources of information alerting Border Patrol agents to illegal aliens and smugglers trespassing on private property, and Border Patrol officials told us they also make efforts to establish relationships with local ranching and community groups. 
For example, in the Laredo and San Diego sectors, there are a total of 19 agents whose full-time or collateral duties are to regularly coordinate with local ranchers, maintain relationships, and provide the ranchers with a direct point of contact. Border Patrol stations within these sectors can develop their own community relations strategies. In the Rio Grande Valley sector, for example, Falfurrias station officials told us they hold a monthly meeting with local ranchers to discuss any issues or information that should be shared regarding the level of activity and number of incidents on the various circumvention routes. In contrast, the Patrol Agent in Charge of the Kingsville station said he prefers to maintain personal relationships with local area ranchers. The Tucson sector, where officials have cited a high level of community interaction, has a full-time Community Relations Director who participated in more than 45 community meetings from 2006 to 2009 to discuss issues relating to the current and planned I-19 checkpoint. Across other sectors, community relations strategies can include participating in community events and organizations such as fairs, car shows, and reading to children in local schools. Despite concerns regarding property damage-type incidents, representatives of local government, state and local law enforcement, business, ranching, and residents responding to our request for input generally stated that checkpoints had no adverse effects on their communities in terms of violent crime rates, business, and real estate values, similar to findings in our 2005 report, in which most local community members we contacted saw traffic checkpoints as beneficial to their communities. In some cases this could be because many checkpoints are located in remote areas away from large population centers, or because some checkpoints are operated infrequently. 
In regard to crime, officials from 12 law enforcement agencies across the four southwest border states told us that checkpoint operations did not cause an increase in local violent crime rates. Furthermore, officials from seven of these law enforcement agencies stated that they believed checkpoints, as well as the presence of Border Patrol agents, provided a deterrent to criminal activity in their communities. For example, officials from the Alamogordo, New Mexico, Department of Public Safety stated that their 2007 crime rates were among the lowest of similarly sized cities in New Mexico. The Department's Director believed that this was due, in part, to the presence of the Border Patrol agents at the checkpoints on U.S. Routes 54 and 70, approximately 25 miles south and west of the city of Alamogordo, respectively. In regard to real estate values, an official from the local Economic Development Council in Kingsville, Texas, told us that home sales and values north of the U.S. Route 77 checkpoint had increased over the years, which he believed was due to the increase in agents purchasing homes in the area. In contrast, some community members living near the I-19 checkpoint in the Tucson sector—which is operated for nearly 24 hours per day and is in the proximity of small communities—expressed concerns that checkpoint operations caused adverse impacts to their communities in terms of increased violent crime, loss of tourism, and reduced real estate values. A 2007 letter from U.S. Representative Gabrielle Giffords to the Border Patrol Chief detailed concerns from residents in her district that smugglers were invading their communities, threatening their homes, and that they had been affected by violence associated with what appeared to be disputes among drug smugglers. 
Residents of Tubac, Arizona, a community close to the I-19 checkpoint, reported concerns that tourism in their community had declined due to the proximity of the checkpoint. In addition, the president of a local civic association from Tubac told us that the checkpoint had negatively affected home sales and housing values. Border Patrol officials said that they are not yet using performance measures they had developed to examine how checkpoint operations—including checkpoint circumvention activity—impact the quality of life in surrounding communities. The measures—which are consistent with the Border Patrol National Strategy to reduce crime and consequently improve the quality of life and economic vitality in border enforcement areas—examine major crime reduction, smuggler activity in areas affected by checkpoint operations, and coordination with other federal, state, and local law enforcement agencies. (See appendix II for a description of the quality of life measures.) We have previously reported that measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their programs. Our previous work has shown that when evaluating performance, agencies need measures that demonstrate results, cover multiple priorities, provide useful information for decision making, and successfully address important and varied aspects of program performance. The Border Patrol has included data fields in the CAR to collect information relevant to some of the quality of life measures, but the Border Patrol has not developed specific guidance for using the data to assess the impact of operations on surrounding areas, and not all sectors and stations consistently enter the data necessary to use the measures. 
These limitations in guidance and data collection have hindered the ability of the Border Patrol to assess the impact of checkpoints on local communities. For example, one quality of life measure examines the number of apprehensions and seizures turned over to the checkpoint from other agencies, known as agency assists or referrals, when the checkpoint is operational or non-operational. These data can provide information on the extent to which the Border Patrol is able to address illegal activity traveling through communities to circumvent the checkpoint when it is operational. While the Border Patrol does not consistently track agency assists and referrals from local law enforcement agencies in the CAR, data we obtained from two local sheriff's departments near the I-19 checkpoint in the Tucson sector show that analyzing this information over time may be informative. As shown in figure 14, Arizona's Santa Cruz County Sheriff's Department reported a total of 84 assists to other agencies, including the Border Patrol, in District 2 (which contains the I-19 checkpoint) in 2008, an increase of approximately 8 percent from 2007. North of the I-19 checkpoint, Pima County Sheriff's Department Green Valley District reported a total of 247 referrals to the Border Patrol in 2008, a decrease of approximately 7 percent from 2007. Analysis of these data by the Border Patrol may show, for example, the extent to which relative fluctuations and differences in agency assists or referrals in and among locations are due to checkpoint operations or other factors, such as Operation Stonegarden, a program providing funding to state and local law enforcement personnel to provide additional coverage on routes of egress from border areas. Sufficient data were not available for us to determine any causal relationship between checkpoint operations and local crime rates, tourism trends, or real estate values in nearby communities. 
With respect to the I-19 checkpoint, these data limitations also precluded a comparison of community impacts for the time before and after the checkpoint on I-19 became fixed at the KP 42 location in November 2006. Such a comparison would require a complete set of historical data to develop a baseline understanding, before interpreting factors that can change the baseline. However, there are limited data sets for specific geographic areas around the I-19 checkpoint, with county-level data being the smallest available geographic unit in many cases. We conducted a literature search and identified several studies that attempted to link Border Patrol checkpoints or other aspects of border enforcement operations to local crime, business, or real estate values. These studies were also unable to establish a causal link between Border Patrol operations and changes in crime rates or real estate values due to unavailable or incomplete data, or the inability to separate the impact of border operations from many other contributing factors, such as local and national economic factors. In terms of crime data, for example, officials from Santa Cruz and Pima County Sheriff's Departments said that data are not available in their information systems to identify the number of crimes committed by illegal aliens, or how many crimes occurred on checkpoint circumvention routes. A more detailed discussion on our methodology and limitations to this analysis can be found in appendix I. Despite the limitations in determining any causal relationship between checkpoint operations and crime, tourism, and real estate values in nearby communities, some historical data were available from federal, state, and local agencies that could be used to show overall trends in real estate values, tourism, and crime for some communities near the I-19 checkpoint, relevant counties, and the state, without controlling for checkpoint operations or other factors. 
As shown in figure 15, the I-19 checkpoint in Arizona is located in the northern part of Santa Cruz County and the county immediately to the north is Pima County. Communities closest to the I-19 checkpoint include Tubac, which is located approximately 4 miles south of the checkpoint in Santa Cruz County, and Green Valley, which is located about 15 miles north of the checkpoint in Pima County. Real estate property values for locations south and north of the I-19 checkpoint generally increased from 2002 through 2008, as measured by the median county tax assessed value, shown in figure 16. The Tubac community had the highest real estate values of the areas we examined, with property values more than three times as high as properties in Santa Cruz County, and more than twice as high as properties in the Green Valley community and Pima County. Data on the median sales price and net assessed value of homes in these areas showed similar results, as shown in appendix IV. Tourism data, as reflected by visitor data reported by Arizona state parks, showed no consistent pattern from 2002 through 2008 for parks located near the Tubac community (Tubac Presidio State Historic Park), in other areas of Santa Cruz County (Patagonia Lake State Park), or statewide in Arizona. As shown in figure 17, the number of visitors to these parks generally fluctuated within a 15 percent window from year to year, except between 2006 and 2007, when visitors to the Tubac state park decreased by 29 percent, a substantial difference compared to other locations. According to an Arizona State Parks representative, this decline could have been caused by several factors, including a large number of events in 2006 at the Tubac state park to celebrate the park's 50th anniversary that resulted in more park attendees in 2006, an overall decline in visitors to other parks in Santa Cruz County, and a statewide decline in overall spending and international visitors. 
All of these parks experienced a decline in visitors in the following year, 2008, ranging from 7 to 10 percent. Similar declines were seen in other tourism data based on lodging statistics for the counties and state of Arizona (see appendix VI). Violent crime data from county sheriff departments showed that the number of homicides, sexual and aggravated assaults, and robberies was substantially lower in the district containing the I-19 checkpoint and the surrounding communities of Tubac, Tumacacori, Carmen, Amado, and Arivaca than in other nearby areas from 2004 through 2008, but has been increasing at a higher rate than nearby areas in the last 2 years, as shown in figure 18. Specifically, violent crime in District 2 almost doubled from 8 offenses in 2006 to 15 offenses in 2008. In contrast, violent crime in the Green Valley District north of the I-19 checkpoint has been decreasing since 2006, although the number of offenses remains almost twice as high. Additional information on crime trends for these counties can be found in appendix VII. Crime patterns were similar for property offenses, which include burglary, larceny, auto theft, and arson. As shown in figure 19, District 2 containing the I-19 checkpoint experienced a 38 percent increase in property crimes from 2007 to 2008 compared to the Green Valley District, although the total number of offenses in 2008 was much lower: 58 versus 534 offenses, respectively. County-level changes were also higher for Santa Cruz County compared to Pima County, which had a slight decline. Within the past few years, CBP and the Border Patrol have increased staff, fencing, and other technology at the border to deter repeated illegal border crossings. Despite these investments at the border, however, it would appear that checkpoints will continue to serve a purpose as part of the Border Patrol's three-tiered strategy. 
As long as agency assessments indicate that the majority of major criminal activity passes through the ports of entry undetected, checkpoints are uniquely positioned to provide additional opportunities to apprehend illegal aliens and contraband that travel from the ports along U.S. interstates or roads. Since our last report, the Border Patrol has established performance measures indicating checkpoint contributions toward apprehending illegal aliens and seizing illegal drugs, but the lack of information on those passing through checkpoints undetected continues to challenge the Border Patrol's ability to measure checkpoint effectiveness and provide public accountability. While the Border Patrol has developed other measures in response to our 2005 recommendation that collectively may provide some indication of checkpoint effectiveness and efficiency, these measures cannot be effectively used until field agents accurately and consistently collect and enter performance data into the checkpoint information system. Field agents are unlikely to do so until guidance is improved and rigorous oversight is implemented at the station, sector, and headquarters levels. The Border Patrol states that it will take action to address these issues. Until these actions are completed, however, the integrity of the CBP performance and accountability system in regard to checkpoint operations is uncertain. We reiterate the need for CBP to act on our prior recommendation to implement a cost-effectiveness measure in order to help encourage action by headquarters and field managers to identify best practices for checkpoint operation, and implement these practices across locations. Similarly, while the Border Patrol's national strategy cites the importance of assessing the community impact of Border Patrol operations, the implementation of such measures is noticeably lacking. 
Implementing such measures in areas of community concern may serve to provide greater attention and priority in Border Patrol operational and staffing decisions to address any existing issues. Although the Border Patrol’s checkpoint design process includes factors related to the safety and convenience of travelers, agents, and detainees, the absence of explicit requirements in Border Patrol checkpoint design guidelines and standards to consider current and expected traffic volumes when determining the number of inspection lanes and to conduct traffic studies could result in inconsistencies in the checkpoint design process and the risk that checkpoints may not be appropriately sized. Furthermore, the fact that the checkpoint strategy intends to push illegal aliens and smugglers to areas around checkpoints—which could include nearby communities—underscores the need for the Border Patrol to ensure that it deploys sufficient resources and staff to these areas. Conducting a needs assessment when planning for a new or upgraded checkpoint could help better ensure that officials consider the potential impact of the checkpoint on the community and plan for a sufficient number of agents and resources. To improve the reliability and accountability of checkpoint performance results to the Congress and the public, we recommend that the Commissioner of Customs and Border Protection take the following four actions: Establish milestones for determining the feasibility of a checkpoint performance model that would allow the Border Patrol to compare apprehensions and seizures to the level of illegal activity passing through the checkpoint undetected. Establish internal controls for management oversight of the accuracy, consistency, and completeness of checkpoint performance data. Implement the quality of life measures that have already been identified by the Border Patrol to evaluate the impact that checkpoints have on local communities. 
Implementing these measures would include identifying appropriate data sources available at the local, state, or federal level, and developing guidance for how data should be collected and used in support of these measures. Use the information generated from the quality of life measures in conjunction with other relevant factors to inform resource allocations and address identified impacts. To ensure that the checkpoint design process results in checkpoints that are sized and resourced to meet operational and community needs, we recommend that the Commissioner of Customs and Border Protection take the following two actions: Require that current and expected traffic volumes be considered by the Border Patrol when determining the number of inspection lanes at new permanent checkpoints, that traffic studies be conducted and documented, and that these requirements be explicitly documented in Border Patrol checkpoint design guidelines and standards. In connection with planning for new or upgraded checkpoints, conduct a workforce planning needs assessment for checkpoint staffing allocations to determine the resources needed to address anticipated levels of illegal activity around the checkpoint. We provided a draft of this report to DHS and DOJ for review and comment. DHS provided written comments on August 24, 2009, which are presented in appendix VIII. In commenting on the draft report, DHS and CBP stated that they agreed with our recommendations and identified actions planned or underway to implement the recommendations. DOJ did not provide formal comments. CBP and DOJ also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Homeland Security, the Commissioner of U.S. Customs and Border Protection, the Attorney General, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any further questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. This report addresses the following four principal questions: How has checkpoint performance contributed to meeting Border Patrol goals for securing the southwest border, and what factors, if any, have affected checkpoint performance? To what extent has the Border Patrol established measures of performance for checkpoints? To what extent has the Border Patrol considered community impacts in the placement and design of checkpoints since 2006, including the planned I-19 permanent checkpoint? How do checkpoint operations impact nearby communities, particularly those near the I-19 checkpoint, and to what extent does the Border Patrol address those impacts? To address our objectives, we examined and analyzed Border Patrol checkpoint policy documents, reports, manuals, and guidance concerning border strategy and checkpoint operations. We interviewed cognizant Border Patrol officials at Washington, D.C. headquarters, officials in sector offices, and personnel at selected permanent and tactical checkpoints. We visited five Border Patrol sectors—San Diego, California; Tucson, Arizona; Laredo, Texas; Rio Grande Valley, Texas; and El Paso, Texas (which also covers all of New Mexico). In total, we visited 12 permanent checkpoints and 3 tactical checkpoints, as shown in table 10. The five sectors we visited were selected to provide a range in the size and types of checkpoint operations; estimated annual volume of illegal aliens; volume of vehicular traffic transiting checkpoints; topography and density of road networks; presence or absence of large urban areas on or near the border, both on the U.S. and Mexican sides; and types of checkpoints (permanent and tactical). 
As we were told by the Border Patrol in deciding which sectors and checkpoints to visit, and as we found during our site visits, these five sectors contained a wide variety of operating conditions. For example, we observed that traffic volumes varied widely at different checkpoints. Similarly, there were variations in the estimated numbers of illegal aliens entering these sectors over the last several years, and differences in topography, with some being comparatively mountainous and others being comparatively flat. During the winter months, the Laredo and Rio Grande Valley sectors have the Rio Grande as a natural barrier to illegal immigration, while the Tucson sector has a flat desert at the border that is easily crossed. Some sectors have permanent checkpoints, such as at Temecula, California, that must be supplemented with tactical checkpoints, because of substantial secondary road networks around the permanent checkpoint. Others, such as Rio Grande Valley, have no alternative secondary roads available to evade the permanent checkpoints on the limited north-south highways. Some sectors, such as San Diego and Laredo, have large U.S. and Mexican urban areas on or very near the international border, while others, such as Tucson, have only a few much smaller cities on either side at the border. In choosing these sectors, which are located in all four southwest border states (California, Arizona, New Mexico, and Texas), we sought and found a wide range of conditions that appear to reasonably represent the range of operating conditions faced by the Border Patrol across the Southwest. However, we were unable to observe all operating conditions at all times and the conditions we describe are therefore based on available documentation and observations at our site visits only. 
We also interviewed selected officials in communities near some of the checkpoints, including state and local law enforcement and community officials, selected community leaders, citizens, and owners of local businesses. These included the communities of Temecula, California; Green Valley, Arizona; Nogales, Arizona; Sahuarita, Arizona; Tubac, Arizona; Laredo, Texas; Sarita, Texas; Kingsville, Texas; Falfurrias, Texas; Las Cruces, New Mexico; and Alamogordo, New Mexico. Because these places and persons constituted a nonprobability sample, the results from our site visits cannot be generalized to other locations, checkpoints, local officials, or citizens, but what we learned from our site visits and the persons we interviewed provided a useful perspective on the issues addressed in this report. However, this report does not address some of the larger issues surrounding illegal immigration into the United States, such as the disparities in average daily wages between Mexico and the United States, and the incentives created by these disparities for illegal immigration, as well as the difficulties of neutralizing such disparities through work site enforcement. We have addressed some of these issues in prior work. In addition, although deterring illegal immigration through the likelihood of detection and apprehension is a goal of the Border Patrol—and checkpoints—we did not attempt to measure the deterrent effect of the Border Patrol's operations, as this would have required, among other things, opinion surveys of Mexican citizens and potential contraband smugglers. This report also does not address the larger factors related to illegal drugs in the United States, such as the demand for illegal drugs in the United States and the incentives those create, U.S. and Mexican government efforts to address the smuggling of illegal drugs, and the U.S. government anti-drug policies. 
We conducted this performance audit from July 2008 to August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform our audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides this reasonable basis for our findings and conclusions based on our audit objectives. To assess the contributions checkpoints make to the Border Patrol's mission and the factors that affect checkpoint performance, we reviewed Border Patrol policy and guidance regarding checkpoint operations and interviewed officials at Border Patrol headquarters, including the Chief and other senior managers, and officials responsible for operating checkpoints in five of the nine Border Patrol sectors on the southwest border. We obtained data reported in Border Patrol's checkpoint activity report (CAR) for all checkpoints, permanent and tactical, located in southwest border states. We were limited to data from fiscal years 2007 and 2008 because, while the CAR was implemented in July 2006, consistent data for all checkpoints were not available until October 2006—the beginning of fiscal year 2007. To obtain checkpoint apprehensions and seizures by sector, we added apprehensions and seizures that occurred at each sector's checkpoints for each fiscal year. Of the 71 checkpoints located in the nine southwest border sectors, only two, both in the Rio Grande Valley sector, defined apprehensions and seizures "at checkpoint" in a manner inconsistent with Border Patrol guidance. As of August 2008, these two checkpoints count all apprehensions and seizures occurring within 2.5 miles of the checkpoint as occurring "at checkpoint." 
Prior to August 2008, these two checkpoints used the same definition as other checkpoints—that an apprehension or seizure at a checkpoint occurs “at the immediate checkpoint.” Nevertheless, we believe these checkpoint data to be sufficiently reliable for reporting purposes, with limitations noted, based on the steps we describe in the next section. We also obtained data from the Border Patrol on total apprehensions and drug seizures across each of the nine southwest border sectors to compare the relative contributions of each sector’s checkpoints to overall apprehensions and drug seizures on the southwest border. In addition, we obtained data from the CAR on the number of aliens from special interest countries encountered at checkpoints in fiscal years 2007 and 2008, and obtained information from U.S. Customs and Border Protection (CBP) and Border Patrol officials regarding how those encounters are managed and documented. We reviewed Border Patrol guidance and interviewed officials responsible for checkpoint operations in five Border Patrol sectors regarding factors that influence checkpoint performance. We also interviewed Drug Enforcement Administration and selected local law enforcement officials located near checkpoints in five Border Patrol sectors to determine the extent to which Border Patrol checkpoints support or impact their respective law enforcement operations. To assess Border Patrol’s checkpoint performance measures, we reviewed documents from Border Patrol and CBP, including a document identifying various checkpoint performance measures developed by Border Patrol, CBP’s annual Performance and Accountability Reports (PAR) for fiscal years 2006 through 2008, and DHS’s annual performance reports for fiscal years 2007 through 2010. We also reviewed our prior report on checkpoints, which found that Border Patrol had not established adequate performance measures for checkpoints. 
We met with Border Patrol headquarters officials responsible for developing and implementing checkpoint performance measures to discuss the measures and how they are used by Border Patrol management. We also met with officials at the Border Patrol sectors we visited to discuss the checkpoint performance measures. In addition, we compared Border Patrol's performance measures and data collection practices with the Government Performance and Results Act of 1993 (GPRA) and GAO's Standards for Internal Control in the Federal Government. To assess the reliability of checkpoint performance data and to determine how checkpoint supervisors input information into the CAR, we sent a data collection instrument to Border Patrol officials, who provided it to all Border Patrol stations along the southwest border responsible for operating checkpoints. The CAR is the primary data collection system for checkpoint performance data. We received responses from 60 checkpoints. We determined, based on these responses, our own observations of checkpoint data entry at some checkpoints, and a review of Border Patrol-provided data, that data on "at checkpoint" apprehensions and seizures were sufficiently reliable for reporting purposes, but other data fields were not consistently collected and therefore not reliable for our reporting purposes. Based on the results of the data collection instrument, we identified various factors that contribute to checkpoint data reliability issues. We also interviewed Border Patrol headquarters officials and officials at the five sectors we visited in the field about data integrity procedures, including methods by which data are checked and reviewed for accuracy. We also reviewed documents to determine what guidance is provided for collecting and reporting checkpoint performance data, and what steps could be taken to address identified data problems. 
To assess Border Patrol’s reporting of checkpoint performance measures in the annual CBP PAR, we compared the reported results with our own calculations of checkpoint performance data. The checkpoint performance measures reported in the PAR are (1) apprehensions at checkpoints as a percentage of total Border Patrol apprehensions, (2) drug seizures at checkpoints as a percentage of total Border Patrol drug seizures, and (3) percentage of checkpoint cases referred to a U.S. Attorney. For the first two measures, we used data from the CAR to calculate the total number of checkpoint apprehensions and checkpoint drug seizures, and divided that result by total apprehensions and drug seizures in Border Patrol’s nine southwest border sectors. For the referral measure, we again used data from the CAR to calculate the total number of checkpoint cases that resulted in a referral to a U.S. Attorney. We then divided that number by total apprehensions occurring at southwest border checkpoints. We noted discrepancies between Border Patrol’s reported performance and our analysis of the results of Border Patrol performance measures, and we discussed these discrepancies with Border Patrol officials responsible for checkpoint performance measurement. We attempted to analyze other aspects of checkpoint performance, such as apprehensions at checkpoints compared to apprehensions on circumvention routes and apprehensions and seizures involving methods of concealment. However, our ability to report on these measures for all checkpoints was limited because we identified inconsistencies through our data collection instrument in how those data are reported by checkpoints in southwest border sectors. We discussed the issues we found with Border Patrol headquarters officials responsible for oversight of checkpoint operations. We also developed additional measures intended to allow for comparisons between checkpoints, but certain data limitations hinder detailed quantitative analysis. 
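The arithmetic behind the three PAR measures described above can be sketched briefly. All totals in this illustration are invented; they do not reproduce actual CAR figures.

```python
# Illustrative arithmetic for the three PAR checkpoint measures.
# All totals below are hypothetical examples, not actual CAR data.

def percent(part, whole):
    """Express part as a percentage of whole, guarding against zero."""
    return 100.0 * part / whole if whole else 0.0

checkpoint_apprehensions = 12_500
sector_apprehensions = 705_000     # all nine southwest border sectors
checkpoint_drug_seizures = 3_200
sector_drug_seizures = 18_000
us_attorney_referrals = 1_100      # checkpoint cases referred to a U.S. Attorney

# (1) apprehensions at checkpoints as a share of sector apprehensions
m1 = percent(checkpoint_apprehensions, sector_apprehensions)
# (2) drug seizures at checkpoints as a share of sector drug seizures
m2 = percent(checkpoint_drug_seizures, sector_drug_seizures)
# (3) referrals to a U.S. Attorney as a share of checkpoint apprehensions
m3 = percent(us_attorney_referrals, checkpoint_apprehensions)

print(f"{m1:.1f}% | {m2:.1f}% | {m3:.1f}%")
```

Note that the denominator differs between the measures: the first two are shares of sector-wide totals, while the referral measure is a share of checkpoint apprehensions only, which is one reason reported results can diverge from independent recalculations.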
As stated earlier, it is not possible to use the numbers of apprehensions and seizures made at checkpoints as the sole basis for comparison between checkpoints, because there are a number of factors and variables that can influence and impact checkpoint performance. For example, a checkpoint that accounted for 500 apprehensions is not necessarily better or more effective than a checkpoint that accounted for 50 apprehensions. The differences in apprehension totals between the checkpoints could be attributed to a number of factors that are outside of the control of the checkpoint, such as variations in operational hours and differences in traffic volume. As such, we developed measures that were intended to normalize or control for these variables. These measures included examining apprehensions and seizures on an operational hour basis, apprehensions and seizures per agent year, and apprehensions and seizures based upon the average annual daily traffic volume at the checkpoint. First, in the case of our operational hour analysis, checkpoints that were not operational as long as others appeared to perform better than checkpoints that were operational nearly 24 hours per day. For example, using this measure, the I-5 checkpoint in the San Diego sector is one of the best performing checkpoints. However, it is only operational, on average, 1.5 hours per day. Meanwhile, the checkpoint located on U.S. Route 281 in Falfurrias, Texas, seizes more drugs and apprehends more illegal aliens than the I-5 checkpoint, and is open 23 hours and 20 minutes every day, on average, but does not perform as well as the I-5 checkpoint using an operational hour measure. Therefore, while the I-5 checkpoint performs well using an operational hour analysis measure, one can assume that drugs and illegal aliens pass through that checkpoint in the hours that it is not operational. 
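The operational-hour normalization discussed above, and the distortion it can introduce, can be illustrated with a short sketch. Only the average daily operational hours (about 1.5 for the I-5 checkpoint and 23 hours 20 minutes for the U.S. Route 281 checkpoint) come from our analysis; the annual apprehension totals below are invented.

```python
# A minimal sketch of the "per operational hour" measure. The daily
# operational hours come from the report; the annual totals are invented.

DAYS_PER_YEAR = 365

def per_operational_hour(annual_total, hours_per_day):
    """Apprehensions (or seizures) per hour the checkpoint was open."""
    return annual_total / (hours_per_day * DAYS_PER_YEAR)

i5_rate = per_operational_hour(4_000, 1.5)               # open ~1.5 hours/day
us281_rate = per_operational_hour(10_000, 23 + 20 / 60)  # open ~23h20m/day

# US 281 apprehends more in absolute terms, yet the briefly operated
# I-5 checkpoint shows a far higher hourly rate -- illustrating why an
# operational-hour measure alone can be misleading.
assert i5_rate > us281_rate
```

The same caution applies to the other normalized measures: dividing by hours, agents, or traffic volume controls for one variable but can overstate the performance of checkpoints that are rarely open.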
Second, we attempted to develop a cost effectiveness measure for permanent checkpoints that would examine apprehensions and seizures per agent work year. We chose this measure because a question frequently asked about government programs is, “What is known about their cost effectiveness?” One potential measure of such cost effectiveness for the Border Patrol would be how much it cost to apprehend a single person or seize illegal drugs at one checkpoint compared with other checkpoints or other Border Patrol activities. While this measure and others should not be taken in isolation as guides to management decisions, knowledge of the basic costs of an agency’s key outcomes (such as apprehensions of illegal aliens) per unit of input (agent labor costs) can be part of the basis for improved allocation of resources. While such a performance measure can provide some information on cost effectiveness, some apprehensions or seizures may be considered more important to the agency than others. For instance, apprehending a drug smuggler or a terrorist might be considered more important than apprehending an illegal alien job seeker. Additionally, in attempting to develop this measure, we learned that at least 20 of the 32 permanent checkpoints on the southwest border have migrated to a format of four overlapping shifts, while the CAR is limited to reporting three shifts. As a result, at least 20 permanent checkpoints are unable to accurately report the number of agents assigned to the checkpoint, limiting our ability to conduct an analysis of apprehensions and seizures per agent work year. In addition, the Border Patrol does not track the number of agents staffed to line watch and roving patrol operations, so we could not compare the performance of checkpoints (as measured by apprehensions and seizures per agent work year) to these other Border Patrol activities. 
Third, we attempted to conduct an analysis of permanent checkpoints’ apprehensions and seizures in relation to traffic volume. Because it could be assumed that checkpoints with high traffic volumes may also have high apprehension and seizure totals, such an analysis was an attempt to normalize for differences in traffic volume to determine if certain checkpoints have higher apprehension and seizure rates per traffic volume than others. Higher rates of apprehensions and seizures could indicate a more effective checkpoint—that is, one that is better able to detect illegal activity—or it could be due to volume of illegal traffic coming through the checkpoint. We attempted to use the traffic volume numbers reported by checkpoint in the CAR, but could not determine whether those numbers were reliable. Therefore, we accessed the online transportation databases for the four southwest border states and obtained average annual daily traffic volume for major highways in California, Arizona, New Mexico, and Texas. However, we could not conduct a comprehensive analysis for all checkpoints using this measure because (1) checkpoints were located at various distances from a traffic counter or (2) checkpoints (particularly tactical checkpoints) were on a highway that did not have a traffic counter. Regarding checkpoint placement and design, we met with officials from CBP Facilities Management and Engineering, Border Patrol Tactical Infrastructures, Border Patrol Southwest Operations Division, and Border Patrol sector and station offices to understand the checkpoint placement and design process and the roles and responsibilities of each office and component. We also reviewed available Border Patrol and CBP documentation describing the checkpoint placement and design process, such as the 2003 Border Patrol Facilities Design Guide and Border Patrol checkpoint policy. 
We assessed the extent to which the Border Patrol considered community impacts in the design and placement of checkpoints that were either (a) new permanent checkpoints constructed in the last 3 years, or (b) new permanent checkpoints currently under construction. We did not include all checkpoints in our analysis, because the guidelines and standards for checkpoint placement and design have changed over time, and it would not be appropriate to assess checkpoints that were built decades ago with current checkpoint placement and design guidelines. In addition, limited documentation is available for checkpoints constructed prior to 2006, according to Border Patrol and CBP officials. We did not include checkpoints that were or are being renovated or expanded, because they would not be subject to Border Patrol’s checkpoint placement guidelines. We also did not include tactical checkpoints in our analysis, because these lack permanent infrastructure. We also included in our analysis the planned I-19 permanent checkpoint, rather than all planned checkpoints, because of the extent of the controversy regarding that particular checkpoint. We obtained information on checkpoints that met our criteria from Border Patrol and CBP. Based on this information, and review of available documentation, we determined that three checkpoints met our criteria: (1) the I-35 checkpoint in the Laredo sector, which was completed in 2006, (2) the U.S. Route 62/180 checkpoint in the El Paso sector, which was completed in 2009, and (3) the U.S. Route 83 checkpoint in the Laredo sector—expected to be completed in October 2009. For each of these checkpoints, we reviewed available documentation related to the placement and design of these checkpoints, including Border Patrol Facilities Design Guide—which has a section for checkpoint design—and Border Patrol checkpoint policy. 
These documents describe Border Patrol’s guidelines for placement and design of checkpoint facilities, including where they should be located and the types of resources and capabilities that checkpoints should include. Border Patrol officials noted that these documents provide general guidance on checkpoint placement and design, rather than specific requirements. We also reviewed environmental assessments, which describe the Border Patrol’s rationale for selection of a particular site, information on consideration of environmental and community impact, and the Border Patrol’s coordination with various federal and state agencies. We also talked with CBP and Border Patrol headquarters officials and Border Patrol sector officials about how placement and design decisions were made for these checkpoints. Regarding the planned I-19 permanent checkpoint, we used the Border Patrol Facilities Design Guide and Border Patrol checkpoint policy as our primary basis for evaluating the placement and design of the I-19 checkpoint. We reviewed available documentation from Border Patrol’s Tucson sector regarding the placement factors considered in determining the location of the I-19 permanent checkpoint. To observe firsthand the possible checkpoint locations, we traveled along the I-19 corridor, from Nogales to Tucson, with Border Patrol officials who explained their rationale for tentatively choosing the KP 41 location, and why other sites were not suitable, in their view. We reviewed available documentation related to the design of the checkpoint, including a site plan which showed the layout of the proposed checkpoint and draft environmental assessments. We also met with Border Patrol officials about their rationale for the design for the checkpoint, including total size (footprint), resources, and size of various functional areas. 
We talked with officials from the Arizona Department of Transportation (ADOT) about their input and requirements for the I-19 permanent checkpoint location. We obtained and analyzed ADOT traffic projection data, which was developed by a contractor working for ADOT, and talked with ADOT engineers and the I-19 permanent checkpoint project manager about traffic projections. We also talked with officials and reviewed planning documents from the Santa Cruz County Department of Community Development to obtain information on plans for development in the areas near the proposed checkpoint location. In addition, we reviewed the recommendations on the design of the permanent I-19 checkpoint made by the Workgroup on Southern Arizona Checkpoints, and the Border Patrol’s responses to the recommendations. We also analyzed the Program Advisory for the I-19 permanent checkpoint, which was prepared by an engineering firm contractor to the Border Patrol. This document identifies space recommendations based on an assessment of checkpoint requirements, traffic capacity, apprehension and holding assessments, checkpoint operations, and number of staff. We met with the project manager for the I-19 checkpoint project to discuss these documents and the placement and design of the checkpoint. The project manager also provided square footage information for both the proposed I-19 permanent checkpoint and the I-35 checkpoint in the Laredo sector, which allowed us to compare the sizes of the two checkpoints. We used the I-35 checkpoint as a basis for comparison because Border Patrol officials told us that the I-35 checkpoint was used as a frame of reference for the I-19 permanent checkpoint, and the I-35 checkpoint was also a large, permanent checkpoint. We also compared plans for the proposed I-19 permanent checkpoint with other large checkpoints in terms of number of primary and secondary inspection lanes, and total property size (acreage). 
We obtained data on number of inspection lanes and checkpoint size from the Border Patrol and CBP, and found the data to be sufficiently reliable for reporting purposes. For other potential variables, such as number of buildings, total building square footage, and traffic volume, we found that data were not consistently available and therefore were not sufficiently reliable for reporting purposes. To determine if the Border Patrol followed its checkpoint placement guidelines regarding locating checkpoints in remote areas for the three checkpoints either constructed or under construction since 2006, we calculated the distances between each checkpoint and the nearest school and hospital, as listed in MapInfo’s institution data. To determine the reliability of the institution data for schools, we compared it to the Department of Education’s Common Core Data (CCD) for schools in the counties surrounding the checkpoints. We determined that the institution layer supplemented with data from the CCD was sufficiently reliable for our purposes. To determine the reliability of the institution data for hospitals, we compared it to a list of Medicare eligible hospitals in the counties surrounding the checkpoints. We determined that the institution layer supplemented with the Medicare Hospital data was sufficiently reliable for our purposes. We also used 2000 Census data to estimate the populations within 1 and 5 miles of each location. Population estimates were calculated by using MapInfo to draw a circle with a 1- or 5-mile radius around the checkpoint locations provided by the Border Patrol. These circles were then layered over 2000 Census block group-level population data. For each block group, we determined the proportion of the area that fell within the 1- or 5-mile radius of the checkpoint. 
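The population estimation approach described here weights each block group’s population by the proportion of its area inside the 1- or 5-mile radius and sums the results. Our actual analysis used MapInfo with 2000 Census block group data; the sketch below substitutes invented rectangular block groups on a flat plane (coordinates in miles) and a Monte Carlo estimate of each overlap proportion.

```python
import math
import random

def overlap_fraction(block, center, radius, samples=20_000):
    """Monte Carlo estimate of the fraction of a rectangular block
    group (xmin, ymin, xmax, ymax) lying within `radius` of `center`.
    Coordinates are planar; a fixed seed keeps runs repeatable."""
    xmin, ymin, xmax, ymax = block
    cx, cy = center
    rng = random.Random(0)
    inside = sum(
        math.hypot(rng.uniform(xmin, xmax) - cx,
                   rng.uniform(ymin, ymax) - cy) <= radius
        for _ in range(samples)
    )
    return inside / samples

def population_within(blocks, center, radius):
    """Area-weighted estimate: each block group contributes its
    population times the proportion of its area inside the circle."""
    return sum(pop * overlap_fraction(geom, center, radius)
               for geom, pop in blocks)

# Invented block groups: (bounding box in miles, population).
blocks = [
    ((0, 0, 2, 2), 1_200),    # the 1-mile circle sits inside this box
    ((2, 0, 4, 2), 800),      # touches the circle only at its edge
    ((-2, -2, 0, 0), 950),    # touches the circle only at one corner
]
# Roughly pi/4 of the first block's residents fall inside the circle.
estimate = population_within(blocks, center=(1.0, 1.0), radius=1.0)
print(f"Estimated population within 1 mile: {estimate:.0f}")
```

The area-weighting assumes population is spread evenly within each block group, which is the standard simplifying assumption of this kind of areal interpolation.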
The Census population for each block group that fell within the boundary of interest was multiplied by the proportion to estimate the share of the block group’s population living within 1 or 5 miles of the checkpoint. The estimates for each block group were then added together to estimate the total population living around the checkpoint. For the planned I-19 permanent checkpoint, we calculated distances of four proposed checkpoint locations from the nearest school and hospital, and we used 2000 Census data to estimate the populations within 1 and 5 miles of each location. To assess the extent to which the Border Patrol has considered community impacts in the operation of checkpoints, we reviewed Border Patrol operational guidance, policy documents, and training materials that describe Border Patrol standards and processes for monitoring and responding to circumvention activity. We also met with Border Patrol officials at the 15 checkpoints we visited to discuss their efforts to monitor and respond to circumvention activity and how they coordinate with nearby communities. To understand the extent to which operations at Border Patrol checkpoints affect surrounding areas, we interviewed state and local law enforcement, business groups, community leaders, and other members of communities in the areas we visited to obtain their perspectives on impacts, if any, experienced by those who live or work within the areas surrounding checkpoints. In the five Border Patrol sectors we visited, we met with the following:
Fourteen law enforcement agencies in five sectors:
Tucson sector: Arizona Department of Public Safety; Pima County Sheriff’s Department; Sahuarita Police Department; Santa Cruz County Sheriff’s Department; and Tucson Police Department.
San Diego sector: California Highway Patrol; Oceanside Police Department; San Diego County Sheriff’s Department; and Temecula Police Department. 
Rio Grande Valley sector: Kenedy County Sheriff’s Department.
Laredo sector: Laredo Police Department and Webb County Sheriff’s Department.
El Paso sector: Alamogordo Department of Public Safety and Doña Ana County Sheriff’s Department.
Business organizations in three sectors: Temecula Chamber of Commerce (San Diego sector), Kingsville Economic Development Council (Rio Grande Valley sector), and Tubac Chamber of Commerce and other Chamber of Commerce members who were participants in the Community Workgroup on Southern Arizona Checkpoints town hall meeting (Tucson sector).
Ranchers and residents in three sectors (San Diego, Tucson, and Laredo) that we, or the Border Patrol, identified because they were landowners, residents, or business owners of the areas surrounding specific Border Patrol checkpoints.
For each sector we visited, we attempted to identify local community organizations or community members who could provide insight into the impacts of checkpoint operations. However, in some cases—such as when checkpoints were located in areas that were rural and remote—we were unable to identify appropriate local organizations or community members who could provide insight on the impacts of checkpoint operations. In those cases we relied on the perspectives of local law enforcement officials who patrolled the area of jurisdiction around the checkpoint. In our meetings with these organizations and community members, we asked specific questions regarding the impacts from checkpoint operations and Border Patrol’s response to these impacts. Because the checkpoints and potential interviewees were a nonprobability sample, the results from our site visits cannot be generalized to other locations and checkpoints; however, what we learned from our site visits provided useful background on the types of impacts that occur as a result of checkpoint operations. 
In the Border Patrol Tucson sector, there was a community group—known as the Community Workgroup for Southern Arizona Checkpoints—that was organized around issues relating to the I-19 checkpoint. The workgroup, chaired by the Border Patrol sector chief and U.S. Congresswoman Gabrielle Giffords, had the mission of building a better understanding among southern Arizona communities of checkpoint operations and community impacts and making recommendations on issues, concerns, and ideas regarding the current checkpoint and proposed permanent checkpoints. We reviewed documents from the workgroup and news articles that reported concerns of the community. While in the Tucson sector, we held a town hall-style meeting for all workgroup members and others from the community. The town hall meeting was facilitated with a prepared set of questions to ensure that we obtained input regarding perceived community impacts from checkpoint operations. This was the only Border Patrol sector that had an organized and involved community group actively discussing Border Patrol checkpoints, as far as we could determine. We attempted to determine the extent to which checkpoint operations can be linked to third-party indicators such as crime, economic, tourism, and property value data. Based on extensive research and analysis, we determined there were many limitations to drawing such causal links. Third-party indicators such as these are complex statistics affected by numerous factors, many of which have little to do with border enforcement. It is difficult to further separate checkpoint operations from overall border enforcement, and data on crime, economic activity, tourism, and property values can fluctuate in ways that have no correlation to checkpoint operations but may be influenced by other factors, such as the U.S. and global economies. 
Additionally, to understand any trends in these indicators there needs to be a complete set of historical data to develop a baseline understanding before interpreting factors that can change the baseline. If checkpoint operations could impact trends, data should be tracked for several years before and after a checkpoint is established to understand and control for external variables that may also be impacting trends. Given the community concerns regarding the checkpoint on the I-19 highway in the Tucson sector, we collected some historical data on crime, business, and real estate values for communities close to the I-19 checkpoint, the checkpoint’s surrounding and nearest counties, and the state of Arizona. Those data are presented in the report and appendices simply to show overall trends, without controlling for checkpoint operation or other factors. We are unable to draw any conclusions from these data and cannot link checkpoint operations to any of these indicators. We also cannot infer that real estate values, tourism, or crime trends are better or worse for nearby communities since the checkpoint on the I-19 highway became fixed at the KP 42 location in November 2006. We determined that the property value, economic, tourism, and crime data used within the report and appendices were sufficiently reliable for providing historical trends and general descriptions of each of the categories below. To determine the reliability of these data, we reviewed existing information about the data systems and interviewed knowledgeable officials about the data, as available. Property value data. We obtained and reviewed data on property values from federal, state, and local agencies. At the federal level we reviewed available data on property values from several nationwide data sets, such as Federal Housing Finance Board, U.S. Department of Housing and Urban Development, Case-Shiller, National Association of Realtors, and U.S. 
Census Bureau, and determined that their level of geographic reporting was not specific enough to the areas of interest, such as Tubac and Green Valley. At the state level we reviewed available data from the Arizona Department of Commerce and the Arizona Tax Research Association, which provides publications on property tax rates and assessed values. The publication is completed every 2 years and compiles county- and district-level data on net assessed values for all properties, which are based on tax rates and levy sheets that are officially adopted by each County Board of Supervisors. The values provided to the Board of Supervisors come from each of their Tax Assessor’s offices and are all calculated in the same way. Within this publication, Tubac is defined by the Tubac Fire District boundaries. We used available data from the Arizona Tax Research Association from 2000 to 2008, calculated percentage changes from year to year, and compiled the data into charts for reporting. At the county level, we reviewed median property values as provided by the Santa Cruz County and Pima County Tax Assessor’s Offices. Santa Cruz County Tax Assessor’s Office provided annual median property values for the county and the area of Tubac. Pima County Tax Assessor’s Office provided annual median property values for the county and the area of Green Valley, as defined by the Green Valley Fire District boundaries. Each of the offices uses guidelines set by the Arizona Department of Revenue to determine median property value, which is calculated based on sales for each tax year and has an 18-month lag. For example, for tax year 2008, the property sales data analyzed were from the time frame of January 1 through December 31, 2005, and January 1 through June 30, 2006. We used available data, calculated percentage changes from year to year, and compiled the data into charts for reporting. 
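The year-to-year percentage changes computed throughout this appendix follow the standard formula; a minimal sketch, with invented assessed values, is shown below.

```python
def pct_changes(values):
    """Year-over-year percentage change for a chronological series."""
    return [100.0 * (curr - prev) / prev
            for prev, curr in zip(values, values[1:])]

# Hypothetical net assessed values (in $ millions), four consecutive years.
assessed = [100.0, 110.0, 121.0, 108.9]
print([round(c, 1) for c in pct_changes(assessed)])  # [10.0, 10.0, -10.0]
```

Each change is expressed relative to the prior year, so an n-year series yields n-1 changes.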
We also obtained Multiple Listing Service (MLS) data from Brasher Real Estate, Inc., a real estate company located in the Tubac area. MLS data consist of listings of sales of land and residential properties within specific geographic areas. We obtained data on sales in Tubac, Rio Rico, Amado, Nogales, Tumacacori, and Green Valley. We used available data to calculate quarterly totals and compiled the data into a chart for reporting. Because real estate values can be calculated in different ways, we reported data on several indicators to provide a complete picture of property values in the various geographic areas. With each of these indicators, it is important to note that the significant nationwide housing market downturn can affect any and all of these data sets, and we cannot draw any link between checkpoint operations and the health of property values in a specific area. Economic data. We obtained and reviewed data from multiple state and national agencies, such as Arizona Indicators, the Arizona Department of Commerce, the U.S. Department of Labor’s Bureau of Labor Statistics, and the U.S. Department of Commerce’s Bureau of Economic Analysis and U.S. Census Bureau. Each of these data sets tracks information by the North American Industry Classification System (NAICS), the system used by the United States, Canada, and Mexico to classify establishments by industry. Because art and tourism are important to the economy of Tubac, and concerns had been expressed regarding the impact of the Border Patrol checkpoint on the real estate industry in Tubac, we also collected data on the Accommodation and Food Services; Arts, Entertainment, and Recreation; and Real Estate and Rental and Leasing NAICS industries for each of the data sets. One limitation of any type of economic data is that increases and decreases in percentage changes must be considered in the context of the significant economic downturn faced nationwide. 
After reviewing available data sets, we compiled data and calculated the annual percentage change for each of the indicators:
U.S. Department of Commerce, U.S. Census Bureau, County Business Patterns: annual data on annual payroll, number of employees, and number of establishments, broken down by NAICS category, for the state of Arizona, Pima County, Santa Cruz County, and the area of Tubac, through the end of 2006. Data from 2007 were unavailable at the time of our report. One limitation of these data is that variation in the number of establishments over time gives little sense of the size of the establishments or of the variations, for example, whether there were consolidations that reduced the number of establishments but not the level of economic output.
U.S. Department of Commerce, Bureau of Economic Analysis: annual data on the number of jobs and personal income, broken down by NAICS category, for the state of Arizona, Pima County, and Santa Cruz County, through the end of 2007. Annual state Gross Domestic Product data are also available through the end of 2007. Data for the Tubac area were not available.
U.S. Department of Labor, Bureau of Labor Statistics, Quarterly Census of Employment and Wages: quarterly and annual data on wages, broken down by NAICS category, for the state of Arizona, Pima County, and Santa Cruz County, through the end of 2007. Data for the Tubac area were not available.
Although the Bureau of Economic Analysis and Bureau of Labor Statistics data were more current than the U.S. Census Bureau County Business Patterns data (as data were available for 2007 and 2008), data were not available at the ZIP code level—only at the county level. Therefore, we decided not to include those data within our report. Tourism data. The Arizona Office of Tourism provides data on Arizona’s tourism industry, compiling data at the state and county levels. 
For the state of Arizona, Pima County, and Santa Cruz County, we obtained and reviewed data from 1998 to 2008 on occupancy rates, average daily rates, and revenue per available room, and from 2005 through 2008 on lodging demand and supply. Data for the Tubac area were not available for these indicators. However, Arizona State Parks collects data on the total number of visitors to all Arizona state parks, including a state park near Tubac. We obtained and reviewed data on the total annual number of visitors from 2001 to 2008 for Tubac Presidio State Historic Park and Patagonia Lake State Park, which is also in Santa Cruz County. We used available data to calculate percentage changes from year to year for each of the indicators, and compiled the data into various charts for reporting. Crime data. We obtained and reviewed 2004 through 2008 crime reporting from the Arizona Department of Public Safety, Pima County Sheriff’s Department, and Santa Cruz County Sheriff’s Department. We also obtained and reviewed 2004 through 2007 annual crime reporting from Federal Bureau of Investigation (FBI) Uniform Crime Reports for Pima County and the state of Arizona. Pima County and Santa Cruz County Sheriff’s Departments both provided additional district-level data for us to review crimes that occurred within the areas closest to the I-19 checkpoint. We calculated the annual percentage change for major crime categories and compiled the data into various charts for reporting. We present the crime data to show overall trends and the number of various types of offenses in the communities near the I-19 checkpoint, but cannot link any of these crimes to checkpoint operations, due to several important limitations. First, the local law enforcement agencies we collected data from do not track the citizenship status of those arrested for crimes and could not identify which crimes were committed by illegal aliens. 
They also do not determine whether a crime was committed by someone attempting to circumvent the checkpoint. Accordingly, there is no way to determine if a particular criminal act was committed by an illegal alien who was attempting to circumvent the checkpoint or if the crime was unrelated to the checkpoint. Second, the local law enforcement agencies we collected data from compile their crime data by county or by district, not by a specific geographic region around checkpoints. As a result, these agencies could not provide data that would show the number and types of crimes that occurred within a certain radius around a checkpoint. In 2006, the Border Patrol convened a working group led by Border Patrol headquarters officials with participation from field representatives. This group identified 21 possible performance measures regarding checkpoint operations. These 21 possible performance measures were divided into four main groupings, including:
At the border
Quality of life
The 21 performance measures and a description of each measure are listed below. 1. Ensure the traffic checkpoints are consistently operational in accordance with national and sector priorities and threat levels: This measure is to examine the percentage of time traffic checkpoints are operational compared to non-operational. 2. Maintain compliance with national Border Patrol checkpoint policy: This measure is to examine the percentage of time for each reason why traffic checkpoints are non-operational. 3. Determine effectiveness of canines at traffic checkpoints: This measure is to examine the number of smuggling events, both human and narcotics, at traffic checkpoints detected by canines compared to the number of smuggling events detected without canine assists. 4. 
Identify types of concealment methods used by smugglers at traffic checkpoints: This measure is to examine the number of apprehensions made at traffic checkpoints with concealment methods used compared to apprehensions without concealment methods.

5. Identify the number of aliens in smuggling loads: This measure is to examine the number of apprehensions in each smuggling load made at traffic checkpoints.

6. Utilize technologies in support of traffic checkpoint operations to identify the appropriate technology required for efficient checkpoint operations: This measure is to examine the number of apprehensions and seizures attributable to technology support for traffic checkpoint operations.

7. Examine the effectiveness of sensors on traffic checkpoint operations: This measure is to examine the number of apprehensions and seizures attributable to sensor activations when the traffic checkpoints are operational or non-operational.

8. Examine operating and maintenance cost effectiveness of checkpoint operations: This measure is to examine the cost effectiveness associated with operating and maintaining permanent traffic checkpoints compared to tactical traffic checkpoints. This measure is also to examine the cost effectiveness associated with operating and maintaining traffic checkpoint operations compared to the overall budget allocated for border enforcement activities.

9. Evaluate changes in patterns and trends to identify checkpoint circumvention routes: This measure is to compare the number of apprehensions at the traffic checkpoint to apprehensions on circumventing routes.

10. Compare checkpoint apprehensions to apprehensions from circumventing routes when the checkpoint is operational: This measure is to compare the number of apprehensions at the traffic checkpoint to apprehensions on circumventing routes.

11.
Compare checkpoint narcotics seizures to narcotics seizures on circumventing routes when the checkpoint is operational: This measure is to compare the number of seizures at the traffic checkpoint to seizures on circumventing routes.

12. Monitor effects of checkpoint operation on other areas: This measure is to compare the percentage of apprehensions and seizures at traffic checkpoints to the apprehensions and seizures in adjacent zones or other zones impacted by checkpoint operations.

13. Examine the impact the operational checkpoint has on transportation check activities, such as aircraft, bus, or train checks: This measure is to compare the number of apprehensions from transportation checks when traffic checkpoints are operational to when they are non-operational.

14. Examine the impact operational checkpoints have on staging areas (i.e., stash houses): This measure is to compare the number of apprehensions at staging areas when traffic checkpoints are operational or non-operational.

15. Compare traffic checkpoint operation apprehensions to other enforcement activities: This measure is to examine the number of traffic checkpoint apprehensions compared to all other enforcement activities.

16. Compare traffic checkpoint operation seizures to other enforcement activities: This measure is to examine the number of traffic checkpoint seizures compared to all other enforcement activities.

17. Compare man-hours dedicated to checkpoint operations to man-hours dedicated to other enforcement activities: This measure is to compare the percentage of manpower used at traffic checkpoints to the manpower used at other enforcement activities.

18. Examine the reduction of major crimes in areas affected by checkpoint operations and beyond: This measure is to examine the number of apprehensions of major crimes in areas affected by traffic checkpoint operations compared to the number of major crimes in other border enforcement areas without traffic checkpoint operations.

19.
Refer smugglers for prosecution: This measure is to examine the number of border-related cases pertaining to traffic checkpoint operations that are referred to the U.S. Attorney (including state, county, and local attorneys) or not referred.

20. Coordinate with federal, state, local, and tribal agencies to support and improve border enforcement activities: This measure is to compare the number and type of events/cases related to traffic checkpoint operations that were referred to, or for which notifications were made to, other agencies.

21. Examine the number and location of apprehensions turned over to the Border Patrol by other agencies when the checkpoint is operational to determine the effect of an operational checkpoint on communities: This measure is to compare the number of apprehensions turned over to the Border Patrol by other agencies when the traffic checkpoint is operational to when it is non-operational.

The following figures present aerial photographs of the four potential checkpoint locations on I-19 in southern Arizona considered by the Border Patrol. These photographs show the interstate, nearby roads, and the surrounding areas.

In addition to the median property values that were included earlier in this report, we identified additional indicators for showing local trends in property values. We obtained multiple listing service (MLS) data from a real estate agency in Tubac, and net assessed values, as reported by the Arizona Tax Research Association. MLS data provide listings for residential and land sales at the ZIP code level. The data show all listings within a ZIP code area, providing the listing prices, final sale prices, and number of transactions in specific geographic areas. The Arizona Tax Research Association publishes annual data on the total net assessed values for all properties in the state of Arizona. Net assessed value is the full cash value, or market value, of all real property in Arizona.
According to MLS data, the median sales price for a home in Tubac fluctuated from July 2006 to March 2009, as shown in figure 26. In 2008 the median sales price was approximately $384,000, and in 2007 it was $375,000. The net assessed values of properties in Santa Cruz County, Tubac, Pima County, and Green Valley increased each year from 2000 to 2008, as shown in table 11 and figure 27. The net assessed value of properties in Santa Cruz County increased by 18 percent from 2007 to 2008, from approximately $341,684,000 to approximately $404,366,000.

We identified indicators for showing local economic trends from the U.S. Census Bureau. The U.S. Census Bureau provides an annual series of County Business Patterns data available at the national, state, county, and ZIP code levels, tracking the number of establishments, number of employees, and total payroll across industries. The data are derived from U.S. Census Bureau business establishment surveys and federal administrative records. These data are available through the end of 2006. The U.S. Census Bureau's County Business Patterns provides subnational economic data that covers most of the country's economic activity, is used for studying the economic activity of small areas and analyzing economic changes over time, and is available by North American Industry Classification System (NAICS) industry.

According to the Arizona Department of Commerce, art and tourism are important to the economy of Tubac, and concerns had been expressed regarding the impact of the Border Patrol checkpoint on the real estate industry in Tubac. Accordingly, the NAICS industries included within the following analysis are Accommodation and Food Services; Arts, Entertainment, and Recreation; and Real Estate and Rental and Leasing. In 2006, over half of the total 87 establishments in Tubac were in retail trade and accommodation and food services, with 38 and 10 establishments, respectively, as shown in figure 28 and table 12.
The four other industries with the highest numbers of establishments in Tubac are also shown in figure 28: other services (except public administration), with eight establishments, and construction; real estate, rental and leasing; and professional, scientific and technical services, each with seven. From 2004 to 2006, the total number of establishments in Tubac increased from 67 to 87, as shown in figure 29. In 2006, the 87 establishments represented a 16 percent increase from 2005, compared to a 1.3 percent increase for Santa Cruz County.

With respect to the number of real estate, rental and leasing establishments from 2001 to 2006, Tubac consistently had fewer than 10 establishments, and Santa Cruz County ranged between 51 and 65 establishments, while Pima County followed a pattern similar to that of the state of Arizona, as shown in figure 30. Figure 31 shows that in 2006, Tubac had 2 arts, entertainment, and recreation establishments, compared to 305 in Pima County and 1,859 in the entire state of Arizona. From 2005 to 2006, Santa Cruz County had an increase in the number of accommodation and food service establishments, from 89 to 96, while Tubac had no change, with 10 establishments each year. Arizona and Pima County had increases of 2 and 1 percent, respectively, from 2005 to 2006, as shown in figure 32.

In terms of the number of employees, Tubac saw a decrease from 2004 to 2005, in contrast to Santa Cruz County, Pima County, and the state of Arizona, as shown in figure 33. From 2005 to 2006, the number of employees in Tubac increased by 2 percent, while the number of employees in the state increased by 8 percent. With respect to total annual payroll, from 2004 to 2005 Tubac had a 1 percent decrease, while the state and counties had increases of between 6 and 10 percent, as shown in figure 34. However, from 2005 to 2006, Tubac saw a larger percentage increase—19 percent, to $10,093,000—than the state and counties.
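The comparisons above (for example, the 16 percent increase in Tubac establishments and the 18 percent rise in Santa Cruz County net assessed values) are simple year-over-year percentage changes. As an illustrative sketch only, not part of GAO's stated methodology (the function name is ours), the calculation in Python is:

```python
def yoy_percent_change(values):
    """Year-over-year percentage change for a series of annual values,
    rounded to one decimal place."""
    return [round((curr - prev) / prev * 100, 1)
            for prev, curr in zip(values, values[1:])]

# Santa Cruz County net assessed values for 2007 and 2008, as cited above.
print(yoy_percent_change([341_684_000, 404_366_000]))  # [18.3], the roughly 18 percent increase
```

The same helper applies to any of the annual series in this appendix (occupancy rates, offenses, payroll), producing one change value per consecutive pair of years.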
The Arizona Office of Tourism provides information on tourism within the state and counties. It provides statewide and county data on occupancy rates, revenue per available room, and lodging supply and demand, through 2008. However, none of these indicators were available for the area of Tubac. Overall, occupancy rates for the state of Arizona, Pima County, and Santa Cruz County have been in a steady decline since 2006, with Santa Cruz County having the largest percentage decrease in 2008 occupancy rates, as shown in figure 35. According to an Arizona Office of Tourism representative, the state and county downward trends in tourism are part of the downward trends seen in Arizona's general economic climate, and the overall demand for tourism has been decreasing, possibly due to a general downturn in the nationwide economy. In 2008, Santa Cruz County had a 62 percent occupancy rate for all lodging in the county.

With respect to revenue per available room, the state of Arizona, Santa Cruz County, and Pima County followed similar trends from 2006 to 2008. From 2007 to 2008, all areas saw a decline in revenue per available room, with Santa Cruz County having the largest percentage decrease, as shown in figure 36. In 2008, Santa Cruz County earned $45 in revenue per available room, a decline from $50 the previous year.

Regarding crime indicators, we obtained additional data from the Federal Bureau of Investigation (FBI) Uniform Crime Reporting (UCR) program, the Pima County Sheriff’s Department, and the Santa Cruz County Sheriff’s Department. Law enforcement agencies throughout the country—at the city, county, and state levels—participate in the UCR program by providing summarized reports on eight major offenses, which include violent crimes and property crimes known to law enforcement, through the end of 2007, at the state and jurisdiction levels.
In addition to these eight crime categories, we obtained data on all other crimes from the Pima County and Santa Cruz County Sheriff’s Departments, which provide information on the frequency of offenses within their jurisdictions. In our discussions with each of these agencies, they told us that they do not attribute any of the trends described below to checkpoint-specific activities. Furthermore, the agencies do not track which offenses are committed by illegal aliens.

According to FBI UCR data, from 2006 to 2007, the state of Arizona saw a decline in both violent and property crimes, as shown in figure 37. Data on these crimes within the state of Arizona are presented to allow for comparisons to the local jurisdiction crime rates. From 2006 to 2007, Arizona's combined total of violent and property crimes declined from approximately 316,000 to 310,000.

According to offense data provided by the Santa Cruz County Sheriff’s Department, total offenses in Santa Cruz County declined from 2006 to 2008, as shown in figure 38. The Santa Cruz County Sheriff’s Department has three patrol districts: District 1 is the area of Rio Rico, which includes the I-19 corridor from Nogales to District 2; District 2 includes the I-19 checkpoint and Tumacacori, Carmen, Tubac, Amado, and Arivaca; and District 3 includes Sonoita, Elgin, Canelo, Lochiel, Mowery, and San Rafael Valley. As shown in figure 38, the majority of crimes in Santa Cruz County occur within District 1, the area of Rio Rico, with 2,085 total offenses in 2008, compared to 398 and 219 in Districts 2 and 3, respectively. From 2007 to 2008, District 1 had a 7 percent decrease in total offenses, District 2 had a 3 percent decrease, and District 3 had a 0.5 percent increase. With regard to violent crimes, from 2005 to 2008 District 2 saw an increase each year, while the numbers of violent crimes within Districts 1 and 3 fluctuated, as shown in figure 39.
From 2007 to 2008, District 1 had an increase from 40 to 47 offenses, District 2 had an increase from 10 to 15, and District 3 had a decrease from 5 to 2 violent crime offenses. Property crime offenses increased in Districts 1 and 2 from 2004 to 2008, as shown in figure 40. More recently, between 2007 and 2008, District 1 had an increase from 281 to 303 offenses, District 2 had an increase from 42 to 58, and District 3 had an increase from 23 to 26.

In addition to crime data on districts within Santa Cruz County, we also obtained crime data for the Pima County Green Valley District, which is adjacent to District 2 of the Santa Cruz County Sheriff’s Department and closest to the I-19 checkpoint. Figures 41, 42, and 43 present various crime data from Santa Cruz County Sheriff’s Department District 2 and the Pima County Sheriff’s Department Green Valley District. From 2005 to 2008, the number of violent crimes within both districts fluctuated, with no clear pattern emerging, as shown in figure 41. With respect to property crime data, the number of crimes within the Green Valley District varied from 2005 to 2008, while property crimes within Santa Cruz County District 2 remained relatively stable over the same time period, as shown in figure 42. For the most recent quarter in which data are available, there were 147 property crime offenses in the Pima County Sheriff’s Department Green Valley District, compared to 17 in Santa Cruz County Sheriff’s Department District 2. We also obtained cross-district data on criminal damage offenses, which also show no clear trends in the number of offenses within each district from 2005 to 2008, as shown in figure 43. In the last quarter of 2008, there were 37 criminal damage offenses in the Pima County Sheriff’s Department Green Valley District, compared to one in Santa Cruz County Sheriff’s Department District 2.
The number of narcotics and drug-related offenses in Santa Cruz County Sheriff’s Department District 2 peaked in 2006 and has declined since then, as shown in figure 44. In 2008, there were a total of five narcotics and drug-related offenses. In addition to data on major crimes, we also obtained data on selected other offenses and incidents within Santa Cruz County Sheriff’s Department District 2, from 2004 to 2008 (see table 13).

In addition to the contact named above, Cindy Ayers, Assistant Director, and Adam Hoffman, Analyst-in-Charge, managed this assignment. Ryan MacMaster, Jim Russell, and Amy Sheller made significant contributions to the work. Michele Fejfar and Chuck Bausell assisted with design, methodology, and data analysis, and Melinda Cordero assisted with mapping analysis. Frances Cook and Christine Davis provided legal support. Pille Anvelt and Karen Burke developed the report’s graphics, and Katherine Davis assisted with report preparation.
The U.S. Border Patrol, part of the Department of Homeland Security's Customs and Border Protection (CBP), operates checkpoints on U.S. roads, mainly in the southwest border states where most illegal entries occur. As part of a three-tiered strategy to maximize detection and apprehension of illegal aliens, Border Patrol agents at checkpoints screen vehicles for illegal aliens and contraband. GAO was asked to assess (1) checkpoint performance and factors affecting performance, (2) checkpoint performance measures, (3) community impacts considered in checkpoint placement and design, and (4) the impact of checkpoint operations on nearby communities. GAO work included a review of Border Patrol data and guidance; visits to checkpoints and communities in five Border Patrol sectors across four southwest border states, selected on the basis of size, type, and volume, among other factors; and discussions with community members and Border Patrol officials in headquarters and field locations. Checkpoints have contributed to the Border Patrol's ability to seize illegal drugs, apprehend illegal aliens, and screen potential terrorists; however, several factors have impeded higher levels of performance. Checkpoint contributions included over one-third of the Border Patrol's total drug seizures, according to Border Patrol data. Despite these and other contributions, Border Patrol officials said that additional staff, canine teams, and inspection technology were needed to increase checkpoint effectiveness. Border Patrol officials said they plan to increase these resources. The Border Patrol established three performance measures to report the results of checkpoint operations, and while they provide some insight into checkpoint activity, they do not indicate if checkpoints are operating efficiently and effectively. 
In addition, GAO found that a lack of management oversight and unclear checkpoint data collection guidance resulted in the overstatement of checkpoint performance results in fiscal year 2007 and 2008 agency performance reports, as well as inconsistent data collection practices at checkpoints. These factors hindered management's ability to monitor the need for program improvement. Internal control standards require that agencies accurately record and report data necessary to demonstrate agency performance, and that they provide proper oversight of these activities. The Border Patrol generally followed its guidelines for considering community safety and convenience in four recent checkpoint placement and design decisions, including the proposed permanent checkpoint on Interstate 19 in Arizona. Current and projected traffic volume was a key factor in the design of the proposed Interstate 19 checkpoint, but was not considered when determining the number of inspection lanes for three recently completed checkpoints in Texas due to a lack of guidance. Having explicit guidance on using current and projected traffic volumes could help ensure that future checkpoints are appropriately sized. Individuals GAO contacted who live near checkpoints generally supported their operations but expressed concerns regarding property damage that occurs when illegal aliens and smugglers circumvent checkpoints to avoid apprehension. The Border Patrol is not yet using performance measures it has developed to examine the extent that checkpoint operations affect quality of life in surrounding communities. The Border Patrol uses patrols and technology to detect and respond to circumventions, but officials said that other priorities sometimes precluded positioning more than a minimum number of agents on checkpoint circumvention routes. The Border Patrol has not documented the number of agents needed to address circumventions at the proposed I-19 checkpoint. 
Given the concerns of nearby residents regarding circumventions, conducting a workforce planning needs assessment at the checkpoint design stage could help ensure that resources needed for addressing such activity are planned for and deployed.
The West Bank and Gaza cover about 2,400 square miles and have a combined population of about 4.6 million people. The West Bank has a land area of 2,263 square miles and a population of about 2.8 million. Gaza has a land area of 139 square miles and a population of about 1.8 million. The Palestinian Authority and Israel administer areas in the West Bank, and the Hamas-controlled de facto authorities control Gaza (see fig. 1). Since Hamas's takeover of Gaza in June 2007, USAID has adjusted U.S. assistance to Gaza to take into account this factional and geographical split between Fatah and Hamas and to comply with U.S. law and policy.

The U.S. government’s foreign assistance program in the West Bank and Gaza is designed, among other things, to support development assistance, provide critical infrastructure programming, and improve security conditions on the ground while reinforcing Palestinian respect for the rule of law. According to USAID, its role is to assist in building institutions for an eventual Palestinian state resulting from a comprehensive peace agreement, to promote a viable economy, and to improve the everyday lives of Palestinians.

In September 2015, we reported on the five development sectors administered by the USAID mission in fiscal years 2012–2014. Our analysis from that report indicated the following information by sector:

Water resources and infrastructure. The primary objective of USAID’s largest project in this sector is to focus on the rehabilitation and construction of roads, schools, water, and wastewater projects.

Health and humanitarian assistance. The primary objective of USAID’s largest project in this sector is to focus on food security, including meeting food needs, enhancing food consumption, and increasing the dietary diversity of the most vulnerable and food-insecure non-refugee population.

Democracy and governance.
The primary objective of the largest project in this sector is to address infrastructure recovery needs through improvements in community infrastructure and housing, economic recovery, and development through the creation of income generation and business development opportunities. Private enterprise. The primary objective of USAID’s largest project in this sector is to strengthen the competitiveness and export potential of at least four sectors: agriculture and agribusiness, stone and marble, tourism, and information technology. Education. The primary goal of USAID’s largest program in this sector is to improve access to quality education and mitigate challenges to youth development in marginalized areas of the West Bank. According to USAID, since September 2015, the USAID mission has reorganized its work along three new lines: (1) governance and civic engagement; (2) water, energy, and trade; and (3) social services and humanitarian assistance. In March 2006, the USAID West Bank and Gaza mission approved and issued various antiterrorism policies and procedures for program assistance for the West Bank and Gaza in a document known as Mission Order 21, which it last updated in October 2007. In 2008, the USAID mission developed a key compliance review process to monitor compliance with antiterrorism policies and procedures. This process is reflected in formal mission notices. In response to federal laws and executive orders prohibiting assistance to entities or individuals associated with terrorism, in March 2006, the USAID mission adopted a key administrative policy document known as Mission Order 21. The stated purpose of Mission Order 21, last amended in 2007, is to describe policies and procedures to ensure that the mission’s program assistance does not inadvertently provide support to entities or individuals associated with terrorism. 
Such procedures include (1) vetting, (2) obtaining antiterrorism certifications, and (3) including specific mandatory provisions in award documents. Mission Order 21 is intended to balance development efforts in the West Bank and Gaza with ensuring that the assistance does not benefit entities or individuals who engage in terrorist activity, according to a senior USAID official. The vetting requirements in Mission Order 21 apply to certain contractors and subcontractors, recipients of grants and cooperative agreements, trainees/students, and recipients of cash or in-kind assistance, with some exceptions. All program awards are required to have a reference to Mission Order 21, according to USAID.

Mission Order 21 requires that certain individuals and non-U.S. organizations undergo vetting, which involves checking their names and other identifying information against databases and other sources to determine if they have any identified links to terrorism. Non-U.S. organizations are cleared by vetting their key individuals regardless of nationality, including U.S. citizens. The vetting process provides reasonable assurance that program assistance is “not provided to or through any individual, private or government entity, or educational institution that is believed to advocate, plan, sponsor, engage in, or has engaged in, terrorist activity.” Applicable vetting is required before an award is made or assistance is provided. Appendix II provides more detailed information on USAID’s vetting process.

Mission Order 21 Vetting Requirements

Mission Order 21 requires USAID’s West Bank and Gaza mission to vet the following:

All non-U.S. prime awardee and subawardee organizations or individuals proposed for a contract or subcontract above $25,000. The $25,000 threshold is cumulative for multiple awards to the same organization or individual within a rolling 12-month period.

All non-U.S.
prime awardee and subawardee organizations or individuals (other than public international organizations) proposed to receive cash or in-kind assistance under a cooperative agreement, grant, or subgrant, regardless of the dollar amount.

All non-U.S. individuals who receive USAID-financed training, study tours, or invitational travel in the United States or third countries, regardless of the duration; or who receive training in the West Bank and Gaza lasting more than 5 consecutive work days.

All entities or specifically identified persons who directly receive other forms of cash or in-kind assistance, with the following exceptions (these thresholds apply to assistance per occasion): individuals who receive jobs under employment generation activities; individuals who receive cash or in-kind assistance of $1,000 or less; organizations that receive cash or in-kind assistance of $2,500 or less; households that receive micro-enterprise loans or cash or in-kind assistance of $5,000 or less; and vendors of goods or services acquired by USAID contractors and grantees in the ordinary course of business for their own use.

Non-U.S. organizations are cleared by vetting their key individuals regardless of nationality, including U.S. citizens. In addition, Mission Order 21 also provides that even if vetting would not otherwise be required under these rules, vetting will be conducted whenever there is reason to believe that the beneficiary of assistance or the vendor of goods or services commits, attempts to commit, advocates, facilitates, or participates in terrorist acts, or has done so in the past. Mission Order 21 provides specific details on how vetting procedures will be operationalized and the information implementing partners need to provide to specific entities within the USAID mission.
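The dollar thresholds and exceptions above form a decision rule. The following is a hypothetical, simplified sketch of that rule in Python; the function and category names are ours, not USAID's, and it omits nuances such as cumulative 12-month totals, training-duration rules, and the "reason to believe" override that triggers vetting regardless of thresholds.

```python
def vetting_required(award_type, amount, is_us_person=False):
    """Simplified sketch of Mission Order 21 vetting thresholds.

    Category names are illustrative. Non-U.S. organizations are vetted
    via their key individuals regardless of nationality, a detail this
    simplification does not capture.
    """
    if is_us_person:
        return False  # vetting applies to non-U.S. organizations/individuals
    if award_type == "contract":
        return amount > 25_000  # cumulative over a rolling 12-month period
    if award_type == "grant_assistance":
        return True  # grants/cooperative agreements: vetted regardless of amount
    if award_type == "individual_assistance":
        return amount > 1_000   # cash or in-kind to an individual
    if award_type == "organization_assistance":
        return amount > 2_500   # cash or in-kind to an organization
    if award_type == "household_assistance":
        return amount > 5_000   # micro-enterprise loans or cash/in-kind to a household
    return False  # e.g., ordinary-course vendor purchases are excepted

print(vetting_required("contract", 30_000))  # True
print(vetting_required("individual_assistance", 800))  # False
```

The sketch illustrates why the compliance reviews described later focus on documentation: each subaward must be classified and its cumulative amounts tracked before the applicable threshold can be applied.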
Attachments to Mission Order 21 include a form that prime awardees must use to provide the particular details necessary to conduct vetting of an individual or entity as well as required language that must be incorporated in USAID-funded awards for the West Bank and Gaza program. Mission Order 21 requires that all U.S. and non-U.S. organizations sign an antiterrorism certification before being awarded a grant or cooperative agreement to attest that the organization does not provide material support or resources for terrorism. The antiterrorism certification is generally an attachment to the award documentation that certifies, in part, that the “recipient did not provide…and will take all reasonable steps to ensure that it does not and will not knowingly provide material support or resources to any individual or entity that commits, attempts to commit, advocates, facilitates, or participates in terrorist acts.” Mission Order 21 requires that all prime awards and subawards for contracts, grants, and cooperative agreements contain two mandatory provisions (which are included as clauses in award documents): a provision prohibiting support for terrorism and a provision restricting funding to facilities that recognize or honor an individual or entity that commits or has committed terrorism. These two mandatory provisions inform awardees of their legal duty to (1) “prohibit transactions with, and the provisions of resources and support to, individuals and organizations associated with terrorism” (antiterrorism clause) and (2) restrict “assistance for any school, community center, or other facility named after any person or group of persons that has advocated, sponsored, or committed acts of terrorism” (facility naming clause). Both mandatory clauses must be incorporated in agreements at the time of signature. 
In July 2008, the USAID mission established a post-award compliance review function under the Office of Contracts Management to assess implementing partners’ compliance with the requirements of the antiterrorism procedures contained in Mission Order 21 when making subawards. This function was detailed to implementing partners in a July 2008 notice issued by the USAID mission. In 2009, we reported that USAID had enhanced its Mission Order 21 oversight efforts by hiring a compliance specialist and implementing a new compliance review process that provides additional assurance over contract and grant management. These recurring, detailed reviews were developed specifically to examine implementing partners’ subaward compliance with Mission Order 21 in USAID’s program assistance for the West Bank and Gaza. Since 2009, the internal compliance review process has been an essential control function that allows USAID to provide reasonable assurance that all prime awardees are in compliance with all applicable requirements when making subawards and providing funding for trainees. The compliance specialist uses a checklist to assess implementing partners’ subaward compliance in four categories: (1) the proper vetting of subawardees and beneficiaries, (2) the timely incorporation of the antiterrorism certificate, (3) the timely incorporation of applicable mandatory provisions, and (4) monthly subaward reporting. To conduct these compliance reviews, the compliance specialist assesses policies, procedures, and program activities associated with an awardee, interviews relevant implementing partners’ staff, conducts periodic site visits, and inspects subaward documentation. The compliance specialist produces an official compliance review report and provides feedback to the prime awardee regarding any weaknesses in compliance identified during the review. 
According to these reports, throughout the review process the compliance specialist educates relevant prime awardee staff members about the Mission Order 21 requirements and informally shares best practices and suggestions with the prime awardee to help improve compliance in the future. In addition to identifying weaknesses in compliance, the reports also include a general observations section documenting noncritical, compliance-related issues identified during the review process. These observations are organized into three categories: (1) subaward reporting, (2) internal control over compliance with Mission Order 21, and (3) the cross-referencing of incorporated special mandatory provisions. This general observations section includes recommendations on how prime awardees can improve their policies and procedures to strengthen their compliance environment and avoid compliance-related issues in the future. Implementing partners are granted 2 weeks following receipt of the compliance review report to provide a written response to explain the reasons for any identified weaknesses in compliance and to outline the corrective actions the prime awardee will take to mitigate them, according to USAID. The compliance specialist follows up with the implementing partner to ensure that responses to address any identified weaknesses in compliance are submitted on time and to check on the sufficiency of the corrective actions stated by the implementing partner, according to USAID. USAID officials told us that following up to ensure that all weaknesses in compliance have been sufficiently resolved is a key aspect of the overall compliance review process. Failure to comply with vetting, as outlined in Mission Order 21, may lead to disallowance of costs incurred by the prime awardee if the organization or individual in question is found to be ineligible to receive USAID funds, according to USAID. 
The compliance specialist, the acquisition supervisor, and the director of the Office of Contracts Management meet with senior USAID mission officials annually to present the outputs, analysis, and notable findings of the compliance review cycle, according to USAID. In addition, common issues identified during the compliance reviews are shared with the mission’s Program Support Unit and the Resident Legal Officer so that they can address such issues in future Mission Order 21 training sessions for prime awardees. The compliance review function is a key control in the mission’s assistance program because it assesses the quality of the mission’s antiterrorism oversight over time. The compliance review process and procedures are described in a series of stand-alone documents, such as notices issued by the mission to implementing partners involved in assistance programs (see fig. 2). For example, in December 2012 the mission issued a notice to implementing partners detailing new compliance review protocols that expand the scope of the compliance reviews to include a better understanding of the implementing partner’s internal controls in addition to Mission Order 21 compliance. This and other pertinent formal notices are posted on USAID’s West Bank and Gaza website. According to officials in the mission, the contents of the notices and compliance with Mission Order 21 are discussed in each program award orientation meeting with implementing partners. No new formal notices related to the compliance review have been issued since the end of 2012 because the latest guidance remains effective and there have been no changes to the process since the issuance of the latest notice, according to USAID. USAID officials told us that they anticipate updating Mission Order 21 at some point in the future to reflect lessons learned from implementation of a joint USAID and State Partner Vetting System Pilot Program that vets both U.S. and non-U.S. 
persons, as well as lessons learned from ongoing vetting programs for the West Bank and Gaza, Afghanistan, and Syria assistance. One purpose of the pilot program is to help assess the extent to which partner vetting adds value as a risk mitigation tool, and if so, under what circumstances vetting should occur, according to USAID. Under the pilot program, USAID will test vetting policies and procedures, evaluate the resources required for vetting, and seek input from implementing partners, Congress, and other stakeholders about the impact of vetting on USAID-funded delivery of foreign assistance. USAID currently is implementing the pilot program in Guatemala, Kenya, Lebanon, the Philippines, and Ukraine. GAO reviewed USAID’s compliance reviews, the official reporting documents created during the compliance review function described above. That review and GAO’s examination of prime awards and a generalizable sample of subawards from fiscal years 2012–2014 found that USAID generally complied with requirements for vetting as well as inclusion of required antiterrorism certification and mandatory provisions in awards. Our review was based on the following documentation relating to prime awards and subawards: USAID’s internal compliance reviews of 24 prime awardees and the more than 14,000 subawards that they made. The compliance review reports USAID provided to us identified some weaknesses in prime awardees’ compliance with all aspects of Mission Order 21 requirements when making subawards and providing funding for trainees, including vetting, antiterrorism certification, and mandatory provisions. However, according to USAID officials, all noncompliance weaknesses identified in the compliance review reports for active awards were addressed as part of the overall compliance review process. According to USAID, prime awardees are required to amend applicable subaward documentation to incorporate the mandatory provisions if the subawards are ongoing and active. 
Prime awardees are not required to amend documentation for subawards that have already expired and are no longer active. GAO's review covered 48 prime awards and a random generalizable sample of 158 subawards associated with these prime awards for the period of fiscal years 2012–2014. We found that USAID complied with the three applicable Mission Order 21 requirements for all prime awards we reviewed. In addition, we found that 155 of the 158 subawards reviewed in our random generalizable sample complied with applicable Mission Order 21 requirements. Below, we discuss in more detail the findings from each set of documentation, in terms of vetting and inclusion of required antiterrorism certification and mandatory provisions in awards. In addition, we discuss an instance in which, during the course of our review, the USAID mission self-reported an error in vetting that was subsequently resolved. USAID's internal compliance review reports identified instances of noncompliance with applicable vetting requirements. In the universe of 14,436 subawards assessed by the compliance review reports provided by USAID, 1 prime awardee failed to vet a subawardee. In addition, 4 prime awardees collectively failed to vet a total of 18 non-U.S. individuals taking part in USAID-funded trainings in the West Bank (see table 1). Specifically, one of these prime awardees did not obtain valid vetting approval for 15 students participating in a U.S.-funded academic program. These prime awardees were required to address all noncompliance weaknesses and obtain the proper vetting approvals for the subawardee and all applicable trainees, according to USAID. The compliance review reports also identified 11 prime awardees that obtained late vetting approval, after the subawards were signed, across 23 subawards. In addition, 3 prime awardees conducted similar late vetting for 219 USAID-funded trainees. 
Most of these instances of late vetting for trainees occurred when a single prime awardee failed to obtain valid vetting approval for 167 non-U.S. individuals prior to the start date of their USAID-funded academic program. USAID's compliance review reports identified one prime awardee that obtained late vetting approval for 4 beneficiaries of direct cash or in-kind assistance. According to USAID, all noncompliance weaknesses in vetting procedures identified in the compliance review reports provided to GAO have been resolved, and there were no instances of USAID providing funding to any individual or entity that ultimately did not pass vetting. GAO's review found that prime awards were in compliance and subawards were generally in compliance with vetting requirements. We found that 11 of the 48 prime awards we reviewed required vetting according to Mission Order 21 because they were with non-U.S. organizations and, if contracts, had a value of more than $25,000. Our review of vetting information provided by USAID found that vetting was conducted for all 11 of these prime awardees, and eligibility decisions were made prior to the signing of the awards, consistent with Mission Order 21. We also found that 29 of the 91 subawards (in our sample of 158 subawards) that went to non-U.S. organizations had a contract value or a time and cost amendment value of more than $25,000 and thus required vetting. Based on vetting information provided by USAID, vetting was conducted and eligibility decisions were made prior to the signing of the award in 28 of the 29 instances, in compliance with Mission Order 21. However, vetting was obtained in 1 instance after the award was signed. USAID's internal compliance review reports identified one instance of noncompliance with antiterrorism certification requirements. The compliance review reports identified a single instance where a prime awardee failed to obtain an antiterrorism certificate from a subawardee. 
According to USAID officials, this prime awardee was required to amend the subaward paperwork to include the antiterrorism certificate. GAO's review found that prime awards and subawards were in compliance with antiterrorism certification requirements. We found that 16 of the 48 prime awards were grants or cooperative agreements and thus required a signed antiterrorism certification. All 16 prime awards contained an antiterrorism certification signed in advance of the award. We found that 4 of the 158 subawards were grants or cooperative agreements and therefore required an antiterrorism certification. All 4 subawards contained an antiterrorism certification signed in advance of the award. USAID's internal compliance review reports identified noncompliance with the two mandatory provision requirements. The reports identified 9 prime awardees that collectively made a total of 449 subawards without the two mandatory provisions included. Specifically, the majority of these instances were the result of a single prime awardee failing to include the mandatory provisions in 378 of its subawards. According to USAID officials, the 9 prime awardees were required to amend the subaward paperwork to include the mandatory provisions if the awards were still active. GAO's review found that prime awards were in compliance with the two mandatory provision requirements and that subawards were generally in compliance. All 48 prime awards made by USAID contained the mandatory antiterrorism clause and the facility naming clause in the award documents. Of the 48 prime awards, 2 were made to the United Nations, which is defined as a public international organization, and contained clauses worded differently from those used for nongovernmental organizations. We found that 155 of the 158 subawards, or 98 percent, included the mandatory antiterrorism clause and facility naming clause. 
Specifically, based on the subaward documents provided by USAID, we found one instance where the antiterrorism clause and facility naming clause were not present in the award documentation. We also found two instances where the facility naming clause was not included in the award documentation. Based on this sample, we estimate that 100 subawards in our overall subaward universe of 8,744 contain such errors, that is, are missing one or both mandatory clauses. During the course of our review, the USAID mission contacted us after identifying an instance in which USAID erroneously provided funds to an organization that did not pass vetting but subsequently determined that there was no indication of misuse of funds. According to the mission, the vetting error involved a previously cleared subawardee that had a change in a key individual in September 2014. A USAID official in the Program Support Unit entered the new individual's information manually into the Partner Vetting System and, in the process of cross-referencing records, mistook the new key individual for a previously vetted and cleared key individual of the organization, as these two individuals had similar names. As a result, the vetting package submitted to the USAID Office of Security's Counterterrorism Branch erroneously included the formerly cleared individual and not the new individual. In June 2015, the subawardee resubmitted information to USAID for vetting because of another change in key individuals. The Program Support Unit compiled the vetting package and submitted it to the USAID Office of Security's Counterterrorism Branch, which then sent back an ineligibility recommendation for one of the key individuals listed for the subawardee. According to USAID, a review of the Partner Vetting System audit trail led to the conclusion that in 2014 this key individual was mistaken for a former key individual with a similar name. 
According to the mission, it immediately communicated the ineligibility decision to the prime awardee, who communicated this information to the subawardee and accordingly did not proceed with the proposed award extension. According to mission officials, once they became aware of the error in vetting, the mission performed a financial assessment to determine whether there had been any misuse of the U.S. government funds provided to the subawardee, which totaled about $77,000. The review concluded that adequate supporting documents were presented for the payments and that both the prime awardee and the subawardee had adhered to the terms and conditions of the subawards when disbursing funds; therefore, there was no indication of misuse. According to USAID, the mission promptly implemented a new policy to prevent such human error from recurring and also promptly notified both GAO and USAID's Regional Inspector General of the error. We provided a draft of this report to State and USAID for comment. State provided no comments, and USAID provided technical comments that were incorporated, as appropriate. We are sending copies of this report to appropriate congressional committees, the Administrator of USAID, and the Secretary of State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix IV. This report examines the extent to which (1) the U.S. 
Agency for International Development (USAID) has established antiterrorism policies and procedures for program assistance for the West Bank and Gaza and (2) USAID complied with requirements for vetting, antiterrorism certification, and mandatory provisions for program assistance for fiscal years 2012–2014. To examine the extent to which USAID has established antiterrorism policies and procedures for program assistance for the West Bank and Gaza, we identified and reviewed relevant legal requirements as well as USAID policies and procedures to comply with those requirements. These legal and other requirements are contained in U.S. federal laws and executive orders. Mission Order 21 is the primary document that details the procedures to comply with applicable laws and executive orders to help ensure that assistance does not provide support to entities or individuals associated with terrorism. The effective date of the most recent version of Mission Order 21 is October 3, 2007, and it has not been updated since then, according to USAID officials. We also reviewed memorandums and notices issued by the USAID West Bank and Gaza mission that pertain to USAID's antiterrorism compliance review process and reminders about Mission Order 21 updates. To examine the extent to which USAID complied with its antiterrorism policies and procedures for program assistance for the West Bank and Gaza, we interviewed officials from USAID's Office of Inspector General regarding its oversight requirements and the results of audits of West Bank and Gaza assistance programs against Mission Order 21 requirements. We examined documentation on USAID's policies and procedures for monitoring prime awardees' compliance with Mission Order 21 when making subawards and reviewed USAID's formal compliance review process. 
We reviewed 23 audit reports conducted under the direction of USAID's Regional Inspector General, covering all prime awardees that received fiscal year 2012, 2013, or 2014 Economic Support Fund (ESF) assistance. We also reviewed and analyzed all 47 compliance review reports provided to us that were conducted by USAID's compliance specialist on 24 prime awardees during fiscal years 2012, 2013, and 2014. We analyzed these antiterrorism compliance review reports to assess and compile all noncompliance weaknesses with Mission Order 21 identified by USAID in four categories: (1) the proper vetting of subawardees and beneficiaries, (2) the timely incorporation of the antiterrorism certificate, (3) the timely incorporation of applicable mandatory provisions, and (4) subaward reporting. We also followed up with relevant USAID officials regarding these identified noncompliance weaknesses and the policies and practices USAID had in place to ensure that such weaknesses were addressed. To determine the extent to which the USAID West Bank and Gaza mission complied, at the prime award and subaward levels, with its requirements for vetting and for the inclusion of antiterrorism certifications and mandatory provisions in program assistance awards, thereby providing reasonable assurance that its programs do not provide support to entities or individuals associated with terrorism, we reviewed key legal and other requirements as well as USAID's policies and procedures for ensuring compliance with Mission Order 21. We discussed the USAID mission's implementation of Mission Order 21 with the USAID Deputy Mission Director, senior staff, the regional legal advisor, program staff, and other officials responsible for managing assistance projects and overseeing contracts, grants, and cooperative agreements at the USAID mission in Tel Aviv, Israel, and the U.S. Consulate in Jerusalem. We also interviewed several of USAID's implementing partners that had received relatively large dollar contracts from USAID. 
In addition, we interviewed State, USAID, and other officials involved in vetting USAID award recipients. We focused our review on the mission's prime award contracts, grants, and cooperative agreements that were made using ESF programming for fiscal years 2012–2014, as well as applicable subawards made under the prime awards during this time period. We selected this time period because it covers the last fiscal year that we reported on in our 2012 report and also represents the most recently available data. The mission provided us with copies of all 48 prime awards issued during this time frame and the relevant documentation to support proof of vetting of key individuals and the presence of antiterrorism certifications and mandatory provisions in awards. To determine whether subawards complied with relevant Mission Order 21 requirements, we examined a final random generalizable sample of 91 subawards made to non-U.S. organizations and 67 subawards made to U.S. organizations, for a total of 158 subawards. Initially, we selected a random sample of 174 subawards. However, the random sample decreased to 158 subawards because of various issues such as missing data, duplicate awards, and errors identified by the mission in subaward reporting by the prime awardees. We selected these random generalizable samples from a universe of 8,744 subawards for fiscal years 2012 through 2014 identified by the mission based on subaward activity reported to the mission by prime awardees. The universe included 8,521 subawards to non-U.S. organizations and 223 subawards to U.S. organizations. The mission developed the universe by taking the 48 prime awards that we had received and reviewed and identifying the corresponding subaward reports submitted by each prime awardee. Some of the prime awardees did not make any subawards during the time frame that we examined. In total, the mission identified 37 of the 48 prime awards that had subawards reported. 
According to the mission, the subaward reports track subawards made during a certain period of time and have no association with the fiscal year of the funding received. Further, according to the mission, its main objective in developing the subaward universe for us was to track the vetting threshold by including all the individual subawards as well as their cost and time modifications that could trigger the vetting requirement under Mission Order 21. As a result, our subaward sample included several cost modifications and time extensions to awards. We reviewed vetting information provided by USAID for all 11 prime awards made to non-U.S. organizations and a sample of 29 subawards made to non-U.S. organizations. The remaining 37 prime awards and 129 subawards were made to U.S. organizations and were therefore not subject to vetting. We compared the vetting date to the award date to determine whether the mission vetted the appropriate non-U.S. organizations prior to the date of award. We found that of the 91 subawards in our sample that went to non-U.S. organizations, 29 had a contract value or a time and cost amendment value of more than $25,000 and thus required vetting. Based on vetting information provided by USAID, vetting was conducted and eligibility decisions were made prior to the signing of the award in 28 of the 29 instances, in compliance with Mission Order 21. However, vetting was conducted in 1 instance after the award was signed. To understand USAID's vetting process, we interviewed various mission officials, including the head of the Program Support Unit, which is the division responsible for the vetting process. We also reviewed snapshots of the Partner Vetting System (PVS), the system in which partner information is entered, as well as training material related to the PVS, to understand the vetting process. 
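The vetting trigger described above can be expressed as a simple predicate. The sketch below is an illustrative simplification based on the requirements as described in this report, not USAID's actual implementation; the function name, parameters, and the treatment of grants and cooperative agreements are our own assumptions drawn from the prime award discussion.

```python
# Illustrative sketch of the Mission Order 21 vetting trigger as described
# in this report -- not USAID's actual logic. Awards to U.S. organizations
# are not vetted; contracts with non-U.S. organizations are vetted only when
# their total value (including time and cost amendments) exceeds $25,000.
VETTING_THRESHOLD_USD = 25_000

def requires_vetting(is_us_org, award_type, total_value_usd):
    """Return True if the award must be vetted before signing.

    award_type: "contract", "grant", or "cooperative agreement"
    total_value_usd: award value including any cost/time amendments
    """
    if is_us_org:
        return False  # U.S. organizations are not subject to vetting
    if award_type == "contract":
        return total_value_usd > VETTING_THRESHOLD_USD
    # Assumption from the prime award discussion above: grants and
    # cooperative agreements with non-U.S. organizations are vetted.
    return True
```

Under this rule, for example, a $30,000 contract with a non-U.S. organization would require vetting, while the same contract with a U.S. organization would not.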
To determine whether required antiterrorism certifications were obtained, we reviewed applicable documentation provided by USAID for 16 prime awards and 4 subawards that were grants or cooperative agreements to determine whether antiterrorism certifications were included in the award and signed prior to the date of the award. We determined that the 16 prime awards and the 4 subawards each contained an antiterrorism certification signed in advance of the award. To determine whether the prime awards and subawards contained the mandatory provisions, specifically the two mandatory clauses, we reviewed applicable documentation for each award to determine whether the clauses were present. We reviewed 48 prime awards to determine whether both the antiterrorism and facility naming clauses were present in the award before it was signed. We determined that all 48 prime awards had the mandatory clauses in the award before it was signed. For the subawards, we reviewed all 158 subawards to determine whether the mandatory clauses were present in the awards before they were signed. We used electronic searches to identify copies of the two clauses as efficiently as possible. We obtained the award documents from USAID in the form of scanned PDF files and used Adobe Acrobat Pro XI to convert them into machine-readable text. This conversion was generally reliable but sometimes introduced misspellings or other anomalies. We wrote Python code that performed keyword searches on each of the 260 PDF files for apparent instances of the two clauses. Each time the program found a potential match, it computed the edit distance between the clause the search identified and the actual boilerplate clause. We then identified the candidate match for each award with the shortest edit distance from each clause and produced a document listing the best potential match for each clause in each award and a link to the PDF page from which we extracted each potential match. 
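The keyword-search-plus-edit-distance approach just described can be sketched as follows. This is a simplified illustration, not GAO's actual code: the boilerplate text and keyword below are placeholders rather than the real provisions, the PDF-to-text conversion step is assumed to have already been done, and the single-keyword windowing is our own simplification.

```python
# Simplified illustration of matching a boilerplate clause in noisy,
# OCR-extracted text via keyword search plus edit distance. Not GAO's
# actual code; the clause text and keyword below are placeholders.

BOILERPLATE = (
    "the recipient must not provide material support or resources to any "
    "individual or entity associated with terrorism"
)
KEYWORD = "terrorism"  # cheap filter used to locate candidate passages

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_candidate(text):
    """Return (distance, passage) for the candidate passage closest to the
    boilerplate clause, or None if the keyword never appears in the text."""
    n = len(BOILERPLATE)
    best = None
    start = text.find(KEYWORD)
    while start != -1:
        # The keyword sits at the end of our placeholder clause, so take
        # the n characters ending where the keyword ends as the candidate.
        end = start + len(KEYWORD)
        passage = text[max(0, end - n):end]
        d = edit_distance(passage.lower(), BOILERPLATE)
        if best is None or d < best[0]:
            best = (d, passage)
        start = text.find(KEYWORD, end)
    return best
```

An exact copy of the clause yields distance 0; OCR misspellings raise the distance only slightly, so candidates can be ranked by distance and the closest one flagged for manual review, as the report describes.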
The search program treated all of the PDFs associated with a single award as a group and identified candidate matches (that is, very likely matches) for both clauses in almost all of the awards. A GAO analyst manually reviewed each potential match in the original PDF and confirmed whether or not it was the required clause. If the computer did not find a match, we reviewed the entire award document to determine whether the required clause was present. Based on our review, we found three instances where mandatory clauses were missing from the award. We conducted this performance audit from July 2015 to April 2016, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. This appendix provides information on the vetting process for awards to non-U.S. implementing partners (awardees) receiving U.S. government funding, including contracts, grants, cooperative agreements, and training, based on USAID documents and information from officials. A typical vetting process starts with the implementing partner, or prime awardee, submitting through an online portal, the Partner Vetting System (PVS), a completed Partner Information Form that has the names and identifying information of the organization's key individuals, according to the USAID mission. The prime awardee has access to the online portal to submit the Partner Information Form and also collects and submits the needed vetting data from proposed recipients of subawards. Figure 3 provides details of the steps in the vetting process for awards. 
In a small number of cases, the implementing partner, or prime awardee, does not have access to the online portal, and hard copy forms are sent directly to USAID's vetting team, known as the Program Support Unit (PSU), which inputs the information into the PVS. The information submitted online or on hard copy is checked by vetting assistants at the USAID mission to ensure that it is complete and is a valid request. The information is compiled for a vetting package that is submitted through the portal to the USAID Office of Security's Counterterrorism Branch (SEC/CT) at the Terrorist Screening Center in the United States. Until August 2015, the vetting package to be submitted to the SEC/CT was compiled by one member of the PSU team. In response to a vetting error identified in July 2015, the USAID mission implemented a new policy that requires an additional check of vetting packages submitted to the SEC/CT. The new process requires that a separate member of the PSU team verify that packages submitted to the SEC/CT include all key individuals listed in the Partner Information Form. If the SEC/CT finds no derogatory information on the organization or individual submitted for vetting, analysts at the SEC/CT enter an eligible determination into PVS. If the proposed award is a contract or training, the PVS generates an automatic notification to the Contracting/Agreement Officer's Representative (C/AOR), and the vetting is a one-step process. The C/AOR notifies the awardee of the results. If the proposed award is a cash grant or in-kind assistance, following an eligible recommendation from the SEC/CT, the request is then sent to the Consulate General for a second vetting step. If the organization vetted by the Consulate General is also deemed eligible, results are entered into PVS, and an automatic notification is sent to the C/AOR, who notifies the awardee of the results. 
If the SEC/CT finds derogatory information related to an organization or individual submitted for vetting, the SEC/CT analyzes the information to determine whether an ineligible recommendation is warranted. If an ineligible recommendation is warranted, the SEC/CT drafts an assessment of the derogatory information for the Supervisory Program Support Specialist, according to the USAID mission. The Consulate General follows a similar notification process if an organization submitted for vetting results in an ineligible recommendation. In both cases, the Supervisory Program Support Specialist reviews the derogatory information and consults with key mission vetting officials who have been granted the appropriate security clearance and have a need-to-know. The C/AOR may also be asked to provide an impact assessment to evaluate the potential consequences for the implementation of the program should a particular prospective implementing partner be found ineligible. If the mission would like to consider an award notwithstanding an ineligible finding by the SEC/CT, the mission refers the case to the Vetting Working Group, located in the U.S. Consulate General in Jerusalem. The Vetting Working Group is a multiagency group responsible for reconciling derogatory vetting information obtained by U.S. agencies implementing programs in the West Bank and Gaza, according to the mission. The group meets on an ad hoc basis and recommends eligibility or ineligibility based on consensus, with the final decision made by the Consul General. For cases that are not referred to the Vetting Working Group, the Deputy Mission Director has the authority to make final ineligibility decisions, according to the mission. Once a final determination is made by either the Consulate General or the Deputy Mission Director, the Supervisory Program Support Specialist enters this determination into PVS and an automatic notification is sent to the C/AOR. 
The C/AOR notifies the awardee of the results. If a program awardee has been approved through the vetting process, the approval generally remains valid for that particular award for up to 3 years from the date of the award. However, new vetting is required in several circumstances. First, vetting is required if there is a change in the awardee's key individuals. Key individuals include principal officers of the organization's governing body, the principal officer and deputy principal officer of the organization, the program manager or chief of party, and any other persons with significant responsibility for administration of USAID-financed activities or resources. Second, new vetting is also required for any new awards or extensions of existing awards if more than 12 months have passed since the awardee was last approved. Third, new vetting is required for cost extensions of awards when the total cost of the subaward, including the additional cost, exceeds $25,000. USAID may rescind vetting approval if the agency obtains information that an awardee or any of the key individuals is or has been involved in terrorist activity, according to USAID. In addition to the contact named above, Judy McCloskey (Assistant Director), Andrea Riba Miller (Analyst-in-Charge), Bryan Bourgault, and Ria Bailey-Galvis made key contributions to this report. Ashley Alley, Martin de Alteriis, Justin Fisher, Jeffrey Isaacs, Debbie Chung, Brian Egger, Robert Letzler, and Oziel Trevino provided additional assistance. Foreign Aid: U.S. Assistance for the West Bank and Gaza for Fiscal Years 2012–2014. GAO-15-823. Washington, D.C.: September 22, 2015. Foreign Assistance: U.S. Assistance to the West Bank and Gaza for Fiscal Years 2010 and 2011. GAO-12-817R. Washington, D.C.: July 13, 2012. Foreign Assistance: U.S. Assistance to the West Bank and Gaza for Fiscal Years 2008 and 2009. GAO-10-623R. Washington, D.C.: May 14, 2010. 
Foreign Assistance: Measures to Prevent Inadvertent Payments to Terrorists under Palestinian Aid Programs Have Been Strengthened, but Some Weaknesses Remain. GAO-09-622. Washington, D.C.: May 19, 2009. Foreign Assistance: U.S. Assistance to the West Bank and Gaza for Fiscal Years 2005 and 2006. GAO-07-443R. Washington, D.C.: March 5, 2007. Foreign Assistance: Recent Improvements Made, but USAID Should Do More to Help Ensure Aid Is Not Provided for Terrorist Activities in West Bank and Gaza. GAO-06-1062R. Washington, D.C.: September 29, 2006.
Since 1993, the U.S. government has committed more than $5 billion in bilateral assistance to the Palestinians in the West Bank and Gaza. Program assistance for development is a key part of the United States' commitment to a negotiated two-state solution to promote peace in the Middle East, and program funding is primarily administered by USAID. Congress included a provision in the law for GAO to conduct an audit of all funds provided for programs in the West Bank and Gaza, including the extent to which programs comply with certain antiterrorism requirements. This report examines the extent to which (1) USAID has established antiterrorism policies and procedures for program assistance for the West Bank and Gaza and (2) USAID complied with requirements for vetting, antiterrorism certification, and mandatory provisions for program assistance for fiscal years 2012–2014. GAO reviewed antiterrorism laws, policies, procedures, and USAID documents that pertain to assistance programs and interviewed USAID and State officials. GAO also assessed a random generalizable sample of 158 subawards made by USAID's implementing partners using funds provided in fiscal years 2012–2014 from the Economic Support Fund account to determine the extent to which the awards were granted in compliance with antiterrorism policies and procedures. In 2006, the U.S. Agency for International Development (USAID) issued key antiterrorism policies and procedures—known as Mission Order 21 (the order)—to help ensure that program assistance for the West Bank and Gaza would not inadvertently provide support to entities or individuals associated with terrorism. The order, updated in 2007, outlines requirements and procedures for (1) vetting, or investigating a person or entity for links to terrorism; (2) obtaining an antiterrorism certification from awardees; and (3) including in awards two mandatory provisions that prohibit support for terrorism and restrict funding to facilities named after terrorists. 
In 2008, USAID West Bank and Gaza established a post-award compliance review process to identify weaknesses in compliance with applicable requirements in the order, which USAID works to resolve. This process is a key function that allows USAID to provide reasonable assurance that all prime awards and subawards are in compliance with the order. The compliance review process is described in notices issued by the mission from 2008 to 2012. For the purposes of this report, a prime awardee is an organization that directly receives USAID funding to implement projects, while a subawardee is an organization that receives funding from prime awardees. USAID's compliance reviews and GAO's examination of prime awards and subawards for fiscal years 2012-2014 found that USAID generally complied with requirements for vetting and inclusion of antiterrorism certification and mandatory provisions in awards. Regarding vetting, the compliance review reports—which covered more than 14,000 subawards—found, for example, one subawardee and 18 trainees for which no vetting was conducted. According to USAID, the subawardee and trainees were subsequently vetted and found eligible for program assistance. GAO's review of a random generalizable sample of 158 subawards found that 157 had applicable vetting conducted before the award. Regarding antiterrorism certification requirements, the compliance reviews identified one instance where a prime awardee failed to obtain an antiterrorism certification from a subawardee. GAO's review found that both prime awards and subawards were in compliance with antiterrorism certification requirements. Regarding mandatory provisions, the compliance reviews identified nine prime awardees that made a total of 449 subawards without including the two provisions. GAO's review found that 155 subawards (98 percent) had included the provisions in the award documentation. 
According to USAID, it required noncompliant awardees to provide antiterrorism certification and mandatory provisions for active awards. GAO is not making any recommendations in this report.
Our prior work has found that DOD’s approach to managing service acquisition has tended to be reactive and has not fully addressed key factors for success at either the strategic or transactional level. The strategic level is where the enterprise sets the direction or vision for what it needs, captures knowledge to enable more informed management decisions, ensures enterprisewide goals and objectives are achieved, determines how to go about meeting those needs, and assesses the resources it has to achieve desired outcomes. The strategic level also sets the context for the transactional level, where the focus is on making sound decisions on individual acquisitions. Congress has required USD(AT&L) to take a number of steps to improve service acquisition. Specifically in 10 U.S.C. § 2330, enacted in 2001 and amended in 2006, Congress required USD(AT&L) and the military departments to establish a management structure for the acquisition of services. Since 2003, we have evaluated DOD’s implementation of 10 U.S.C. § 2330 and efforts to establish the management structure and service acquisition approval process twice. First, in September 2003, we concluded that DOD’s approach to managing service acquisition did not provide a departmentwide assessment of how spending for services could be more effective. We therefore recommended that DOD give greater attention to promoting a strategic orientation by setting performance goals for improvements and ensuring accountability for results. DOD concurred in principle with our recommendation and agreed that additional actions could strengthen the management structure and acquisition approval process but also identified challenges for doing so based on its organizational size, complexity, and the acquisition environment. Subsequently, in November 2006, we found continued weaknesses associated with DOD’s management of service acquisitions at the strategic and transactional level. 
Specifically, we found that DOD’s approach to managing service acquisition tended to be reactive and that the department had not developed a means for evaluating whether ongoing and planned efforts were achieving intended results. DOD had not developed a strategic vision and lacked sustained commitment to managing service acquisition risks and fostering more efficient outcomes. DOD also had not developed metrics to assess whether any changes to improve service acquisition actually achieved the expected outcomes. As a result, DOD was not in a position to determine whether investments in services were achieving their desired outcomes. Moreover, the results of individual acquisitions were generally not used to inform or adjust the strategic direction. We recommended that, among other actions, DOD take steps to understand how and where service acquisition dollars are currently and will be spent, in part, to assist in adopting a proactive approach to managing service acquisition. We also recommended that DOD take steps to provide a capability to determine whether service acquisitions are meeting cost, schedule, and performance objectives. At that time, DOD concurred with our recommendations. USD(AT&L), however, acknowledged in 2010 that DOD still needed a cohesive, integrated strategy for acquiring services. DOD contract management has remained on our High Risk List, in part, because DOD has not developed such a strategy and continues to lack reliable services spending data to inform decision making. While Congress has required USD(AT&L) to take steps to improve service acquisition, USD(AT&L) has taken actions on its own initiative as well. For example, USD(AT&L) established its Better Buying Power Initiative in a September 2010 memorandum to provide guidance for obtaining greater efficiency and productivity in defense spending. 
In its memorandum, USD(AT&L) emphasized that DOD must prepare to continue supporting the warfighter through the acquisition of products and services in potentially fiscally constrained times. In its own words, USD(AT&L) noted that DOD must “do more without more.” USD(AT&L) organized the Better Buying Power Initiative around five major areas, including an area focused on improving tradecraft in service acquisition. This area identified actions to improve service acquisition, such as categorizing acquisitions by portfolio groups and assigning new managers to coordinate these groups. USD(AT&L) issued another memorandum in April 2013 to update the Better Buying Power Initiative. This memorandum identifies seven areas USD(AT&L) is pursuing to increase efficiency and productivity in defense spending. One area is to improve service acquisition and the memorandum identifies a number of related actions, such as increasing small business participation in service acquisitions and improving how DOD conducts services-related market research. Over the last decade, DOD has taken actions to address legislative requirements to improve the acquisition and management of services. Senior officials we spoke with across the military departments credit USD(AT&L)’s leadership and commitment as the driving force behind many of the actions taken to improve service acquisition. A number of these actions were intended to strengthen DOD’s management structure and approach to reviewing service acquisitions, as required by 10 U.S.C. § 2330. For example, both USD(AT&L) and the military departments established new senior management positions to improve oversight and coordination of service acquisition. With this management structure and review process in place, USD(AT&L) is focusing on efforts to improve the process for how requirements for individual service acquisitions are developed and training to respond to legislative direction. 
USD(AT&L) also created a senior-level team to identify and determine the training needs for DOD personnel responsible for developing service acquisition requirements. USD(AT&L) did not develop a specific implementation plan as required by section 807, but officials identified a number of actions that they regard as addressing the eight elements specified. Since 2002, DOD has increased its management attention on high dollar value service acquisitions by instituting new policies and review processes. In response to the initial requirements to establish a management structure for the acquisition of services, USD(AT&L) issued a guidance memorandum in May 2002. This memorandum required that service acquisitions be reviewed and approved based on dollar thresholds and that the acquisition strategy—addressing things such as the requirements to be satisfied and any potential risks—be approved prior to initiating any action to commit the government to the strategy. Under this policy, USD(AT&L) was responsible for reviewing and approving all proposed service acquisitions with an estimated value of $2 billion or more. Following the 2006 amendment to 10 U.S.C. § 2330, USD(AT&L) issued a revised memorandum in October of that year. Under the revised policy, which remains in effect, USD(AT&L) lowered the threshold for its review to service acquisitions valued at over $1 billion. The military departments have developed internal policies for reviewing and approving service acquisitions below USD(AT&L)’s threshold. Further, USD(AT&L) required that acquisition strategies be reviewed before contract award and that these and other acquisition planning documents include a top-level discussion of the source selection process as well as noting any waivers and deviations. USD(AT&L) and military department officials informed us that while these reviews are conducted, they have not tracked the total number of service acquisitions reviewed to date. 
In 2008, USD(AT&L) incorporated these requirements into DOD Instruction 5000.02, which is part of DOD’s overarching policy governing the operation of the defense acquisition system. This instruction currently requires that senior officials across DOD consider a number of factors when reviewing a service acquisition, including the source of the requirement, the previous approach to satisfying the requirement, the total cost of the acquisition, the competition strategy, and the source selection planning. USD(AT&L) expects to issue a stand-alone instruction in 2014 for service acquisition policy to replace Enclosure 9 of DOD Instruction 5000.02. Additionally, in a February 2009 memorandum, USD(AT&L) refined its guidance on conducting service acquisition strategy reviews. Specifically, USD(AT&L)’s memorandum identified criteria that service acquisitions must adhere to and that reviewers are to assess, such as use of appropriate contract type, maximization of competition, and inclusion of objective criteria to measure contractor performance. DOD also established new senior-level management positions, in part, to address legislative requirements, although some roles and responsibilities are still being defined. For example, the 2006 amendment to 10 U.S.C. § 2330 required that USD(AT&L) and the military departments establish commodity managers to coordinate procurement of key categories of services. In 2010 and 2012, USD(AT&L) revised how it organized its contracted services under nine key categories. These categories of services, referred to as portfolio groups, are (1) research and development, (2) knowledge based, (3) logistics management, (4) electronic and communication, (5) equipment related, (6) medical, (7) facility related, (8) construction, and (9) transportation. 
In 2011, the military departments began establishing commodity manager positions to improve coordination and assist requiring activities with their procurement of services within these portfolio groups. By July 1, 2013, USD(AT&L) expects to establish similar positions responsible for supporting the DOD-wide procurement of services, but their authorities and responsibilities are not yet fully defined. Additionally, as part of its Better Buying Power Initiative, USD(AT&L) assigned the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics as DOD’s senior manager for service acquisition, responsible for policy, training, and oversight across DOD. Table 1 summarizes the established positions and accompanying responsibilities in descending order of their hierarchy within DOD. While these positions have a role in reviewing, approving, or coordinating individual service acquisitions, senior USD(AT&L) and military department officials explained that they do not have responsibility or authority for making departmentwide decisions, such as determining current or future resources allocated to contracted services. These officials explained that the military departments’ commands and requiring activities are responsible for determining their requirements and how best to meet them, as well as requesting and allocating budgetary resources. For example, while USD(AT&L) officials and the military department senior services managers are responsible for reviewing service acquisitions to determine whether the planned acquisition strategy clearly defines the military department’s requirement, they do not determine what contracted services are needed or whether an alternative acquisition approach could better meet their need. USD(AT&L) officials and the military department senior services managers stated they do not have insight into each requiring activity’s specific needs and are not positioned to validate those needs.
For additional details on the actions that USD(AT&L) and the military departments have taken to address the specific requirements of 10 U.S.C. § 2330, see appendix I. USD(AT&L) has planned and implemented actions to improve DOD’s process for developing requirements for individual service acquisitions, as required by the 2006 amendment to 10 U.S.C. § 2330. USD(AT&L) officials noted that it has collaborated with DAU officials to develop new tools and training to help DOD personnel develop better acquisitions. For example, USD(AT&L) collaborated with DAU to create the Acquisition Requirements Roadmap Tool (ARRT) in 2012. The ARRT is an online resource designed to help personnel write performance-based requirements and create several pre-award documents, including performance work statements and quality assurance surveillance plans. The ARRT guides users through a series of questions to develop the pre-award documents using a standardized template tailored to the specific requirement for services. Although using the ARRT is not required across DOD, DAU officials told us they have integrated its use into other DAU training, such as the Performance Requirements for Service Acquisitions course. DAU officials did not have data on the effectiveness of the ARRT but noted that feedback has been positive. For example, they have heard that performance work statements are better reflecting requirements as a result of personnel using the tool. In 2009, DAU introduced its Services Acquisition Workshop (SAW) to provide training and guidance on developing service acquisition requirements. The SAW is a 4-day workshop tailored to proposed service acquisitions. Upon request from commands or requiring activities, DAU officials travel to the requestor and convene the multifunctional team responsible for an acquisition, including general counsel, individuals associated with the acquisition requirements, contracting personnel, and oversight personnel. 
This team is then to develop the language that will be used to articulate the service requirement using the ARRT. By the end of the 4 days, the command is to have drafts of its performance work statement, quality assurance surveillance plan, and performance requirement summary. A key aspect of the workshop DAU officials identified is that it brings together the key personnel responsible for the acquisition to discuss the service requirements and how they will know if a contractor has met those requirements. From fiscal years 2009 through 2012, DAU conducted 78 SAWs. In 2012, USD(AT&L) mandated use of the SAW for service acquisitions valued at $1 billion and above and is encouraging its use for acquisitions valued at $100 million or more. USD(AT&L) has directed the Director of Defense Procurement and Acquisition Policy (DPAP) and the senior services managers to assess the effectiveness of the SAW and develop lessons learned and best practices by October 1, 2013. In addition to implementing the ARRT and the SAW, USD(AT&L) established the Acquisition of Services Functional Integrated Product Team (Services FIPT) in August 2012, in part, to address training requirements in 10 U.S.C. § 2330. According to its charter, the Services FIPT is comprised of the Director of DPAP, DAU officials, and other officials responsible for acquisition career management within the DOD. The Services FIPT is to provide input toward the development and dissemination of training products and practical tools to assist personnel responsible for acquiring services. In addition, the Services FIPT is to explore the feasibility of certification standards and career development for all personnel who acquire services, including personnel within and outside of the defense acquisition workforce. USD(AT&L) officials explained that non-acquisition personnel are most often involved in the requirements development portion of the acquisition process but may not be trained on how DOD buys services. 
In 2011, we found that non-acquisition personnel with acquisition-related responsibilities represented more than half of the 430 personnel involved in the 29 services contracts we reviewed. While we found that non-acquisition personnel received some acquisition training, this training was largely related to contract oversight as opposed to requirements development. According to its charter, one of the Services FIPT’s first tasks will be to identify DOD’s non-acquisition personnel involved in service acquisitions and determine how best to train them. The Services FIPT, however, has made little progress to date, and has met once since it was established. USD(AT&L) officials could not provide a time line for when the Services FIPT may fully address the training requirements in 10 U.S.C. § 2330. The officials explained that they expect the team to make more progress in 2013 when the Principal Deputy Under Secretary for Acquisition, Technology, and Logistics assumes leadership of the Services FIPT. Section 807 of the NDAA for Fiscal Year 2012 required USD(AT&L) to develop a plan by June 28, 2012, for implementing the recommendations of the DSB to include, to the extent USD(AT&L) deemed appropriate, the following eight elements:
1. incentives to services contractors for high performance at low cost,
2. communication between the government and the services contracting industry while developing requirements for services contracts,
3. guidance for defense acquisition personnel on the use of appropriate contract types,
4. formal certification and training requirements for services acquisition personnel,
5. recruiting and training of services acquisition personnel,
6. policies and guidance on career development for services acquisition personnel,
7. ensuring the military departments dedicate portfolio-specific commodity managers, and
8. ensuring DOD conducts realistic exercises and training that account for services contracting during contingency operations.
USD(AT&L) officials told us they did not develop a specific plan to address the section 807 requirement. They explained, however, that the April 2013 Better Buying Power Initiative memorandum addresses seven of the eight elements and that they have addressed the last element through a separate effort. In reviewing the April 2013 memorandum, we also found that it reflects actions to address all of the elements except the one pertaining to training and exercises during contingency operations. USD(AT&L) also identified 23 different actions it has taken or plans to take that officials regard as addressing all of the elements the plan was to include, some of which pre-date the April 2013 Better Buying Power Initiative memorandum. For example, in January 2012, USD(AT&L) issued guidance to improve how DOD communicates with the vendor community. In April 2013, USD(AT&L) directed that new guidance be developed to help acquisition personnel select the appropriate contract type and contractor performance incentives in DOD’s service acquisitions. DOD plans to conduct a joint mission rehearsal exercise in 2014 that will include training for services contracting during contingency operations. See appendix II for a more detailed description of the actions USD(AT&L) took to address the section 807 elements. While DOD has taken a number of actions to address legislative requirements, DOD is not yet positioned to determine what effects its actions have had on improving service acquisition. Specifically, USD(AT&L) has not yet fully addressed two key factors: a desired end state for the future with specific goals and associated metrics that would enable it to assess progress toward achieving those goals and determine whether service acquisition is improving. USD(AT&L) is challenged in addressing these key factors, in part, because it has limited insight into the current status of service acquisition in terms of the volume, type, location, and trends.
While they have not established metrics to assess departmentwide progress, USD(AT&L) officials rely on reviews of individual service acquisitions, command level assessments, and feedback from the military departments as means to gauge whether DOD’s efforts are contributing to better service acquisitions. DOD has not established aggregated results or trends which could be used to provide a departmentwide perspective on the effects of its actions. USD(AT&L) and military department leadership have demonstrated a commitment to improving service acquisition, but USD(AT&L) officials stated that they have not defined the desired end state or specific goals its actions were intended to achieve. In our November 2006 report, we found, based on assessments of leading commercial firms, that identifying and communicating a defined end state or specific goals can significantly improve service acquisition. This work also found that being able to define a desired end state or what goals are to be achieved at a specified time necessitates knowledge of the current volume, type, location, and trends of service acquisitions. USD(AT&L) and the military department senior services managers acknowledge that they are challenged in defining the desired end state, in part, because limitations within DOD’s contracting and financial data systems hinder their insight into where service acquisition is today. USD(AT&L) and military department officials explained that DOD’s primary source of information on contracts, the Federal Procurement Data System-Next Generation (FPDS-NG), has a number of data limitations: it only reflects the predominant service purchased on a contract, does not reveal any services embedded in a contract for goods, and does not fully identify the location of the requiring activity contracting for the service.
Additionally, DOD’s financial systems do not provide detailed information on DOD’s budget and actual spending on specific types of contracted services and are not linked to the data maintained in FPDS-NG. According to USD(AT&L) officials and the senior services managers, collectively, the limitations of both FPDS-NG and DOD’s financial systems create challenges in identifying the current volume, type, location, and any potential trends in service acquisition. For example, USD(AT&L) stated that DOD wants to more strategically manage its nine portfolio groups of contracted services but does not have adequate insight into what services DOD currently buys within these portfolio groups. To improve insight into DOD’s contracted services, USD(AT&L) is linking DOD’s contract and financial data systems and increasing the level of detail these systems provide. For example, DOD is updating its financial systems to provide data on each service purchased under a contract. USD(AT&L) officials stated that improving and linking data within its contract and financial systems will enable DOD to determine what it budgeted for a particular service, what it actually spent for that service, and which organizations bought the service. Officials, however, do not expect to have this capability until at least 2014. USD(AT&L) officials noted that this effort could help provide better insight into future budget requirements for services. USD(AT&L) officials also stated that they are exploring how to use Electronic Document Access—a DOD online document access system for acquisition-related information—to provide them with better insight into the different types of services DOD buys under each of its contracts.
USD(AT&L) identified that, collectively, these efforts will help DOD improve the management of its nine portfolio groups of contracted services, thereby enabling the department to better leverage its buying power, provide insight into the marketplace and buying behaviors, and identify opportunities for cost savings. In its April 2013 Better Buying Power Initiative memorandum, USD(AT&L) also identified that by managing service acquisition by portfolio group, the senior services managers should be able to work with requiring activities to forecast future services requirements. While the military departments have taken some steps to forecast or track future contracted services requirements, these efforts are too new to determine their utility in identifying what services DOD plans to buy. For example, in 2012, the Army senior services manager requested that Army commands provide an estimate for contracted services valued over $10 million to be purchased over the next five fiscal years in an effort to identify any potential cost savings. Air Force officials also track information on service acquisitions that they expect will be awarded over the next three years to aid in planning acquisition strategy reviews. The Navy is developing its own approach to forecast future contracted services requirements, which officials stated will be implemented in 2013. While it is too early to assess the effects of these forecast or tracking efforts, they have the potential to help the military departments better understand what services will be purchased and facilitate DOD in identifying its desired end state for service acquisition. USD(AT&L) has not established departmentwide metrics to assess the effects of its actions to improve service acquisition.
Our prior work found that metrics linked to specified outcomes are another key factor to (1) evaluating and understanding performance levels, (2) identifying critical processes that require attention, (3) documenting results over time, and (4) reporting information to senior officials for decision making purposes. In lieu of such metrics, USD(AT&L) and military department officials stated that they rely on results from reviews of individual service acquisitions, command level assessments, and feedback from the military departments to gauge whether the department’s actions to improve services acquisitions, such as those required by Congress or established under DOD’s Better Buying Power Initiative, are having a positive effect. USD(AT&L) officials have acknowledged the need to establish departmentwide metrics but explained that developing such metrics has proven challenging. They further indicated that metrics used by leading commercial companies, which often focus on reducing spending for services to improve a company’s financial position, may not be appropriate for DOD. USD(AT&L) officials noted that DOD’s budget is based on an assessment of its missions and the resources needed to achieve its objective. These officials noted that while DOD is continuously looking for ways to improve its efficiency, it is difficult to set goals and measure actual reductions in spending as any savings or cost avoidances will generally be invested in other unfunded or high priority activities. Further, USD(AT&L) officials noted that since DOD’s budget is appropriated by Congress rather than derived from the sale of goods and services, changes in its resources are often outside its direct control. While developing goals and metrics is challenging, it is not impossible. DOD has acknowledged the need to establish departmentwide metrics. 
For example, our recent work on strategic sourcing—a process that moves an organization away from numerous individual acquisitions to a broader, aggregate approach—found that federal agencies, including DOD, could expand the use of this approach. Strategic sourcing enables federal agencies to lower costs and maximize the value of services they buy, which is consistent with DOD’s Better Buying Power Initiative. We found that some agencies, including DOD, did not address the categories that represented their highest spending, the majority of which exceeded $1 billion and were for services. To improve its strategic sourcing efforts at DOD, we recommended, among other things, that DOD set goals for the amount of spending managed through strategically sourced acquisitions, link strategic sourcing to its Better Buying Power Initiative, and establish metrics, such as utilization rates, to track progress toward these goals. DOD concurred with the recommendations and stated it would establish goals and metrics by September 2013. In the absence of departmentwide metrics, USD(AT&L) officials and senior services managers identified several ongoing efforts they rely on to gauge the effects of their actions to improve service acquisition. For example, USD(AT&L) and the military departments conduct pre- and post-award independent management reviews, or peer reviews, to ensure individual service acquisitions are conducted in accordance with applicable laws, regulations, and policies. USD(AT&L) and military department officials stated that through these peer reviews, they can determine if individual service acquisitions have resulted in the intended outcomes. For example, during the post-award phase, reviewers are to assess whether cost, schedule, and performance measures associated with individual service acquisitions are being achieved. 
We have previously found, however, that cost or schedule performance measures may not be as effective for service acquisitions as they are for product or weapon system acquisitions. Further, while peer reviews provide DOD with insight into the performance of a single service acquisition, DOD does not have information on how many post-award peer reviews have been completed by the military departments and has not aggregated the results or identified trends from all of DOD’s peer reviews. Additionally, the Air Force and the Navy are conducting assessments at the command level to evaluate organizations that buy and manage service acquisitions. These assessments are intended to identify performance levels, needed improvements, and best practices. For example, the Air Force implemented health assessments to review a command’s timeliness of contract awards, creation and use of standardized templates, implementation of internal and external recommendations and new policy requirements, and quality of communication. According to officials, the Air Force first implemented its health assessments in approximately 2009 to rate or score each of its commands in a number of different performance areas, such as program management and fiscal responsibility. Air Force officials reported, however, that they have not established baselines or identified any quantifiable trends from these health assessments. That said, Air Force officials told us that these assessments have contributed to improvements in the service acquisition process. For example, in a 2011 health assessment, the Air Force found that one program office reduced the use of bridge contracts—a potentially undesirable contract that spans the time between an expiring contract and a new award—by 50 percent from fiscal year 2010 to 2011. 
The Navy completed its first health assessment in 2012. During this assessment, the Navy identified a requirements development tool created and used within a command that was potentially a best practice and is being considered for Navy-wide use. The Army’s senior services manager is in the process of determining how to assess the health of the Army’s service acquisition organizations and expects to implement an approach in 2013. USD(AT&L) officials also plan to assess the health of service acquisition across the military departments, potentially down to the program office level, using a number of indicators of risks, referred to as tripwires. Tripwires are established thresholds for measurable risk or performance indicators related to the acquisition of goods or services that, when triggered, could result in further review. USD(AT&L) officials stated that tripwires are still under development but could include thresholds for the number of days FPDS-NG data was input past deadlines or the number of contract modifications within 30 days of contract award. USD(AT&L) officials explained that tripwires alone are not sufficient to assess service acquisition performance, but tripwires could provide insight into what may or may not be going well and provide trend data over time. Further, USD(AT&L) annually reviews the military departments and other DOD components to understand the effects of its actions and policies related to improving service acquisitions and to solicit recommendations for changes. For example, in 2012, USD(AT&L) inquired about the actions that have been taken to comply with various defense acquisition regulations or policies, such as the Better Buying Power Initiative. 
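The tripwire concept described above is, at bottom, a set of thresholds on measurable indicators that flag an acquisition for further review when crossed. A minimal sketch of that mechanism follows; the indicator names and threshold values are hypothetical illustrations drawn from the report’s examples, not actual DOD tripwires.

```python
# Illustrative only: indicator names and thresholds below are hypothetical
# examples based on the report's discussion, not actual DOD values.

# Each tripwire is a threshold on a measurable indicator; exceeding it
# flags the service acquisition for further review.
TRIPWIRES = {
    "days_fpds_ng_data_late": 30,        # days FPDS-NG data input past deadline
    "mods_within_30_days_of_award": 2,   # contract modifications soon after award
}

def triggered_tripwires(acquisition_metrics):
    """Return the indicators whose reported value exceeds its threshold."""
    return [name for name, limit in TRIPWIRES.items()
            if acquisition_metrics.get(name, 0) > limit]

# Example: data entered 45 days late triggers one tripwire; a single early
# modification does not.
flags = triggered_tripwires({"days_fpds_ng_data_late": 45,
                             "mods_within_30_days_of_award": 1})
print(flags)  # ['days_fpds_ng_data_late']
```

As the report notes, a triggered tripwire would not by itself indicate a problem; it would prompt a closer look and, aggregated over time, supply trend data.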
The Army’s and Navy’s responses noted that actions to improve competition led to an 11 and 12 percent increase, respectively, in the rate of effective competition—situations where more than one offer is received in response to a competitive solicitation—for service contracts from fiscal year 2010 through 2012. In response to an open-ended question on recommendations for improvements, each military department suggested that USD(AT&L) take additional actions to increase departmentwide coordination on service acquisitions. Specifically, the Army and the Air Force recommended departmentwide service acquisition management meetings to coordinate on issues such as emerging regulations, directives, and policies to improve service acquisitions. In response, USD(AT&L) officials told us that the Director of DPAP meets with the military departments’ senior services managers regularly. DOD’s ongoing efforts to gauge the effects of its actions to improve service acquisition also offer opportunities for DOD to develop baseline data, establish goals, and identify departmentwide metrics to measure progress. For example, by analyzing and aggregating the results of its health assessments, each military department could establish baselines against which to assess individual commands and, over time, identify trends to determine if its commands are improving how they acquire services. Similarly, in coordination with the military departments, USD(AT&L) could use its tripwire approach to determine what percentage of DOD’s service acquisition strategies are not approved or require changes before approval. DOD could then use such information to help identify reasons why certain service acquisitions are not approved and determine appropriate corrective actions. DOD could further develop metrics associated with actions outlined in the Better Buying Power Initiative. 
For example, using its established services portfolio groups, DOD could develop baseline data on the degree of effective competition for services within each group. Depending on the results of that analysis, DOD could determine whether it would be appropriate to establish effective competition goals and metrics for each portfolio group or specific types of services within each group. In light of the billions of dollars DOD spends each year on services and the constrained fiscal environment, it is critical for DOD to identify how it can best utilize its financial resources and acquire services more efficiently and effectively. DOD leadership has demonstrated a commitment to improving service acquisition and management and has taken a number of actions to address legislative requirements. For example, USD(AT&L) and the military departments have focused more management attention on improving service acquisitions through new policies and guidance, reviews of high-dollar service acquisitions, and new tools and training for personnel who acquire services. Further, DOD recently designated the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics as the department’s senior manager for service acquisition and has established similar positions, including senior services managers, within each of the military departments. In some cases, however, DOD remains in the process of defining the duties and responsibilities of these positions. Taken collectively, these actions address the requirements of 10 U.S.C. § 2330 and section 807 of the NDAA for Fiscal Year 2012. DOD, however, does not know whether or how these actions, individually or collectively, have resulted in improvements to service acquisition. This is due, in part, to the fact that DOD continues to have limited knowledge and baseline data on the current state of service acquisition. 
To address this shortfall, DOD expects to obtain better service acquisition data by improving and linking data within its contract and financial systems, but this effort will not be complete until at least 2014. Having baseline budget and spending data can provide a foundation for measuring progress, but other factors, such as articulating its desired end state and developing specific and measurable goals, are also important for assessing progress. While developing specific goals and departmentwide metrics is challenging, it is not impossible. For example, DOD concurred with the need to set goals for the amount of spending managed through strategically sourced acquisitions, link strategic sourcing to its Better Buying Power Initiative, and establish metrics, such as utilization rates, to track progress toward these goals. However, DOD is currently missing opportunities to fully leverage its command-level assessments, feedback from the military departments, and other ongoing efforts it relies on to gauge the effects of its actions to improve service acquisition. Each of these efforts has merit and value in its own right. Nevertheless, until DOD utilizes them to develop baseline data, goals, and associated metrics, similar to what it has committed to do for its strategic sourcing efforts, DOD will continue to be in a position where it does not know whether its actions are sufficient to achieve desired outcomes. 
To better position DOD to determine whether its actions have improved service acquisition, we recommend that the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the military departments’ senior services managers, take the following three actions: (1) identify baseline data on the status of service acquisition, in part, by using budget and spending data and leveraging its ongoing efforts to gauge the effects of its actions to improve service acquisition; (2) develop specific goals associated with their actions to improve service acquisition; and (3) establish metrics to assess progress in meeting these goals. DOD provided us with written comments on a draft of this report, which are reprinted in appendix III. DOD concurred with the three recommendations, noting that they are consistent with DOD’s ongoing Better Buying Power Initiative. DOD also stated that as it improves its management of service acquisition, it should be able to measure performance, track productivity trends, and establish consistent best practices across the department. We agree that DOD has the opportunity to leverage its ongoing efforts as it works to implement our recommendations. By incorporating our recommendations into those efforts, DOD will be better positioned to determine whether its actions are improving service acquisition. DOD also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, Air Force, and the Navy; the Principal Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics; and interested congressional committees. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix IV. In 2001, Congress required the Secretary of Defense to implement a management structure for the acquisition of services under section 2330, title 10, United States Code (U.S.C.). This provision requires, among other things, the Department of Defense (DOD) to develop a process for approving individual service acquisitions based on dollar thresholds and other criteria to ensure that DOD acquires services by means that are in the government’s best interest and managed in compliance with applicable statutory requirements. Under DOD’s initial May 2002 guidance for implementing the required management structure and service acquisition approval process, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) was to review all proposed service acquisitions with an estimated value of $2 billion or more. The military departments and other defense components were to review service acquisitions below that threshold. The military departments each subsequently developed their own service acquisition approval processes that had several elements in common. Chief among these elements was the requirement that acquisition strategies be reviewed and approved by senior officials before contracts are awarded. Acquisition strategies to be reviewed were to include, among other things, information on contract requirements, anticipated risks, and business arrangements. Once acquisition strategies were approved, DOD contracting offices may continue the acquisition process, including soliciting bids for proposed work and awarding contracts. In January 2006, Congress amended 10 U.S.C. § 2330 to include additional requirements for DOD’s management of the acquisition of services. 
The amendment requires, among other things, that the senior officials responsible for management of acquisition of contract services assign responsibility for the review and approval of procurements based on the estimated value of the acquisition. Senior officials within DOD are identified as USD(AT&L) and the service acquisition executives of the military departments. In response to these requirements, USD(AT&L) issued an October 2006 memorandum to update its 2002 acquisition of services policy. The revised policy identifies categories of service acquisitions, based on dollar thresholds and related roles and responsibilities within USD(AT&L) and the military departments. The policy requires that all proposed service acquisitions with an estimated value of more than $1 billion be referred to USD(AT&L) and formally reviewed at the discretion of USD(AT&L). Acquisitions with an estimated value under that threshold are subject to military department acquisition approval reviews. USD(AT&L)’s 2006 acquisition of services policy was incorporated into Enclosure 9 of DOD’s 5000.02 acquisition instruction. In 2010, USD(AT&L) required that each of the military departments establish senior managers to be responsible for the governance in planning, execution, strategic sourcing, and management of service contracts. Additionally, these senior managers are to review service acquisitions valued at $10 million or more but less than $250 million. USD(AT&L) expects to issue a stand-alone instruction in 2014 for service acquisition policy to replace Enclosure 9 of DOD Instruction 5000.02. See table 2 for a summary of service acquisition review thresholds and approval authorities. The 2006 amendments to 10 U.S.C. § 2330 require DOD to take a number of other actions. 
For example, DOD is to develop service acquisition policies, guidance, and best practices; appoint full-time commodity managers for key categories of services; and ensure competitive procedures and performance-based contracting be used to the maximum extent practicable. In table 3, we summarize the actions that DOD took in response to the requirements in 10 U.S.C. § 2330. To do so, we collected USD(AT&L) and each military department’s self- reported information using a data collection template; corroborated reported actions with related documentation when available; and conducted interviews with knowledgeable agency officials to clarify responses. We did not evaluate the appropriateness or sufficiency of any actions taken or planned by DOD. Section 802 of the National Defense Authorization Act (NDAA) for Fiscal Year 2010 required the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) to direct the Defense Science Board (DSB) to independently assess improvements to the Department of Defense’s (DOD) acquisition and oversight of services. The resulting March 2011 DSB report, “Improvements to Services Contracting,” contained 20 recommendations aimed at improving DOD’s contracting for services. These recommendations focused on developing new policies and processes to strengthen management and oversight of services contracting, designating roles and leadership responsibilities, and strengthening the skills and capabilities of personnel involved in services contracting, including those in contingency environments. Subsequently, section 807 of the NDAA for Fiscal Year 2012 required USD(AT&L) to develop a plan, by June 28, 2012, to implement the DSB recommendations. The plan was to address, to the extent USD(AT&L) deemed appropriate, eight different elements most of which align with the DSB recommendations. 
USD(AT&L) officials told us they did not develop a specific plan to address the section 807 requirement, but that the April 2013 Better Buying Power Initiative memorandum addresses seven of the eight elements. In reviewing the memorandum, we also found that it reflects actions to address all of the elements except the one pertaining to training and exercises during contingency operations. USD(AT&L) also identified 23 different actions it has taken or plans to take that officials regard as addressing all of the elements the plan was to include, a number of which predate the April 2013 Better Buying Power Initiative memorandum. Table 4 provides a summary of the actions USD(AT&L) reported as addressing each of the eight section 807 elements. To determine if USD(AT&L) has taken or planned actions to address the elements in section 807, we collected USD(AT&L)’s self-reported information using a data collection template, corroborated reported actions with related documentation when available, and conducted interviews with knowledgeable USD(AT&L), military department, and Defense Acquisition University officials to clarify responses. We did not evaluate the appropriateness or sufficiency of any actions taken or planned by USD(AT&L). In addition to the contact name above, the following staff members made key contributions to this report: Johana R. Ayers; Helena Brink; Burns Chamberlain Eckert; Danielle Greene; Kristine Hassinger; Justin Jaynes; and Roxanna Sun.
In fiscal year 2012, DOD obligated more than $186 billion for contracted services, making it the federal government’s largest buyer of services. GAO’s prior work found that DOD’s use of contracted services has been the result of thousands of individual decisions, not strategic planning across the department. Over the years, Congress has legislated a number of requirements to improve DOD’s service acquisitions. For example, Congress required DOD to implement a service acquisition management structure, approval process, and policies. Congress also directed DOD to develop a plan to implement the Defense Science Board’s recommendations for improving service acquisition. The National Defense Authorization Act for Fiscal Year 2012 mandated that GAO report on DOD’s actions to improve service acquisition and management. GAO examined (1) the actions DOD has taken to respond to legislative requirements and (2) how DOD determines the effects of its actions to improve service acquisition. GAO reviewed documentation and interviewed DOD officials on the actions taken in response to the legislative requirements. GAO also assessed whether DOD addressed key factors, including establishing goals and metrics, to help it determine if it has improved service acquisition. Over the last decade, the Department of Defense (DOD) has taken several actions to address legislative requirements to improve the acquisition and management of services. In 2001, as amended in 2006, Congress required DOD to implement a management structure for the acquisition of services. In response, DOD implemented such a structure and service acquisition review and approval process. Recently, DOD also established new positions within its management structure, including senior managers within the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) and the military departments, to oversee and coordinate service acquisition. 
With a management structure and review process in place, USD(AT&L) is focusing on efforts to improve the process for how requirements for individual service acquisitions are developed and enhancing training to respond to several legislative directives. USD(AT&L) also created its Acquisition of Services Functional Integrated Product Team, in part, to determine how to address legislative requirements to provide training for personnel acquiring services. USD(AT&L) did not develop a plan to implement the Defense Science Board recommendations to improve service acquisition but identified 23 different actions it has planned or taken, including its Better Buying Power Initiative, that officials regard as addressing what the plan was to include. For example, USD(AT&L) is updating its guidance on using incentives to improve contractor performance, which addresses one of the elements that was to be in the plan. While DOD has taken a number of actions that address legislative requirements, DOD is not yet positioned to determine what effects these actions have had on improving service acquisition. Specifically, USD(AT&L) has not identified specific goals and associated metrics that would enable it to assess progress toward achieving those goals. USD(AT&L) has identified improving service acquisition as a priority but has not defined a desired end state for its actions or the measurable characteristics that would embody achieving such a goal. It is challenged in defining a desired end state for its actions, in part, because it has not determined the current status of service acquisition in terms of volume, type, location, and trends. DOD is taking steps to improve its contract and financial systems to obtain such data, but these efforts will not be complete until at least 2014. Further, DOD has not established departmentwide metrics to assess its progress in improving service acquisition but has acknowledged the need to do so, which officials described as challenging. 
Despite the challenges in doing so, it is not impossible. For example, DOD has agreed to set goals for the amount of spending managed through strategically sourced acquisitions, link strategic sourcing to its Better Buying Power Initiative, and establish metrics, such as utilization rates, to track progress toward these goals. However, DOD is not fully leveraging the command-level assessments, feedback from the military departments, and other ongoing efforts it relies on to gauge the effects of its actions to improve service acquisition. By using its budget and spending data and leveraging these efforts, DOD could develop baseline data and identify trends over time, enabling it to develop measurable goals and gain more insight into whether its actions are improving service acquisition. Until then, DOD will continue to be in a position where it does not know whether its actions are sufficient to achieve desired outcomes. GAO recommends that DOD establish baseline data, specific goals for improving service acquisition, and associated metrics to assess its progress. DOD concurred with the three recommendations.
Debtors who file personal bankruptcy petitions usually file under chapters 7 or 13 of the bankruptcy code. Generally, debtors who file under chapter 7 of the bankruptcy code seek a discharge of all their eligible dischargeable debts. Debtors who file under chapter 13 submit a repayment plan, which must be confirmed by the bankruptcy court, for paying all or a portion of their debts over a 3-year period unless for cause the court approves a period not to exceed 5 years. The Center report was based on data from 3,798 personal bankruptcy petitions filed principally in May and June 1996 in 13 of the more than 180 bankruptcy court locations. The petitions included 2,441 chapter 7 and 1,357 chapter 13 petitions. The researchers collected a wide variety of information about debtors’ income, expenditures, and debts from the schedules the debtors filed with their bankruptcy petitions. Because the debtors’ schedules used in the report must be obtained from the case files at each court location, obtaining the data used for the Center report represented a considerable investment of Center time and money. The data are not available from the automated databases maintained by the federal judiciary or the Executive Office of U.S. Trustees, the two principal sources of automated data on bankruptcy cases. On the basis of the Center report’s assumptions and the formula used to determine income available for repayment of nonpriority, nonhousing debt, the report estimated that about 50 percent of the chapter 13 debtors in the 13 locations combined would have sufficient income, after living expenses, to repay all of their nonpriority, nonhousing debt over a 5-year period; and an additional 19 percent could pay 60 percent or more over the same period. 
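The repayment-capacity estimates above rest on straightforward arithmetic: income left over after living expenses, accumulated over a 5-year (60-month) repayment period, compared against the debtor’s nonpriority, nonhousing debt. The sketch below illustrates that calculation; the function name and the dollar figures are hypothetical, and this is a simplified rendering of the approach rather than the Center report’s exact formula.

```python
def repayable_share(monthly_net_income, monthly_living_expenses,
                    nonpriority_nonhousing_debt, months=60):
    """Fraction of nonpriority, nonhousing debt repayable over the period.

    Income remaining after living expenses, accumulated over a 5-year
    (60-month) plan, is measured against the debt; the result is capped
    at 1.0 (full repayment) and floored at 0.0 (no income available).
    """
    available = max(monthly_net_income - monthly_living_expenses, 0) * months
    if nonpriority_nonhousing_debt <= 0:
        return 1.0
    return min(available / nonpriority_nonhousing_debt, 1.0)

# Hypothetical debtor: $200/month left after expenses against $15,000 of
# nonpriority, nonhousing debt repays 200 * 60 / 15000 = 80 percent.
share = repayable_share(2200, 2000, 15000)
print(round(share, 2))  # 0.8
```

Under this kind of calculation, a debtor falls into the report’s “could repay all” group when the share reaches 1.0 and into the “no income available” group when monthly income does not exceed living expenses.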
The report estimated that 5 percent of the chapter 7 debtors in the 13 locations combined could repay all of their nonpriority, nonhousing debt over 5 years; 10 percent could repay at least 78 percent, and 25 percent could repay at least 30 percent. The Center report also estimated that about 11 percent of chapter 13 debtors and about 56 percent of chapter 7 debtors were expected to have no income available to repay nonhousing debts. The Center report’s analysis was based on data from the initial schedules of current estimated monthly income, current estimated average monthly expenditures, and debts that debtors submitted at the time they filed for bankruptcy. There are two reasons to question whether broad conclusions about debtors’ ability to pay nonhousing debt can be made on the basis of the debtors’ statements of estimated income and estimated expenses at the time of filing for bankruptcy: The accuracy of the data in the debtors’ initial schedules is unknown, and no empirical study has been done to assess their accuracy. Moreover, debtors may generally amend these schedules as a matter of course at any time prior to final disposition of the debtors’ bankruptcy cases. The Center report assumed that debtors’ income and living expenses, as reported in those schedules, could be used to satisfactorily forecast debtors’ income and living expenses for a 5-year debt repayment period. However, the report did not include empirical evidence to support this assumption. There is some empirical evidence that this assumption may not be appropriate, at least for a portion of debtors who file for bankruptcy. The Center report relied on debtors’ self-reported data on current estimated income, current estimated expenditures, and debts at the time of filing and assumed that these data were accurate. 
Although the data in the various schedules are the only such information available at the time a debtor files for bankruptcy, the National Bankruptcy Review Commission report noted that “no study has yet been done to test the accuracy of the data as initially reported by debtors,” and it recommended random audits of debtors’ initial schedules. The effect of any inaccuracies in these schedules could be that the debtor’s actual net income is overstated or understated. The schedules that debtors complete on their current average monthly income and current average monthly expenditures indicate that debtors should estimate their income and expenditures. The data that debtors report in these schedules represent a snapshot in time, and debtors may generally amend their schedules at any time prior to final disposition of their bankruptcy cases. Such amendments were not included in the Center’s analysis. Amendments may be made for a variety of reasons, but there are no readily available empirical data on how frequently schedules are actually amended and the effect of such amendments on the income, expenditures, and debts that debtors report on their initial schedules. The Center report’s analysis assumed that the debtor’s income and expenses, as reported on the schedules filed with the bankruptcy petition, could be used to satisfactorily forecast his or her income and expenses during the course of a 5-year debt repayment period. In other words, the Center report assumed that a debtor’s reported income and expenses would remain uninterrupted and unchanged over the 5 years. This assumption is critical to the report’s estimate of the percentage of nonhousing debt that debtors could repay over 5 years. However, the Center report provided no empirical support for this assumption. Two factors raise questions about the validity of this assumption. 
First, the Center report provided evidence of instability in debtor income in the year preceding the debtor’s bankruptcy filing. About 77 percent of the 2,441 chapter 7 debtors and about 85 percent of the 1,357 chapter 13 debtors in the Center’s analysis reported having some wage income at the time of filing. However, about 68 percent of the chapter 7 debtors and about 50 percent of the chapter 13 debtors in the Center’s report also reported they had experienced a reduction in income during the 12 months prior to filing bankruptcy. As the Center report noted, it is not surprising that those who file for bankruptcy have suffered a loss of income prior to filing. Second, there is also some evidence that debtors may experience fluctuating income or expenses in the 5 years following the filing of their bankruptcy petitions. The findings of a 1994 report by the Administrative Office of the United States Courts (AOUSC) suggest that at least a portion of debtors could be expected to experience deterioration in their financial circumstances during the 5 years after filing for bankruptcy. AOUSC reviewed the outcomes of 953,180 chapter 13 cases filed between calendar years 1980 and 1988 and terminated by September 30, 1993. AOUSC found that debtors received a discharge in only about 36 percent of all chapter 13 cases terminated. A chapter 13 discharge is generally granted when a debtor successfully completes a court-approved repayment plan. A hardship discharge may be granted to chapter 13 debtors who fail to complete the plan payments due to circumstances for which the debtor should not justly be held accountable. AOUSC found that in about 14 percent of all chapter 13 cases terminated, the debtors were unable to maintain their payments; prior to termination, their cases were converted to chapter 7 liquidation, in which all eligible debts were discharged. The typical case that converted to chapter 7 did so about 2 years after the case was filed. 
AOUSC also found that about 49 percent of all chapter 13 cases terminated were dismissed, but data were not available on the reasons for the dismissals. The results of the AOUSC report caution against making broad conclusions about debtors’ ability to maintain debt payments over a 5-year period based on the data in the initial schedules alone. There is some evidence in the Center report that the intent of the analysis was to estimate debtors’ ability to pay their eligible dischargeable nonhousing debts—secured and unsecured. However, this is not explicitly stated, and the Center report did not clearly define the universe of nonhousing debts for which it estimated debtors’ ability to pay. The Center report defined the net income that debtors had available to pay nonhousing debts as the debtor’s net annual take-home pay less (1) living expenses (as defined in the report) and (2) payments toward “unsecured priority debt.” As examples of such debts, the report mentioned back taxes and past-due child support. These are examples of debts that are generally nondischargeable in bankruptcy proceedings. Debtors report unsecured priority debts on Schedule E. However, the debts to be listed in Schedule E can, in some cases, include both debts that are dischargeable and debts that are generally nondischargeable. Moreover, not all nondischargeable debts can be found in Schedule E. For example, certain student loans, debts arising out of drunk driving, criminal restitution, and criminal court fines are not dischargeable in bankruptcy, but such obligations would be appropriately listed as “unsecured nonpriority debt” on Schedule F. The Center report included student loans in unsecured nonpriority debt. However, it is not clear if these student loans represented only loans that were eligible for discharge. 
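The Center report’s available-income definition, as described above, can be written out directly: net annual take-home pay less living expenses and payments toward unsecured priority debt. The sketch below follows that definition; the dollar figures are hypothetical, and the floor at zero reflects the report’s category of debtors with no income available.

```python
def income_available_for_nonhousing_debt(net_annual_take_home_pay,
                                         annual_living_expenses,
                                         annual_priority_debt_payments):
    """Annual income available to pay nonpriority, nonhousing debt,
    following the Center report's definition; floored at zero."""
    return max(net_annual_take_home_pay
               - annual_living_expenses
               - annual_priority_debt_payments, 0)

# Hypothetical debtor (all figures illustrative):
available = income_available_for_nonhousing_debt(
    net_annual_take_home_pay=26400,
    annual_living_expenses=22800,
    annual_priority_debt_payments=1200,  # e.g., back taxes, past-due support
)
print(available)  # 2400
```

As the surrounding discussion notes, misclassifying debts shifts this figure: treating a nondischargeable Schedule F debt (such as certain student loans) as dischargeable overstates the income available to pay dischargeable debts, and the reverse understates it.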
Thus, the Center report may not have identified all generally nondischargeable debts for which the debtor would still be responsible following the close of his or her bankruptcy case. To the extent that the Center report understated nondischargeable debts, it would have overstated the net income that debtors would have available to pay dischargeable nonhousing debts. Conversely, to the extent that the report overstated nondischargeable debts, it would have understated debtors’ net income available to pay dischargeable debts. In addition, to the extent that the report assumed that dischargeable unsecured priority debts would be paid, it would have created a disparity in the report’s treatment of dischargeable nonhousing debts. A portion of personal bankruptcy debtors voluntarily agree to reaffirm, or repay, some of their dischargeable debts by entering into a reaffirmation agreement to remain personally liable for reaffirmed debts. According to the Executive Office of the U.S. Trustees, debtors tend to reaffirm secured debt, such as a home mortgage or car loan. By reaffirming these debts and keeping current on the payments, the debtors retain possession of the property secured by the debt. To the extent that debtors maintain their payments on reaffirmed debt, it would reduce the amount of income debtors have available to pay eligible dischargeable debts that were not reaffirmed. The Center report included in debtors’ living expenses the full value of any home mortgage payments the debtors listed in Schedule J. To the extent that the listed home mortgage payments actually represent the full payments required for home mortgage debt, the Center report assumed that debtors had reaffirmed their housing debt. However, the Center report did not deduct from debtors’ income the value of the payments required to pay the nonhousing debts that debtors stated it was their intention to reaffirm. 
Data provided by the authors of the Center report showed that for 12 of the 13 locations in the report (Dallas reaffirmation data were incomplete), secured nonhousing debt accounted for virtually all the average nonhousing debt that debtors intended to reaffirm. The average percent of total unsecured debt that debtors indicated they intended to reaffirm did not exceed 1 percent in any of the 12 locations. The effect of deducting from chapter 7 debtors’ income the payments required to repay reaffirmed secured nonhousing debts would be expected to vary across the 13 locations in the Center report because of the wide variation in intended reaffirmations by location. Using data provided by the authors of the Center report, we show in table 1 the percentage of chapter 7 debtors in each of the report’s 13 locations who stated their intent to reaffirm at least some of their secured nonhousing debts, the average percent of total secured nonhousing debt to be reaffirmed, and the average total dollar amount of secured nonhousing debts to be reaffirmed. The percentage of chapter 7 debtors who, according to the Center’s data, stated their intent to reaffirm at least some of their secured nonhousing debt ranged from about 23 percent in Los Angeles to about 73 percent in Indianapolis. The data also showed considerable differences for those locations within the same state. About 23 percent of chapter 7 debtors in Los Angeles reported their intent to reaffirm at least some secured nonhousing debt compared to about 42 percent in San Diego. The average percentage of secured nonhousing debt that chapter 7 debtors stated they intended to reaffirm ranged from about 23 percent in Los Angeles to about 61 percent in Memphis. The average amount of total debt to be reaffirmed ranged from about $1,362 per debtor in Los Angeles to $6,706 per debtor in Memphis. 
The averages for any specific location may be based on wide variation in the amount of debt that individual debtors stated it was their intent to reaffirm. The Center report presented data that combined results from all 13 locations on debtors’ available income to pay nonhousing debt. Because the Center report focused on the results from all 13 locations combined, it included little discussion of the considerable variations among the 13 locations used in the study. As previously discussed, Center data not included in the report showed a wide variation across the 12 locations with complete data for chapter 7 debtors’ intended reaffirmations of secured nonhousing debt. Specifically, as shown in table 1, the percentage of chapter 7 debtors reaffirming at least some secured nonhousing debt ranged from about 23 percent to 73 percent, and the average amount of total debt to be reaffirmed ranged from about $1,362 to $6,706. Data provided by the report’s authors showed that the percentage of chapter 7 debtors with at least some income available to pay nonpriority, nonhousing debt ranged from about 32 percent in San Diego to about 67 percent in Dallas. Other studies have also concluded that there is considerable variation among bankruptcy districts. The National Bankruptcy Review Commission found, for example, that chapter 13 practices “differ dramatically from state to state, district to district, and even from judge to judge in the same district.” The Commission report noted that divergent local interpretations of the chapter 13 system create a situation in which expert legal advice is necessary to develop, confirm, modify, and complete a chapter 13 plan; and debtors in very similar circumstances encounter extremely different chapter 13 systems across the nation. The AOUSC report on chapter 13 cases discussed earlier found considerable variation in case results among all bankruptcy districts and among the 13 districts included in the Center report. 
As shown in table 2, the percentage of terminated chapter 13 cases that resulted in a discharge following a successfully completed repayment plan ranged from about 15 percent in Central California (which includes Los Angeles) to about 40 percent in Western Missouri (which includes Kansas City). The percentage of chapter 13 cases that were converted to chapter 7 liquidation prior to termination ranged from about 8 percent in Western Tennessee (which includes Memphis) to about 43 percent in Western Pennsylvania (which includes Pittsburgh). These variations among bankruptcy districts—for percentage of debtors with at least some income to pay debts, for reaffirmations, and for the final disposition of chapter 13 cases—suggest that one should be cautious in generalizing about debtors across all 13 locations in the Center’s report. The Center’s researchers selected the 13 bankruptcy locations and 3,798 personal bankruptcy petitions without using scientific random sampling techniques. As a result, the national estimates presented in the report’s conclusions were not based on representative probability sampling methods. In addition, standard statistical methods cannot be used to evaluate the likely accuracy of the Center report’s results. Consequently, the methods used in the Center’s analysis do not provide a sound basis for generalizing the Center report’s findings to the annual 1996 filings in each of the 13 locations or to the national population of personal bankruptcy filings. The 13 court locations used in the report were judgmentally selected from large urban areas with a Credit Counseling Center and large bankruptcy caseloads. The locations were also chosen to include variations in other characteristics, such as the growth in bankruptcy filings, the split between chapter 7 and chapter 13 filings, and state-specific asset exemption levels for chapter 7.
Indeed, the Center report showed that the courts that were included differed considerably in the total number of filings, the proportion that were chapter 7 and chapter 13 personal bankruptcy filings, and the change in the total number of filings from 1995 to 1996. Neither the court locations nor petitions were chosen with the objective of identifying the range of debts—lowest to highest—that bankruptcy debtors could repay. The total number of personal bankruptcy petitions filed in 1996 varied greatly among the 13 court locations. To account for this fact, the Center report stated that the sample was weighted so that the report’s weighted estimates that combined information from all locations represented the total filings from these 13 court locations. This means that the Center report’s estimates were strongly affected by those court locations that had the highest number of personal bankruptcy petitions filed in 1996. For example, about 41 percent of all 1996 chapter 7 filings in the 13 court locations were from Chicago and Los Angeles. The 17 percent of the sampled chapter 7 filings from Chicago and Los Angeles were therefore inflated to correctly represent the relative size of the Chicago and Los Angeles locations among the 13 locations. All of the Center report’s weighted estimates, including those labeled as national estimates, were weighted to represent only these 13 locations. The Center report’s authors provided us with data, not included in the report, that indicated that the predicted abilities of those who filed for chapter 7 personal bankruptcy to repay debts varied considerably among the 13 court locations (see table 3). For example, the percent of chapter 7 debtors whom the report determined had some income available to repay debt ranged from a low of about 32 percent in San Diego to a high of about 67 percent in Dallas. 
The considerable variation among locations indicates that the repayment rate at other locations, and for the nation as a whole, could differ from the combined, weighted estimate for these 13 locations. The Center report’s authors stated that its results cannot be generalized to all personal bankruptcy petitions filed nationally because the sample was not designed for this purpose. Consequently, the national estimates presented in the conclusion of the Center report are not supported by the report’s study methods. The Center report states that the sampling procedures used to select petitions from the 13 court locations resulted in a sample that was representative of all petitions filed in those locations. From our review of available information on the report’s sample design, we have determined that statistical probability sampling methods were not used to select the petitions filed within each court location. The Center’s petitions were gathered from several months and generally included the petitions filed in the first few days of the months of May and June (eight locations); June only (three locations); or July only (one location) of 1996. In the remaining location, the petitions were selected by the clerk of the bankruptcy court during April, May, and June 1996. Because the sample procedure for selecting filings within bankruptcy court locations was not random, the characteristics of the petitions drawn may be systematically influenced by variation in the types of filings that can occur (1) in different months throughout the year and (2) for days within the month. Consequently, standard statistical sampling methods cannot be used to determine whether the results in the Center report were likely to be representative of all bankruptcy filings in each of the 13 court locations. 
The Center report evaluated the possibility that the petitions from May to July that were included in the analysis might differ from those filed during other months of the year by examining supplementary data for other seasons from Indianapolis. On the basis of the Indianapolis analysis, the authors conclude that “a concern that seasonal differences in petitions could lead to an overstatement of the ability to repay debt across all petitions filed during 1996 is unwarranted.” The Center report provided no basis for judging whether the lack of monthly variation in Indianapolis could be expected in all 13 court locations. The petitions within each court location were not selected from filings over complete monthly periods and, therefore, could be affected by variations in the characteristics of petitions filed at different times of the month. In a few court locations, because of especially high filing volumes, the sample quotas were reached in the first day or two of the month. For example, our analysis of the Center’s data showed that about 95 percent of the petitions selected in Dallas and Houston, Texas, were filed by the third day of the month. At both of these locations, the petitions drawn had been filed prior to the first Tuesday of the month, the date on which mortgages are foreclosed in Texas. Thus, the petitions used from these two locations may have included a disproportionate number of debtors who sought to avoid mortgage foreclosures under chapter 13. The income and expenses for such filers may vary from those of debtors who filed in these locations later in the month. The report’s authors told us that they planned to sample additional petitions in Dallas and Houston to examine this possibility. 
The comments and observations in our report are based on a review of the final version of the Center report, dated October 6, 1997; some additional information we requested from the report’s authors; data and analyses provided by the Federal Judicial Center (FJC) on bankruptcy filings in the 13 locations used in the Center report; telephone interviews with bankruptcy judges and trustees; and our experience in research design and evaluation. On November 13, 1997, we met with Professor Michael Staten, coauthor of the report, to discuss our questions and observations about the report. Following this meeting, Professor Staten and his coauthor, Professor John Barron, provided additional information about the report’s methodology and some additional data that we requested. We received the last of these data on December 23, 1997. The authors declined to provide a copy of the automated database used for their analysis, citing their interest in retaining its proprietary value. The team that reviewed the report included economists from our Office of Chief Economist and specialists in program evaluation, statistical sampling, and statistical analysis from our General Government Division’s Design, Methodology, and Technical Assistance group. We did our work principally between October 1997 and January 1998 in Washington, D.C. Professors Michael E. Staten and John M. Barron, authors of the Center’s report, provided written comments on a draft of this report. (See app. I.) The authors discussed each of the report’s five areas of concern that, together, led to our conclusion that additional research and clarification would be needed to confirm the accuracy of the Center’s report’s conclusions regarding the proportion of debtors who may have the ability to repay at least a portion of their nonpriority, nonhousing debts and the amount of debt such debtors could repay. 
In discussing the five areas of concern, the authors agreed with some concerns but believed that other concerns were either overstated or unwarranted. Their specific comments on the concerns raised in this report are discussed and evaluated at the end of appendix I. We focus here on the authors’ major comments and our evaluation of those comments. Basically, the authors disagreed with us over the implications of the concerns we raised. They believe that the sample of bankruptcy cases they examined was large enough and was taken from a sufficiently varied cross-section of cities and courts to (1) reveal a significant number of chapter 7 petitioners with some capacity to repay their debts and (2) suggest a need for policymakers to reexamine whether the current bankruptcy statutes should be changed. They also believe that determining debtors’ ability to pay their eligible dischargeable nonhousing debts, which the Center report did not do, was an interesting but unimportant side issue. Although they agreed that it would be difficult to use their results to estimate with any precision the repayment ability of chapter 7 debtors outside of their sample, they believed that their sample results, regardless of the concerns we found, strongly suggest a widespread substantial repayment capacity. They provided additional data and analysis, not included in the Center’s report, on reaffirmations of secured nonhousing debt to further support their conclusions. We continue to believe that the concerns we found strongly suggest that additional research and clarification are needed to determine the accuracy of the Center report’s conclusions regarding the proportion of debtors who may have the ability to repay at least a portion of their nonhousing debts and the amount of debt they could potentially repay. 
We note in this regard that the Credit Research Center is currently conducting additional research with its bankruptcy database, and the accounting firm of Ernst & Young is conducting a study to address the concerns we raise in this report. The Center commented that the study clearly indicates a widespread and “substantial” repayment capacity across all 13 locations in the study. We agree that the data and indicators used by the Center showed that the percentage of debtors in each location with at least some positive net income available for debt repayment was not so small as to be negligible. However, the assumptions, data, and sampling procedures used in the Center report raise questions concerning the accuracy and usefulness of the report’s estimates and require the reader to use caution in interpreting the types of firm conclusions found in the Center report. For example, the Center’s estimate of the percentage of debtors who have at least some capacity to pay included all debtors whose monthly net income after expenses was greater than zero, whether that amount was $1 or $1,000. We were not able to conclude, as the Center did, that there is a “substantial” repayment capacity in every city because (1) we do not have a basis for determining how much repayment capacity should be considered substantial; and (2) we cannot conclude that the petitioners’ net income, as derived from data in their initial schedules, can be accepted as an accurate estimate of debtors’ net income available for debt repayment for the following 5 years. Several factors suggest to us that those debtors with at least some capacity to pay would not be able to repay as much debt as the Center report assumed. 
For example, historically only about one-third of chapter 13 debtors have completed their repayment plans, suggesting that for two-thirds of debtors something changed between the time the plans were confirmed by the bankruptcy court and the time the actual repayment plan was to be successfully completed. To the extent that debtors are unable to maintain their debt repayments for the full 5-year period assumed in the report, the amount of debt repaid would be less than the report estimated. In addition, the Center’s estimates of repayment capacity do not include any provision for the costs of administering a repayment plan. In fiscal year 1996, 14 percent of the payments from chapter 13 debtors were used to pay administrative and legal costs. The Center report provided an estimate of the potential repayment capacity of debtors who have filed for bankruptcy to pay their nonpriority, nonhousing debts. We do not agree with the Center that identifying the universe of dischargeable debts that a debtor may have the capacity to repay is an interesting, but unimportant, side issue in assessing a debtor’s ability to repay his or her nonhousing debts. It is the debtor’s total eligible dischargeable debts that represent the potential loss to creditors if the bankruptcy court grants the debtor a discharge of all his or her eligible dischargeable debts. The Center report did not attempt to identify this universe of debts in its analysis. Creditors are not at risk in the bankruptcy process for debts that are nondischargeable or debts that the debtor reaffirms. Similarly, creditors are not at risk through the bankruptcy process for the dischargeable debts of those debtors whose bankruptcy cases are dismissed. With few exceptions, these debtors remain personally responsible for all their debts.
The relevant universe of debtors who pose a risk of nonpayment to creditors through bankruptcy are those who complete the bankruptcy process and receive a discharge of all or part of their eligible dischargeable debts. The Center report did not attempt to estimate the capacity to pay of this universe of debtors. Instead, the Center’s assessment of capacity to pay included those debtors who may have received a discharge plus those debtors whose cases were dismissed and did not receive a discharge. Consequently, the Center report’s universe of debtors included debtors who remained responsible for their eligible dischargeable debts because their cases were dismissed. The Center agrees that the Center report’s findings were not based on data from a nationally representative scientific, random sample. The Center comments that the researchers did not intend to obtain a nationally representative sample and that much useful information can come from samples that are not nationally representative. Although decisions with nationwide implications could be based on evidence from selected locations, we believe that the assumptions, data, and methods used in the Center report require that its conclusions—which, in some cases, are stated as broad national estimates—be interpreted with caution. The additional data provided in the comment letter are helpful; but, as discussed in our comments at the end of appendix I, we did not have the database used for these analyses to verify the results. More importantly, these new data do not resolve many of the concerns we raise in this report. For example, the weighting methodology used to develop the weighted estimates presented in the new tables is the same methodology used for the Center report’s other estimates and is subject to the same limitations we discussed in our report. 
As with the Center’s other estimates, the assumptions used in the new analyses assumed that 100 percent of debtors’ discretionary income and 100 percent of the proceeds from the sale of the debtors’ nonexempt assets would be used to repay debt. In practice, administrative costs would reduce the amount paid to creditors. Thus, notwithstanding the comments and additional information provided by the Center report’s authors, we continue to believe that more research would be needed to verify and refine the Center report’s estimates of debtors’ repayment capacity to better inform policymakers. We are providing copies of this report to the Chairman and Ranking Minority Member of the Senate and House Committees on the Judiciary; the Chairman and Ranking Minority Member of the Subcommittee on Commercial and Administrative Law, House Committee on the Judiciary; and to the authors of the Credit Research Center report. We will also make copies available to others upon request. If you have any questions, please call me at 512-8777. The following are GAO’s comments on specific issues included in the letter dated January 21, 1998, from Professor Michael Staten, on behalf of himself and his coauthor, Professor John Barron. Other issues discussed in the letter have been included in the report text. 1. The authors agreed that there is a need to validate debtors’ income, expenses, and debts in developing assumptions of future income and expenses but stated that a researcher currently has no recourse but to accept what the debtor advises the court under oath at the time the petition is filed. We understand that researchers must use the best available data and that, currently, verifiable data on debtors’ income and expenses during bankruptcy have not been developed. However, our intent was to indicate that the Center report should have discussed how the use of data from debtors’ initial schedules could affect the Center report’s results and, thus, how those results should be used. 
In this case, it seems the researchers could have used more recent data, at least for some debtors, because debtors may amend their initial schedules at any time prior to the final disposition of their bankruptcy cases. Such amendments could alter the estimated income, estimated expenditures, and debts that debtors reported on their initial schedules. We recognize that obtaining these amended schedules would have required additional time and resources. However, we believe that the importance of these data to the overall conclusions in the Center’s report would justify such an effort. The authors also said that to the extent there is a bias in the debtors’ initial schedules, it would be expected that the debtors would understate their capacity to repay debt. Although this may seem logical at first glance, it is important to note that there are no empirical data on the accuracy of the data reported in debtors’ initial schedules. Nor is there any empirical basis for assuming that debtors would consistently attempt to understate their capacity to pay their debts. In fact, there is no empirical basis for assessing whether debtors generally overstate or understate their capacity to repay on their initial schedules or the general amount of the overstatement or understatement. There may be several reasons why some debtors would actually overstate their capacity to pay. For example, some people may simply not want to admit how serious their financial situation has become in order to protect certain assets. Also, mistakes could be made in the schedules used in the Center’s analysis, which are not easily interpreted by debtors who might proceed without legal or financial assistance. For example, in Los Angeles, a location whose data contributed significantly to the Center’s final weighted estimates, Center data showed that about one-third of debtors reported they had no lawyer. 
Through mistakes in filling out the schedules, debtors could report information that would have the effect of either overstating or understating their capacity to pay their debts. 2. The Center stated that its calculations provided a benchmark of debtors’ ability to pay that could easily accommodate whatever assumptions about possible income changes the reader wished to make. It also agreed with us that incorporating a cushion into a chapter 13 repayment plan to guard against income interruptions or unexpected expenses seemed to be a prudent step. We agree that the Center report provided a baseline estimate of debtors’ ability to pay that would change as the report’s basic assumption—that debtors’ income and expenses would remain unaltered for 5 years—changes. However, the Center provided no estimates based on alternative assumptions of repayment capacity, and without the Center’s database, it is not possible for anyone to estimate the effect of such alternative assumptions on the Center report’s estimates of debtors’ potential repayment capacity. Since many economic factors can change in a debtor’s financial situation during 5 years, it would seem prudent to base any policy decisions on a wider range of assumptions than the somewhat optimistic set of assumptions used in the Center study. For example, the assumption that debtors’ reported income and expenditures would remain unchanged for 5 years had the effect of providing optimistic estimates of debtors’ repayment capacity in two ways: (1) it did not allow for situations where the debtors’ income decreases or expenses increase, thus discretionary income available to pay debt was assumed to remain unchanged for 5 years; and (2) 100 percent of this discretionary income was assumed to be used for 5 years to repay debt, when in fact a portion of the debtors’ discretionary income would be used to pay the expenses of administering the debtors’ repayment plans. 
There is some additional evidence that the Center’s assumption that debtors’ income and expenses would remain unchanged for 5 years may be optimistic. For example, the AOUSC report discussed on page 8 of our report showed that only about 36 percent of chapter 13 debtors completed their repayment plans. The reasons for this low completion rate are unknown, but it illustrates the substantial discrepancy between the amount that debtors could potentially repay, based on the data and assumptions used in the Center report, and what has actually occurred over a 10-year period. In addition, in virtually all cases, creditors do not receive 100 percent of debtors’ payments under chapter 13 repayment plans. Fiscal year 1996 data from the Executive Office of U.S. Trustees showed that 14 percent of payments were used to pay the debtors’ lawyers, the chapter 13 trustees’ statutory operating expenses in administering the plans, and other administrative expenses. 3. The Center said that it believes it has clearly identified the universe of debts for which it estimated debtors’ ability to pay as all “debts not secured by real estate, without drawing a distinction between secured vs. unsecured, priority vs. non-priority, or dischargeable vs. non-dischargeable.” The Center commented that the distinction between dischargeable and nondischargeable debts is simply “an interesting side issue.” The Center said that such distinctions between categories of debt were not necessary if the report’s intent was to assess debtors’ overall ability to meet their obligations. The Center also said that unsecured priority debt was not included in the base of total unsecured debt for many of the repayment calculations, because the report assumed that unsecured priority debt would be paid before unsecured nonpriority debt. We do not agree that the distinction between dischargeable and nondischargeable debt is just an interesting side issue.
The distinction is important if the Center’s data are to be used for considering the need to alter existing bankruptcy statutes. It is the debtor’s total eligible dischargeable debts that represent the potential loss to creditors if a debtor is granted a discharge of his or her eligible dischargeable debts. The Center did not attempt to identify this universe of debts in its analysis. Creditors are not at risk in the bankruptcy process for debts that are nondischargeable in bankruptcy or for eligible dischargeable debts that the debtor reaffirms. Total dischargeable debts are total debts less total nondischargeable debts. As discussed on pages 9 and 10 of our report, the Center report may not have fully identified all eligible dischargeable debts, because it excluded data on unexpired leases from Schedule G, such as automobile leases. Thus, the Center did not identify that universe of debts for which creditors are at risk in the bankruptcy process. As we note on pages 9-10 of our report, to assume that all unsecured priority debts would be fully paid over 5 years but that no other class of nonhousing debts would be fully paid creates a disparity in the treatment of nonhousing debts that does not reflect actual bankruptcy practice. In chapter 13 repayment plans, secured debts would ordinarily be paid before or concurrently with unsecured priority debts. Consequently, the Center report’s calculations did not provide an estimate of the amount of unsecured nonpriority debt that could be repaid. If the Center report’s purpose was simply to identify debtors’ overall ability to pay nonhousing debts from net income after reported expenses, then the report should have included unsecured priority debt with all other nonhousing debts—secured and unsecured—and calculated debtors’ ability to pay the resulting total nonhousing debt. 4. 
The Center agrees with us that its calculations of debtors’ ability to repay their nonhousing debts did not consider the payments required to pay the nonhousing debts that debtors stated it was their intent to reaffirm (repay). The Center notes that the February 1997 testimony of law professors Marianne Culhane and Michaela White before the National Bankruptcy Review Commission stated that about 50 percent to 60 percent of intended reaffirmations (the data used in the Center report) actually result in signed reaffirmation agreements in which debtors reaffirmed their debts. Thus, they noted it is possible that the number of final reaffirmations could be less than that reported in debtors’ statements of intent. We agree that the number and dollar value of debts that debtors ultimately reaffirm could be more or less than those found in debtors’ statements of intent. We believe that this further supports our overall conclusion that the results in the Center’s report should be viewed with caution. In September 1997, professors Culhane and White reported updated results of their study, which were based on debtor reaffirmations in only 7 of the 90 bankruptcy districts, and thus must be considered illustrative, not conclusive. Nevertheless, the reaffirmation report’s findings provide additional evidence that one should be cautious in interpreting conclusions based solely on debtors’ initial schedules, such as schedules of income and expenses as well as reaffirmations. For example, the reaffirmation report found that debtors filed fewer reaffirmations than indicated in their statements of intent and that the debts that debtors ultimately reaffirmed were often quite different from those that debtors stated it was their intention to reaffirm. The reaffirmation report and the Center’s data indicated that debtors rarely stated their intention to reaffirm unsecured debts. 
However, the reaffirmation report found that debtors in fact ultimately reaffirmed unsecured debts as well as debts that were not listed in their initial schedules at all. The reaffirmation report also noted that court records provide an incomplete picture of reaffirmations, because debtors may also sign reaffirmations with creditors that the creditors fail to file with the court, as required. In addition, the reaffirmation report reinforces our concern that local court bankruptcy practice and rules may affect the data that debtors report on their initial schedules and in the data found in debtors’ court files generally. For example, the reaffirmation study found that the number of final reaffirmation agreements filed with the bankruptcy court in each district appeared to be affected by governing court decisions for the districts studied. In two districts, the debtor could keep property, such as a car, by simply maintaining ongoing contractual payments on the property. Thus, it was not necessary for the debtor to file a reaffirmation agreement with the court in order to keep the property. In two other districts, court decisions required the debtor to file a reaffirmation agreement or surrender or redeem the property. The number of final reaffirmation agreements was lower in those districts that did not require a reaffirmation agreement in order for the debtor to keep the property. However, the report said that the data did not permit an empirical evaluation of the extent to which such controlling court decisions affected the number and type of reaffirmations that debtors in the report ultimately filed with the bankruptcy courts. 5. The Center comments included data and analyses, not previously provided, that the Center said address the impact of reaffirmations on debtors’ ability to pay their nonhousing debts. These new analyses are based on weighted data for the 13 locations included in the Center’s study. 
We cannot assess the accuracy of the data in the tables because we do not have the database used to develop these tables and, therefore, cannot replicate how the new estimates were derived. However, we do have some overall observations on these new data. First, the weighted data are based on the same weighting methodology used for the Center report's other estimates and, therefore, are subject to the same limitations of that weighting methodology that we noted in our report. The weights are heavily influenced by filings in two locations—Chicago and Los Angeles—which accounted for about 41 percent of all bankruptcy filings in the 13 locations. Second, the presentation of the tables in the comments needs clarification. For example, table 1 of the comments does not indicate that all dollar amounts in the table are averages, which they are. The table also does not clearly indicate that the amount of nonhousing debt shown is the total amount of nonhousing debt—secured and unsecured—less unsecured priority debt. Third, the assumptions underlying the data in table 2 are not explained. For example, line “D” of table 2 is supposed to represent the amount of unsecured nonpriority debt that could be paid over 5 years from future income after liquidating all of the debtor's nonexempt property, if any. The calculation appears to assume that (1) when surrendered and liquidated, the collateral would bring 100 percent of its value as listed in the debtor's initial schedules; and (2) 100 percent of the proceeds realized from the liquidation would be used for repaying the debt secured by the collateral. We found no basis for either of these assumptions. For example, when a debtor's nonexempt assets are liquidated to pay creditors, the assets may bring more or less than the value listed in the debtor's schedules. 
Moreover, there are usually expenses associated with liquidating a debtor’s nonexempt assets, such as statutory bankruptcy trustees’ commissions and appraiser or auctioneer fees. Such expenses would reduce the amount paid to creditors because these costs would be paid before any remaining proceeds were distributed to creditors. The data in the new tables are subject to the same limitations as other estimates of debtors’ ability to pay included in the report. The tables are based on the assumptions, used throughout the report, that debtors’ income and expenses would remain unchanged over a 5-year period and that 100 percent of a debtor’s discretionary net income will be used for debt repayment. As previously discussed, both logic and available evidence would suggest that these are not realistic assumptions. For example, the Center provided us data, not included in its report, which showed that the majority of nonhousing secured debt was vehicle debt. The data in the new tables 2 and 4 provided with the Center’s comments assumed that the debtor’s automobile would be sold, and no replacement obtained. The absence of an automobile could very well affect a debtor’s employment and, thus, a debtor’s future stream of income. 6. The Center’s comments noted that it would be difficult to estimate with precision debtors’ ability to pay their nonhousing debts in any location other than the 13 locations included in the Center report. On the other hand, the Center concluded that debtors’ data in all 13 locations showed a substantial repayment capacity, despite the great diversity in the characteristics of the 13 locations, such as unemployment rates and the percent of total personal bankruptcy cases that were chapter 7 cases. The Center stated that this showed that substantial repayment capacity is a widespread phenomenon, whether or not the report’s findings are applicable to other locations. 
We agree that the Center’s data show that some debtors who file for bankruptcy under chapter 7 may have some capacity to repay their debts. But, from a policymaking standpoint, the more relevant questions are whether the Center report’s findings provide a reasonable estimate of that repayment capacity and whether the Center’s defined universe of debtors and debts used to estimate repayment capacity was appropriate for assessing the need for a change in current bankruptcy laws. As previously discussed, we believe the Center’s universe of both debts and debtors may not be the appropriate ones for assessing whether current bankruptcy statutes should be changed. In answering these questions, it is also important to note that the data used for the Center report were based on information debtors provided at a single point in time—the time they filed for bankruptcy—regardless of whether or not they completed the bankruptcy process and received all or part of the relief they sought in filing for bankruptcy. Thus, the report included data from debtors who may have withdrawn their petitions voluntarily, had their petitions dismissed by the court, or who received bankruptcy court discharges of all or part of their eligible dischargeable debts. For example, in Los Angeles, of those chapter 7 petitions filed on the same days of May and June 1996 as those petitions used in the Center sample, about 5 percent had been dismissed by September 30, 1996. For chapter 13 petitions, more than 30 percent had been dismissed during the same period. In contrast, not more than about 4 percent of chapter 7 and 13 petitions in San Diego had been dismissed within 90 days. 
Because the report’s findings include debtors who did and did not receive a discharge of their eligible debts, the report’s findings cannot be used to reach conclusions about the most relevant public policy question—the potential ability to pay of debtors who received a discharge of all or part of their eligible dischargeable debts. 7. The Center agreed with our general conclusion that scientific, random sample methods were not used to select the bankruptcy petitions used in the Center’s analysis. However, the Center said that the lack of a scientific, random sample did not necessarily diminish the usefulness of the Center report’s findings. The Center commented that it did not intend to obtain a nationally representative probability sample and agrees that it did not use a scientific random sampling methodology to select the 13 bankruptcy locations or the bankruptcy petitions used in the analysis. The Center also states that most social science research is conducted with samples that are not nationally representative probability samples and concludes that much useful information comes from samples that are technically less ambitious than the standard that we applied. Our evaluation assumed that the Center report may be used for important policymaking on a national scale. As a result, we believe that it is appropriate to inform the Committee that the Center report’s data do not meet scientific standards for estimating the characteristics of bankruptcy debtors for the United States as a whole or for all bankruptcy debtors in each of the 13 locations. 8. The Center discussed our observations on its methods of selection for each of the three steps at which petition selections were made without probability selection methods. The Center agrees with us that the 13 locations were not selected using probability selection techniques and, thus, may or may not be representative of the remaining courts in the United States. 
The Center commented that nonprobability samples have been used in some previous studies of bankruptcies, including a study by GAO, and that the purpose of the Center study was to form and test hypotheses about potential causal factors. The Center also stated that the study has potential value for policymakers because the large sample from a varied cross-section of courts identifies significant numbers of petitioners with some capacity to repay debts and because there was a finding of substantial repayment capacity in every city in the study despite the great diversity in city/court characteristics. We agree that this diverse set of 13 locations demonstrates that, based on the data and assumptions the Center used, the Center's indicators of debtors' ability to repay debts are found at greater than negligible rates at all locations. However, we also concluded that users of the Center data should consider the variation among locations and the lack of a national estimate as limitations. The variations among the studied cities might be important for some policy purposes. In addition, we cannot confirm the Center's conclusion that there is a “substantial” repayment capacity in every city, because we do not have a basis for determining how much repayment capacity should be considered to be substantial and because, as explained above, we cannot conclude that petitioners' reports of income on bankruptcy petitions can be accepted as an accurate estimate of income for the following 5 years. 9. The second sampling issue on which the Center commented was what the Center referred to as seasonality—the fact that the Center's petitions were filed in the spring and summer months and might not be representative of petitions filed at other times of the year. 
The Center stated that ample evidence from previous researchers and supplemental testing in the Center study suggest that the potential bias from focusing only on cases filed in the spring and summer months is negligible. The Center cited as evidence the findings from two previous studies and the Center's analysis of Indianapolis petitions from its current study. The Center and we agree that the study petitions were drawn from a limited part of the year. We did not have a sufficiently strong basis to conclude whether seasonality factors could have affected the Center report's estimates of the debtors' repayment capacity. The two previous studies cited in the Center's comments did not address the effect of season on debtors' ability to repay debts but only examined the number of chapter 7 and chapter 13 bankruptcy filings by season. The results of the Center's analysis do show that season of the year did not affect estimates of the ability to pay in one city, Indianapolis. We agree that we do not have a strong theoretical reason for expecting a seasonal effect. However, in the absence of evidence from more than one location, and in view of the fact that the present study is strongly concentrated by season, we continue to believe that the season in which the petitions were selected should be considered a limitation in interpreting the results from the study. 10. The third sampling issue addressed in the Center comments was the time of the month from which the Center's petitions were drawn. The Center agrees that the bankruptcy petitions used in the study were generally drawn from days early in the month. The Center explains that the petitions were drawn from the beginning of the month to maintain tight control over the petition selection procedure and to minimize uncertainty about the characteristics of cases that were not studied. 
The Center maintains that Texas is the only one of the 13 locations where there is evidence or reason to believe that cases early in the month might differ from those late in the month. In Texas, there was a disproportionate number of past-due home mortgage chapter 13 petitions early in the month, and the Center said it was now drawing additional cases in Texas. In addition, the Center notes that although chapter 13 petitions might differ by time of month in Texas, it is not clear why the characteristics of the chapter 7 cases would differ over the month. The Center also notes that the study did not find differences in the values of variables that measure ability to pay at the one location, Indianapolis, that could be tested with the data from the Center study. We believe that the differences among petitions at different times of month in Texas should be considered. Those debtors who file for chapter 13 early in the month to prevent a mortgage foreclosure may have different financial characteristics from chapter 13 debtors filing later in the month. The only clear evidence of the absence of a time-of-month effect comes from a single court in Indianapolis. We are concerned that there may be other court-specific factors of which we are unaware. For example, working with the Federal Judicial Center, we learned that mortgage foreclosures early in the month also could affect the type of filings early in the month in Atlanta. Although the Atlanta filings for this study did not happen to be concentrated at the beginning of the month, the Atlanta example indicates that it is difficult to exclude time-of-month effects. Thus, we believe that the lack of representativeness by time of month should be considered in evaluating the study. 
Pursuant to a congressional request, GAO reviewed the Credit Research Center report on personal bankruptcies, focusing on the report's research methodology and formula for estimating the income that debtors have available to pay debts. GAO noted that: (1) overall, the Center report represents a useful first step in analyzing the ability of bankruptcy debtors to pay their debts; (2) because there is little empirical basis on which to assess the accuracy of the data used in the report's analysis, and because the data provided by the authors showed considerable variation among the 13 locations used for analysis, the report's general findings must be interpreted with caution; (3) GAO's review of the Center report suggests that additional research and clarification would be needed to confirm the accuracy of the report's conclusions regarding the proportion of debtors who may have the ability to repay at least a portion of their nonpriority, nonhousing debts; and (4) there were five areas of concern with the Center's report that could affect interpretation of the report's conclusions: (a) the report's assumptions about the information debtors provide at the time of filing bankruptcy regarding their income, expenses, and debts and the stability of their income and expenses over a 5-year period were not validated; (b) the report did not clearly define the universe of debts for which it estimated debtors' ability to pay; (c) payments on nonhousing debts that debtors stated they intended to reaffirm--voluntarily agree to repay--were not included in debtor expenses in determining the net income debtors had available to pay their nonpriority, nonhousing debts; (d) the report presented results based on data from all 13 locations combined and provided little discussion of the considerable variation among the 13 locations used in the analysis; and (e) a scientific, random sampling methodology was not used to select the 13 bankruptcy locations or the bankruptcy petitions used in 
the analysis.
Under the authority of the Arms Export Control Act, State requires exporters to obtain licenses for defense exports unless an exemption applies. State has long exempted the export of many unclassified defense items to Canada without prior department approval. While these items are exempt from licensing requirements, they are still subject to the provisions of the Arms Export Control Act. Exporters who use the exemption and violate any provisions of the Act are subject to fines, penalties, or imprisonment, if convicted. State requires exporters to register with its Office of Defense Trade Controls; determine whether the articles or services they are exporting are covered by the exemption; in most cases, obtain written documentation stating that exports are to be used for a permitted purpose; and inform the recipient that items are not to be re-exported without prior authorization from State. The Canadian exemption, first codified in 1954, grew out of the unique geographic relationship and strong economic trading partnership between the United States and Canada and their mutual interest in the defense of North America. The two countries share the world's longest unfortified border. They are also each other's largest trading partner. The countries are committed to maintaining a strong, integrated North American defense industrial base to help fulfill their defense and security responsibilities to the North Atlantic Treaty Organization and the North American Aerospace Defense Agreement, as well as for common defense of national territories. Appendix I provides a chronology of selected defense and economic agreements between the United States and Canada since 1940. The scope of the Canadian exemption has evolved since its inception. For example, earlier versions allowed the export and import of arms, ammunition, and implements of war, and the export of unclassified technical data without a license. 
Later versions changed the coverage to include defense services and increased the types of items requiring a license. In April 1999, State revised its regulations to clarify when the exemption could be used and limited the defense items that could be exported under the exemption. State took this action based on its analysis that exports were being re-exported from Canada to countries of concern without U.S. government approval and that controls over arms and ammunition transfers needed strengthening. Nineteen criminal investigations and seizure cases related to the Canadian exemption were identified, including 3 diversions to China, Iran, and Pakistan and 16 cases involving attempted diversions to these and other nations of concern or technical regulatory violations. For example, a major U.S. defense company exported U.S.-controlled communication equipment to its Canadian facility under the exemption and then re-exported the equipment to Pakistan without U.S. government approval. In another case, an Iranian intelligence group established a company in Canada and was accused of attempting to use the exemption to acquire U.S.-controlled components for the Hawk missile system. Appendix II summarizes these and other cases. In addition, State received 23 voluntary disclosures from exporters who inappropriately used the Canadian exemption. For example, in a few instances, exporters admitted providing technical manuals and software engineering support without obtaining State approval. State consulted with the Department of Defense about these cases and indicated that had these exporters submitted the appropriate license applications, the applications would likely have been denied. In another instance, an exporter submitted a voluntary disclosure after being contacted by law enforcement officials. In this case, the U.S. exporter was ineligible to export because the firm was debarred and under criminal investigation for diversions of military equipment to Iran and other locations. 
In response to export concerns identified by State, the U.S. and Canadian governments negotiated changes to their respective export control systems. The Canadian government changed its export control laws and regulations to cover all items currently controlled on the U.S. Munitions List, established a registration system in April 2001 for persons and entities in Canada eligible to receive U.S.-controlled items under the exemption, and required U.S. government approval for re-export of U.S.-controlled items from Canada or transfer of these items within Canada. In turn, State again revised the Canadian exemption effective May 30, 2001. This revision in general broadened the exemption to cover temporary imports of unclassified defense items, some defense services, and some additional items, but continued to exclude other items such as Missile Technology Control Regime items. Since the conclusion of the negotiations, the U.S. and Canadian governments have met and continue to exchange information on enforcement and border security issues not fully addressed during the negotiations. We found instances where exporters have been implementing the Canadian exemption inconsistently. Some items, such as technical data and smokeless ammunition powder, are being exported under the exemption by some exporters and not by others. These inconsistencies may result in the same item being licensed by State in some instances and not licensed in others, which may put some exporters at a disadvantage and lessen government oversight of exports. For example, exporters followed different processes when they exported technical data. Before April 1999, some exported technical data under the exemption for offshore procurement activities, based on their interpretation of the regulations. However, State officials said that exporters were required to obtain approval before they could export the data to Canada. 
In April 1999, State revised the regulations to clarify that the export of technical data for offshore procurement activity requires a license. After this regulatory revision, a number of companies or their subsidiaries voluntarily disclosed to State that they had inappropriately exported technical data or defense services for offshore procurement and other activities using the exemption. Since that time, some exporters said they were unclear about when they could export technical data and defense services under the May 2001 revised exemption because the language in the regulations was subject to interpretation. For example, we were told that design data under the new exemption was broadly defined, and in some instances has been interpreted as either subject to the exemption or requiring a license. Appendix III highlights the complexities of the regulatory language and major changes made to the Canadian exemption in recent years. Some exporters have been interpreting the May 2001 reporting requirements differently. This, in turn, can decrease the government’s visibility over sensitive exports. For example, exporters using the Canadian exemption are now required by the International Traffic in Arms Regulations to provide State “a semi-annual report of all their on-going activities authorized under this section.” Two exporters interpreted the phrase “under this section” to mean that the requirement solely pertained to defense services because it fell under the paragraph of the regulations entitled “Defense services exemption.” These exporters, therefore, reported activities exclusively involving defense services. Another interpreted the language to refer to all activities occurring under the Canadian exemption, including those related to defense articles and technical data, as well as defense services. 
When we discussed this matter with State officials, they said that the second interpretation was correct and that the report should encompass all activities and not just defense services. Some exporters also said they were unclear about certification requirements. The regulations require exporters to obtain written certification from Canadian companies that “the technical data and defense service being exported” will be used only for a specified activity. Some exporters said they obtain this certification when exporting defense articles in addition to technical data and defense services, following preface language in the Federal Register notice that changed the regulation in May 2001. One company we spoke with said that it only obtains this certification for defense services. Another company noted that when it tried to get a Canadian company to fill out the certification for a defense article being exported to Canada, the Canadian government told the company that only the regulations are legally binding, and certifications should be provided only for defense services and technical data. State officials said that this interpretation was correct, and the certification only needed to be obtained for defense services and technical data. It is important that exporters correctly interpret these requirements, since the certifications enable exporters to document their compliance with regulations. The effectiveness of the process depends on the exporters making the right decisions when interpreting regulations. However, State has an important role to play in responding to inquiries about exports. In some instances, we found that State provided inconsistent answers in situations where exporters or U.S. Customs officials responsible for enforcing export regulations raised questions to State about particular situations. For example: One exporter who had been shipping smokeless ammunition powder using the exemption was stopped by U.S. 
Customs for inspection on several occasions. Each time, U.S. Customs asked State if the exemption could be used. The first two times, State said yes, but on a subsequent occasion, it determined the powder was a Missile Technology Control Regime item and required an export license. This exporter has since obtained numerous licenses for this item. Another exporter shipping the same type of powder to Canada was also stopped by U.S. Customs for inspection. One time, State told U.S. Customs that the powder required a license, and another time said the item was not a Missile Technology Control Regime item and was, therefore, exempt. One exporter had planned to temporarily import aerial target aircraft for ongoing North Atlantic Treaty Organization (NATO) training exercises that were being conducted in the United States and informed U.S. Customs in advance that it was going to do this under the exemption. Under the changes made to the exemption in May 2001, such items are allowed to be imported temporarily. However, in this case, State denied the use of the exemption. State officials acknowledged this mistake to U.S. Customs and told us that it occurred inadvertently, immediately after the exemption change. Nevertheless, at the time, the exporter cancelled its plans, which, in turn, led to the cancellation of the remaining NATO training exercises. An exemption places the burden of proper implementation on exporters. Nevertheless, exporters said they needed guidance from State to assist them through the process of deciding what to export and what not to export under the exemption, as well as what activities to report. State officials said that exporters are to rely on the regulations as their guide. However, State has recently provided additional guidance through a question-and-answer guide the department prepared with industry to answer common questions about the May 2001 revisions to the exemption. 
This guidance answers some questions but does not lay out specific, clear criteria for deciding what is allowable under the exemption. State has also provided outreach in conjunction with the Canadian government on changes associated with the Canadian exemption and regularly sponsors additional training through the Society for International Affairs. In March 2002, State began an in-house training program on export licenses and agreements. Further, State issues advisory opinions on specific exports when requested by the exporter, but such opinions are specific to a particular export and are revocable. Clear and commonly understood guidance may help State and U.S. Customs officials answer questions that surface during inspections. Although an item may be exempt from State review and approval, it is still subject to U.S. export control law. Under an exemption, the burden for reviewing the legitimacy of the transaction shifts from State to the exporter. Therefore, a large part of the compliance and enforcement process under the Canadian exemption relies on the actions of exporters. While the U.S. government has some mechanisms in place to ensure that exporters are ultimately complying with export law and regulations, the government faces limitations in using these mechanisms. For example, export documentation is not always submitted or complete, border inspections are limited, and violations are difficult to prosecute. U.S. Customs officials cite other priorities and lack of staff and other resources as reasons for limitations in enforcement. Without more effective enforcement, the U.S. government is at greater risk of defense items being exported inappropriately. U.S. government enforcement mechanisms for defense exports are carried out by State and U.S. Customs. State encourages exporters to develop their own compliance programs and to voluntarily disclose when they have violated the exemption. 
State may direct a company to perform an internal control compliance audit or, if warranted, may seek civil penalties, administrative actions, sanctions, or referrals to the Justice Department. While State oversees these activities, the department primarily depends on U.S. Customs for many enforcement efforts. U.S. Customs, in turn, has various mechanisms to ensure that exporters meet regulatory requirements. For example, U.S. Customs examines export documentation, specifically the Shipper's Export Declaration. U.S. Customs inspectors may perform a physical inspection of an export crossing the border, and its agents investigate potential export violations. In addition, the Department of Justice can prosecute exporters who are suspected of violating export control laws. We identified a number of limitations in the compliance and enforcement process related to the Canadian exemption. For example, U.S. Customs inspectors are not assured that they are receiving all export declarations as required, nor are declarations always complete or accurate when they are submitted. Inspectors at the ports we visited noted that exporters often provide vague descriptions of what they are exporting, which makes it difficult to determine whether it is a defense item subject to the Canadian exemption. We also found that physical inspections of exports are limited. In fact, U.S. Customs officials said that they inspect less than 1 percent of exports. These limitations are attributed to a lack of information and resources and competing demands within U.S. Customs, which include interdiction of illicit drugs, illegal currency, and stolen vehicles, and since September 11, 2001, terrorism prevention. Of some 7,500 inspectors, about 400 are assigned to export enforcement activities at 301 ports. One port we visited had resorted to “borrowing” port staff from inbound operations inspections to inspect items being exported. 
Another port had only one person dedicated full time to export activities. When that inspector was not on duty, no one at the port inspected exports. According to Customs inspectors, staffing limitations make it extremely difficult for them to examine export declarations that cite the Canadian exemption. Inspectors are to perform several time-consuming tasks to ensure proper use of the exemption. For example, inspectors said that they should verify that an exporter is registered with State by querying U.S. Customs' Automated Export System. They should check whether a company has a record of prior export violations by searching the Treasury Enforcement Communications System database. And they should verify that the item cited on the export declaration is eligible for exemption by reading the International Traffic in Arms Regulations or consulting with U.S. Customs' Exodus Command Center. These tasks may be especially difficult to complete at land ports since declarations are presented at the time of crossing. After the terrorist attacks of September 11, 2001, the U.S. Customs Commissioner stated that terrorism prevention had replaced drug interdiction as the agency's top priority. U.S. Customs subsequently redeployed nearly 100 inspectors to increase security along the U.S.-Canadian border. On December 10, 2001, a new program called Project Shield America was launched, focused on preventing international terrorist organizations from obtaining sensitive U.S. technology, weapons, and other equipment that could help carry out attacks on America. Some inspectors we spoke with said that after the September 11 terrorist attacks, U.S. Customs increased coverage along the northern border by realigning inspectors, temporarily employing National Guardsmen, and increasing inspectors' overtime. However, these inspectors are primarily focused on passengers entering and exiting the country, rather than inspections of defense exports. U.S. 
Customs inspectors do not have updated guidance from Customs headquarters that would enable them to conduct inspections effectively. U.S. Customs developed and distributed its primary guidance to inspectors in 1993. This guidance provides an overview of U.S. export laws and regulations, including information on State licensing requirements and a synopsis of the Canadian exemption. U.S. Customs has also recently prepared and distributed a memorandum addressing the May 2001 Canadian exemption requirements. However, the 1993 guidance and the recent memorandum do not discuss inspection techniques for identifying questionable exports. A draft update prepared in 1999 provides some inspection guidance, but it has not been finalized or distributed to inspectors. Some inspectors said that this draft could be useful but contained insufficient information for inspecting shipments at land ports. U.S. Customs headquarters and Justice Department officials told us that it is difficult to investigate and prosecute violations of export control laws. In particular, prosecution of export violations under the exemption is difficult because it is hard to obtain evidence of criminal intent, especially since the government does not always have the documents, such as the Shipper’s Export Declaration, to demonstrate the violation of the exemption. Even with the documents, some U.S. Customs agents told us that cases involving the Canadian exemption normally involved undercover operations to obtain evidence of criminal intent, and these cases often took a long time to complete. The United States and Canada have had a long history of exporting items under a licensing exemption, and both countries have said the exemption is beneficial for facilitating defense trade and advancing mutual defense. However, when the U.S. 
government found that some exports under the Canadian exemption were being diverted to countries of concern, the United States and Canada had to come to the negotiating table and reach an agreement that balanced compatibility of their export control systems with national sovereignty over export control laws and regulations. Based on the experience with the Canadian exemption, the United States will likely need to address three areas when negotiating and executing similar exemptions with other countries. First, upfront agreement is needed on such issues as what items are to be controlled and who can have access to these controlled items. Second, the U.S. government needs to monitor agreements to assess their effectiveness and ensure that unanticipated problems have not arisen. Third, enforcement mechanisms need to be in place to monitor exporters’ compliance with the exemption and enable prosecution of violators. First, countries need upfront agreement on a number of key issues. Agreement on what defense items to control. For example, Canada did not control the same defense items that the U.S. government controlled. These included radiation-hardened microelectronic circuits and nuclear weapons design and test equipment, which could be exported from Canada without a Canadian license. Countries need to have the same starting point for controlling items so that enforcement efforts can be concentrated on the same items. Agreement on what types of items, including technical data and defense services, could be exported under the exemption. Items excluded from the exemption would require licenses. For example, during the Canadian exemption negotiations, discussions centered on whether Missile Technology Control Regime items could be exported under the exemption or required a license. 
Agreement on who can have access to controlled articles, technical data, and services, and whether items exported under the exemption can be sent to dual nationals and temporary workers and still comply with the laws of both countries. When negotiating the Canadian exemption, this discussion centered on whether dual nationals and temporary workers could have access to U.S.-controlled items and what type of system, such as an exporter registration system, needed to be established to identify who has access to controlled items. Resolving conflicts between the export regulations and legal requirements of each country. For example, U.S. law requires U.S. government approval before controlled items can be re-transferred within a country or re-exported to another country. U.S. government officials said that unauthorized re-exports were the major reason for the negotiations with Canada. The applicability of U.S. export control law to U.S. defense items that are incorporated into products made in another country. In the U.S.-Canadian negotiations, discussions centered on how far-reaching U.S. export control requirements are once U.S. items are incorporated in foreign products and then re-exported. Second, the Canadian exemption experience shows that once agreements have been reached, the U.S. government needs to periodically evaluate the exemption to assess the effectiveness of agreed-upon measures and ensure that unanticipated problems do not arise. The U.S. and Canadian governments spent over 2 years negotiating a new exemption and are now working on implementation issues. For example, under new provisions, the Canadian government established a registration system to reduce the risk of transfer to unauthorized individuals and facilitate the Canadian defense industry’s access to U.S.-controlled items. Questions remain, however, about how it will be implemented and who needs to be registered. U.S. 
government officials said that verification of the registrant is key for compliance and enforcement activities. Finally, enforcement mechanisms need to be in place to ensure export compliance with the exemption. As discussed earlier, U.S. Customs inspectors are not always assured that exporters are submitting required export documentation or that the documentation is complete and accurate, which limits their enforcement efforts. The Department of Justice, in a letter to State, echoed this concern regarding negotiations for additional exemptions and also stated that foreign law enforcement cooperation is needed to provide evidence for successful prosecutions. Resource constraints also create challenges for the law enforcement community. In the end, establishing criteria and lessons learned from current experiences would assist in evaluating whether ongoing and future negotiations are successful or whether additional issues need to be addressed. State officials acknowledged that there are lessons to be drawn from the Canadian experience. The Canadian exemption relies on exporters to comply voluntarily with export regulations and to disclose when they have not followed those regulations. As such, a system of effective checks and balances is needed to maximize the U.S. government’s assurance that defense items are being appropriately safeguarded. This includes making sure that exporters have sufficient guidance to enable them to make the right decisions and that exporters, in turn, provide required information to the U.S. government for oversight and enforcement efforts. It also includes making sure that enforcement mechanisms work as effectively as possible. Extending exemptions to other countries may aggravate problems if the U.S. government does not learn from its experiences. New exemptions may increase the risk that exporters will misinterpret the regulations and create additional opportunities for inconsistent application of the exemption. 
In addition, broadening the exemption could further strain enforcement efforts at an already overburdened law enforcement agency. Accordingly, the U.S. government can benefit from the lessons learned from the U.S.-Canadian negotiations when extending similar exemptions to other countries. To enhance the exemption process, we recommend that the secretary of state direct the Office of Defense Trade Controls to review guidance and licensing officer training to improve clarity and ensure consistent application of the exemption. The State Department should also direct the Office of Defense Trade Controls to provide this guidance to the U.S. Customs Service for dissemination to field inspectors and agents so that consistent information about the exemption is provided to exporters. To strengthen enforcement activities, we recommend that the commissioner of the U.S. Customs Service assess the threat of illegal defense exports at all ports along the northern border and evaluate whether reallocation of its inspectors, additional training, or other actions are warranted to augment the capability of inspectors to enforce export regulations. We also recommend that U.S. Customs update, finalize, and disseminate its guidance on defense export inspection requirements to all inspectors. To facilitate future country exemption negotiations, we recommend that the secretary of state work with the Department of Justice and the U.S. Customs Service to assess lessons learned from experience with the Canadian exemption and ensure that these are incorporated in any future agreements. In written comments on a draft of this report, State generally concurred with our assessment that a large part of the compliance and enforcement process under the Canadian exemption relies on the actions of the exporters. In response to our recommendation on guidance and training, State said it will continue its ongoing update of its exporter guidance and its training programs. 
State provided a number of examples of the types of guidance and training it plans to continue to provide exporters on the Canadian exemption. As part of its update efforts, we believe State should still assess whether its guidance and training are clear and commonly understood by those who need to use them. In concurring with our recommendation on assessing lessons learned, State said it will continue to work closely with U.S. law enforcement agencies to assess lessons from the Canadian exemption to facilitate future country exemption negotiations. However, State did not identify the specific steps it would take to ensure that lessons are actually shared and that knowledge gained will be acted upon in the future. As we stated in our report, enforcement and compliance problems could be exacerbated if similar exemptions are extended to other countries without full consideration of lessons learned under the Canadian exemption. State’s comments are reprinted in appendix IV, along with our evaluation of them. In its written comments, Customs concurred with our recommendations to (1) assess the threat of illegal exports along the northern border and evaluate whether reallocation of resources and other actions are warranted and (2) update, finalize, and disseminate guidance on defense export inspection requirements. Customs said it will complete these actions no later than December 31, 2002. Customs stated that it conducts yearly threat assessments for the entire country and provides training on various issues, but such activities require a commitment of funds. In addition, Customs stated that we did not address the Automated Export System, which it said has assisted the agency in enforcement efforts. Finally, Customs indicated that a lack of manpower and funding for enforcement is problematic for enforcing the Canadian exemption or other defense exports. Customs added that new exemptions for other countries will be increasingly difficult to enforce effectively. 
We did not include a detailed discussion of the Automated Export System because regulations requiring mandatory filing of export declarations through this system had not been finalized. At the time of our review, most inspectors we contacted were not using the automated system for the majority of enforcement functions related to the Canadian exemption. Customs’ comments are reprinted in appendix V. We collected and reviewed selected defense cooperation agreements establishing the special defense relationship between the United States and Canada, developed a regulatory history of the Canadian exemption, and prepared a comparative analysis of the various changes to the Canadian exemption since inception. We also discussed the history and objectives of the exemption with officials at State, the Department of Defense, U.S. and Canadian industry associations, and the Canadian government. To ascertain how exporters use the exemption, we reviewed the Arms Export Control Act and International Traffic in Arms Regulations to understand the rules governing the U.S.-Canadian exemption process. Because there is no centralized database identifying exporters that use the Canadian exemption, we analyzed State’s licensing and registration data for Canada and obtained recommendations from agency officials, industry associations, and others to develop a list of companies that export to Canada. We then selected 12 companies that used the exemption and conducted structured interviews regarding their process and the criteria for exporting under the exemption. These companies represented various small, medium, and large exporters and freight forwarders. We also corroborated information with other U.S. companies and two Canadian industry associations that hosted roundtable discussions for us with 10 Canadian companies. To determine how U.S. government mechanisms for ensuring compliance with export law and regulations operate, we interviewed State and U.S. 
Customs officials to obtain explanations about their mechanisms. We reviewed State regulations and briefing materials related to the Canadian exemption and U.S. Customs’ draft handbook, standard operating procedures, and training materials. We visited four U.S. Customs ports to observe the inspection process and reviewed seizure case files to determine the nature of noncompliance with the exemption, and another U.S. Customs port to discuss enforcement issues with officials in the Office of Special Agents in Charge of Investigations. We also interviewed officials at U.S. Customs headquarters in the Office of Field Operations and Special Investigations’ Exodus Command Center. We analyzed U.S. Customs’ enforcement cases to determine the nature of the noncompliance that led to the change in the April 1999 version of the exemption, along with disclosures of noncompliance with the exemption that exporters submitted to State. We also discussed enforcement challenges surrounding the exemption with Justice Department officials. To develop observations about future exemption proposals for other countries, we reviewed issues covered in the Canadian exemption negotiations, prior GAO reports on defense trade and export controls, and other documents related to efforts to obtain similar exemptions with other countries. We also asked senior State officials about lessons learned in the Canadian negotiations that may be pertinent for ongoing or future negotiations of Canadian-like exemptions with other countries. We requested information and documentation from State related to a number of areas, including the history of the exemption, changes in its scope and reasons for such changes, and issues covered during negotiations that resulted in the May 2001 Canadian exemption. As discussed with your staff, we experienced significant delays in obtaining documents from State. 
For example, the department took approximately 6 months to provide us with an initial set of 37 documents and an additional 2 months to provide the remaining information that we requested. These delays caused numerous follow-ups with State officials, needlessly consuming time for both State officials and us. More than 90 telephone contacts or e-mails alone pertained to the status of our document request. The delays and lack of State cooperation extended the amount of time needed to respond to this request. We plan to address these issues in a follow-up letter to the secretary of state. We performed our work between October 2000 and February 2002 in accordance with generally accepted government auditing standards. We will send copies of this report to the chairmen and ranking minority members of the Senate Committee on Foreign Relations, the House Committee on International Relations, and the House Committee on Armed Services. We will also send copies to the secretaries of state, defense, treasury, and justice; the commissioner, U.S. Customs Service; and the director, Office of Management and Budget. This report will also be made available on GAO’s home page at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. Others making key contributions to this report are listed in appendix VI. The United States and Canada have demonstrated their mutual cooperation by entering into more than 2,500 agreements and arrangements over the years. The following are selected defense and economic agreements since 1940. In April 1999, State revised its regulations to limit the scope of the Canadian exemption after concluding that some exporters misunderstood the exemption and that items were being improperly exported to Canada and re-exported from Canada to unauthorized destinations. 
State’s concerns were supported by a summary of nineteen criminal investigations and seizure cases it identified as related to the Canadian exemption. These cases are summarized below: 1. A major U.S. defense company established a manufacturing facility in Canada and exported defense components, technical data, and technical manuals to this facility under the Canadian exemption. The facility assembled components and prepared complete communication systems for export to Pakistan and provided training for the Pakistani army, without obtaining State-required approval for the exports. State had previously denied the export of such systems to Pakistan because of Congressional prohibitions on such transfers. 2. A Canadian company attempted to sell 35 OH-58 U.S.-origin helicopters to undercover agents posing as brokers for the Iraqi government. These helicopters were to be equipped for air-dispensing chemical weapons. They were seized before being exported from Canada. 3. Fifty-eight M-113 armored vehicles originally sold to the Canadian armed forces were exported without State approval, transferred to Europe, and then to Iran. 4. An Iranian intelligence group established a company in Canada and attempted to acquire U.S. Munitions List controlled klystron tubes, which are specifically used for Hawk missile systems. The U.S. government sought extradition, but was denied. The case was eventually dismissed. 5. A Chinese national established a Canadian company and used the Canadian exemption to acquire a focal plane array long-range infrared camera. The camera was shipped to China from Canada without State approval. The same individual subsequently ordered an additional 400 cameras. As in the first instance, the Chinese national specified that the Canadian exemption could be used. 6. Another Chinese-owned company established in Canada ordered 400 U.S. Munitions List controlled infrared cameras from a U.S. 
company and stated that the Canadian exemption should be used, although this would have been an inappropriate use of the exemption. 7. A U.S. company received an order for infrared equipment from a Chinese entity. The U.S. company informed the Chinese buyer that such equipment was controlled on the U.S. Munitions List and restricted from export to China. Upon learning this, the Chinese buyer suggested that the export could take place through a Canadian company under the Canadian exemption and then be re-exported to China. 8. A shipment of 356 U.S. Munitions List controlled turbine engine vanes to be used for military aircraft was seized prior to export. The shipment was destined for an Iranian national, located in Canada, who planned to divert the vanes to Iran. 9. A Canadian company ordered U.S. military fiber optic gyroscopes, stating that the items were to be used in Canada. A government investigation established that the company’s owner was a Chinese national, and the gyroscopes, to be obtained using the Canadian exemption, were actually destined for China. Arrests were made and the defendants were eventually convicted. 10. U.S. Munitions List controlled F-18 parts were seized in the United States while in transit from Canada to New Zealand. State had not authorized the re-export. 11. U.S. Munitions List controlled electronic countermeasure equipment was intercepted in the United States when a Canadian company attempted to export this equipment to Malaysia without U.S. export approval. 12. A 3-year investigation uncovered an attempt to ship U.S.-origin items to a subsidiary in Canada and then divert these items to Libya. The items were seized, and an indictment led to a plea agreement. 13. Three U.S. rocket warheads were seized while being shipped from Belgium to Canada. The exporter claimed the Canadian exemption on the export documents, but the exemption did not apply because the shipment was in transit. 14. 
A Canadian company shipped military vehicles to an Army facility in the United States to test a new classified communications system. After testing, the vehicles were to return to Canada. A Canadian company, rather than the Canadian government, was handling the temporary import of the vehicles. The Canadian company did not seek or obtain a U.S. export license for moving the vehicles back and forth across the border. It was also learned that this Canadian company had shipped communications equipment, which was eventually intercepted and then seized. 15. U.S. Munitions List controlled gas grenades, projectile guns, and projectiles were seized at the border during an attempt to ship them to Canada while claiming the exemption, rather than under a State-approved license. 16. A shipment of U.S. Munitions List controlled computers and related items was intercepted before being exported to the Sudan. The shipment originated in Canada and was seized when transiting through the United States. 17. U.S.-origin armored vehicle spare parts were intercepted when they were shipped from Canada to the Middle East. The shipment was seized when transiting through the United States without appropriate U.S. export authority. 18. U.S. components for a mobile radar system had originally been exported to Canada under the exemption. The radar was then to be exported to Taiwan under a Canadian license. Since the radar was of U.S. origin, State needed to approve the export to Taiwan. However, State approval was not obtained. 19. A U.S.-origin gas turbine engine had been exported to Sweden and returned to the United States for repair. The engine was then sent to Canada under the Canadian exemption for the actual repair work, although the use of the exemption was inappropriate in this case. The engine was seized on its return to Sweden through the United States. The Canadian regulatory exemption is complex and has changed substantially in recent years. 
The following table highlights changes regarding what is or is not covered under the exemption and reporting or record keeping requirements associated with the exemption. The following are GAO’s comments on the Department of State’s letter dated March 20, 2002. 1. We clarified text to identify when inconsistencies occurred for the export of technical data under the exemption for offshore procurement activities. We added an example to our report showing that some exporters are unclear about when to export technical data and defense services under the May 2001 revision to the Canadian exemption. 2. State indicated that it did not discourage exporters from applying for licenses when the exemption can be used. Such a practice results in State’s already scarce resources having to process additional licenses. In addition, some exporters may be at a competitive disadvantage because they are applying for a license when others may be using the exemption when exporting the same item. 3. State said that the advice it provides to Customs through the referral process does not represent a formal State determination. According to Customs guidance and a Customs headquarters official, Customs considers State’s input a formal determination, not advice. A decision from State is critical because it may result in a seizure of the export. 4. As State noted, responses to referrals need to be made quickly when items are detained at the port. State further indicated that exporters who believe the commodity has been mischaracterized or not fully understood are encouraged to pursue a formal determination through the commodity jurisdiction process. However, the commodity jurisdiction process is time-consuming, so determinations made through it would not resolve the need for quick decisions through the referral process. 5. 
We did not include a detailed discussion on the Automated Export System because regulations requiring mandatory filing of export declarations through this system had not been finalized at the time of our review. 6. Based on discussions with State officials, we added information to the report on State’s training and outreach efforts. Marion Gatling, Lillian Slodkowski, Delores Cohen, Ian Ferguson, Bob Swierczek, and John Van Schaik also made significant contributions to this report. Export Controls: Reengineering Business Processes Can Improve Efficiency of State Department License Reviews. GAO-02-203. Washington, D.C.: December 31, 2001. Export Controls: Clarification of Jurisdiction for Missile Technology Items Needed. GAO-02-120. Washington, D.C.: October 9, 2001. Defense Trade: Information on U.S. Weapons Deliveries to the Middle East. GAO-01-1078. Washington, D.C.: September 21, 2001. Export Controls: State and Commerce Department License Review Times Are Similar. GAO-01-528. Washington, D.C.: June 1, 2001. Export Controls: Regulatory Change Needed to Comply with Missile Technology Licensing Requirements. GAO-01-530. Washington, D.C.: May 31, 2001. Foreign Military Sales: Changes Needed to Correct Weaknesses in End-Use Monitoring Program. GAO/NSIAD-00-208. Washington, D.C.: August 24, 2000. Defense Trade: Status of the Department of Defense’s Initiatives on Defense Cooperation. GAO/NSIAD-00-190R. Washington, D.C.: July 19, 2000. Defense Trade: Analysis of Support for Recent Initiatives. GAO/NSIAD-00-191. Washington, D.C.: August 31, 2000. Defense Trade: Identifying Foreign Acquisitions Affecting National Security Can Be Improved. GAO/NSIAD-00-144. Washington, D.C.: June 29, 2000. Foreign Military Sales: Efforts to Improve Administration Hampered by Insufficient Information. GAO/NSIAD-00-37. Washington, D.C.: November 22, 1999. Foreign Military Sales: Review Process for Controlled Missile Technology Needs Improvement. GAO/NSIAD-99-231. 
Washington, D.C.: September 29, 1999. Defense Trade: Department of Defense Savings From Export Sales Are Difficult to Capture. GAO/NSIAD-99-191. Washington, D.C.: September 17, 1999. Defense Trade: Weaknesses Exist in DOD Foreign Subcontract Data. GAO/NSIAD-99-8. Washington, D.C.: November 13, 1998. Defense Trade: Status of the Defense Export Loan Guarantee Program. GAO/NSIAD-99-30. Washington, D.C.: December 21, 1998. Defense Trade: Observations on Issues Concerning Offsets. GAO-01-278T. Washington, D.C.: December 15, 2000. Defense Trade: Data Collection and Coordination on Offsets. GAO-01-83R. Washington, D.C.: October 26, 2000. Defense Trade: U.S. Contractors Employ Diverse Activities to Meet Offset Obligations. GAO/NSIAD-99-35. Washington, D.C.: December 18, 1998. Defense Trade: Contractors Engage in Varied International Alliances. GAO/NSIAD-00-213. Washington, D.C.: September 7, 2000.
To control the export of defense items, the U.S. government requires exporters to obtain a license from the State Department. A license is not required to export many defense items to Canada; this is currently the only country-specific exemption to the licensing requirement. In May 2000, the U.S. government announced the Defense Trade Security Initiative, which included a proposal to grant Canadian-like export licensing exemptions to other qualified countries. Since the initiative was announced, the State Department has been negotiating such exemptions with the United Kingdom and Australia. Exporters have been implementing the Canadian exemption inconsistently. Moreover, some exporters are interpreting reporting requirements about the use of the exemption differently. The U.S. government has mechanisms in place to reduce the risk of defense items being inappropriately exported, but these mechanisms have limitations. U.S. Customs officials attributed these enforcement weaknesses to a lack of information and resources, including inspectors to staff ports. In addition, there are competing demands on the agency, which include the prevention of terrorism and the interdiction of illicit drugs, illegal currency, and stolen vehicles. Experience with the Canadian exemption shows that three areas need to be addressed when negotiating and executing license exemptions with other countries. First, there needs to be upfront agreement on such issues as what items are to be controlled, who can have access to controlled items, and how to control these items through each country's respective export laws and regulations. Second, the U.S. government needs to monitor agreements to assess their effectiveness and ensure that unanticipated problems have not arisen. Third, enforcement mechanisms need to be in place to monitor exporters' compliance with the exemption and enable prosecution of violators.
The HUBZone program was established by the HUBZone Act of 1997 to stimulate economic development by providing federal contracting preferences to small businesses operating in economically distressed communities known as HUBZones. The SBA is responsible for administering the program and certifying applicant firms that meet HUBZone program requirements. To be certified, in general, firms must meet the following criteria: 1) the company must be small by SBA size standards; 2) the company’s principal office—where the greatest number of employees perform their work—must be located in a HUBZone; 3) the company must be at least 51 percent owned and controlled by U.S. citizens; and 4) at least 35 percent of the company’s full-time (or full-time equivalent) employees must reside in a HUBZone. As of March 2010, approximately 9,300 firms were listed in the SBA’s Dynamic Small Business database as participating in the HUBZone program. A certified HUBZone firm is eligible for a variety of federal contracting benefits, such as sole source contracts and set-aside contracts. Contracting officers may award a sole source contract to a HUBZone firm if, among other things, the officer does not have a reasonable expectation that two or more qualified HUBZone firms will submit offers and the anticipated award price of the proposed contract, including options, will not exceed $5.5 million for manufacturing contracts or $3.5 million for all other contracts. Once a qualified firm receives a HUBZone contract, the firm is required to spend at least 50 percent of the personnel costs of the contract on its own employees. The company must also represent, as provided in the application, that it will “attempt to maintain” having 35 percent of its employees reside in a HUBZone during the performance of any HUBZone contract it receives. 
The SBA must ensure that both applicant and participant firms meet and maintain eligibility criteria at the time of application and, if they are granted certification, throughout their tenure in the program. During the application process, firms attest to the authenticity of the information that they submit to the SBA regarding their eligibility. Subsequent to certification, SBA regulations require firms to immediately notify the agency if any material changes occur that affect their eligibility, such as changes to the number of employees residing in a HUBZone or the location of the firm’s principal office. Moreover, certified HUBZone firms competing for government contracts must verify in the government’s Online Representations and Certifications Application (ORCA) that there have been “no material changes in ownership and control, principal office, or the percentage of employees living in a HUBZone since it was certified by the SBA.” Firms and individuals who misrepresent their eligibility during the application process or while participating in the program are subject to civil and criminal penalties, decertification from the HUBZone program, or debarment from all federal contracts. The SBA continues to struggle with reducing fraud risks in its HUBZone certification process despite reportedly taking steps to bolster its controls. The agency certified three of our four bogus firms based on fraudulent information, including fabricated explanations and supporting documentation. The SBA lost documentation for our fourth application on multiple occasions, forcing us to abandon that application. Our testing revealed that the SBA does not adequately authenticate self-reported information—especially information regarding whether a firm’s principal office location meets program requirements. 
For example, for our successful firms, we used the addresses of the Alamo, a public storage facility in Florida, and a city hall in Texas as our principal office locations—locations that a simple Internet search could have revealed as ineligible for the program. While ensuring that a HUBZone applicant’s principal office is legitimately located in a HUBZone is a complicated process, the SBA’s failure to verify principal office locations leaves the program vulnerable to firms misrepresenting the locations of their principal offices, and thus to program benefits not reaching economically disadvantaged areas. Figure 1 below shows one of the acceptance letters we received. In contrast to our last test of the HUBZone certification process, the SBA considerably increased the amount of documentation it requested to support each application and its attempts to contact and communicate with the owners we represented in our applications. However, the SBA also increased the amount of time it takes to certify firms and, according to an e-mail that we received from an SBA official and information that the agency posted on its Web site, suspended the use of agency processing time guidelines. The SBA took at least 7 months to process each of the three applications from our bogus companies that it certified. In our previous test, the SBA certified our firms in as little as 2 weeks, with minimal requests for documentary evidence. SBA’s increased processing times failed to prevent our fraudulent firms from being certified. As we indicated in our March 2009 report, the SBA initiated a process of reengineering the HUBZone program in response to our findings and recommendations. Although we did not assess the effectiveness of the actions that the SBA undertook to strengthen its internal controls, we were still able to exploit remaining weaknesses to obtain program certification for our bogus firms. 
Specific details about each of our fraudulent applications are reported below. Fictitious Application 1: We received HUBZone certification about 7 months after submitting this application to the SBA. For the principal office location, we used the address of the Alamo, a National Historic Landmark in Texas. We claimed that both of the firm’s employees were HUBZone residents. Nearly 3 months after submission, we received an e-mail from the SBA requesting a copy of the HUBZone maps that we used to verify the residency of our employees, birth certificates, copies of tax returns for the last 3 years, corporate documents, and a copy of our firm’s rental agreement and a recent utility bill. We fabricated these documents using publicly available materials and software and submitted them to the SBA. The SBA then requested a copy of the firm’s most recent official payroll records and sought clarification of how many employees worked at our firm’s principal office and how many worked off site. We were also required to provide additional payroll records and corresponding banking statements with the line-by-line transactions that supported the payments that we claimed to make to our fictitious employees. After all of the requested information was provided, we were approved for HUBZone certification. Figure 2 provides a timeline highlighting the major interactions that occurred with the SBA during the processing of this application. Fictitious Application 2: The SBA certified this bogus company 14 months after our investigators applied for HUBZone certification. The address we used for our principal office was that of a rental storage unit in Florida. We claimed the firm was a partnership that employed two individuals who both resided in a HUBZone. To substantiate our firm’s principal address, the agency requested that we submit a lease, a recent utility and telephone bill, and a copy of our firm’s business registration. 
To verify the firm’s business activity and ownership, the SBA requested copies of our firm’s federal business income tax returns for the last 3 years, birth certificates of the two owners, and a copy of our firm’s partnership agreement. To verify employee information, the SBA requested copies of each of the HUBZone resident employees’ driver’s licenses or voter registration cards, a copy of our firm’s quarterly unemployment tax filings, and certified copies of the firm’s quarterly payroll. SBA also requested tax information and a copy of our firm’s most recent payroll documents, which we fabricated and provided to the SBA. Several months thereafter, our bogus firm was granted HUBZone certification. Fictitious Application 3: After 7 months of processing, SBA approved this bogus firm for HUBZone participation. The address of this firm’s principal office was a city hall in Texas. We indicated that the firm’s two employees lived in a HUBZone. Several months into processing our application, the SBA requested documentary evidence of the firm’s location, business activity, ownership, and employee information. After the SBA deemed the fabricated payroll information that we submitted insufficient to determine our employee information, the agency put our application on hold until we provided further documentation. We then provided SBA with a sworn statement to support information regarding payroll. SBA requested clarification about the frequency with which our bogus employees worked from the principal office and granted HUBZone certification soon after. Fictitious Application 4: After 4 months of processing, the SBA withdrew this application after we abandoned it. We abandoned this application because the SBA claimed that it did not receive supplementary documentation that we repeatedly provided. Two months after the initial submission of this application, we followed up with the SBA to inquire about its status. 
At the point of inquiry, SBA indicated that our application was being assigned to an analyst for processing. Two months after our inquiry, we received a request for supporting documentation similar to the requests we received for our previous applications. We provided the requested information 3 days after receiving the request. Two weeks later, we followed up to confirm receipt of our documents. The SBA indicated that it did not receive the information that we provided, so we resent the information and requested that the agency confirm receipt. Three weeks later, after failing to receive confirmation of receipt of our documentation, we inquired about the status of our application. Again, the agency told us that it did not receive the documentation and subsequently gave us one day to resubmit it. If not provided, the agency indicated, our application would be withdrawn. We decided to abandon the application, and the SBA withdrew it from the program. As of March 2010, the SBA has reviewed the status of all 29 firms we referred to it from our prior HUBZone investigations. Since our March 2009 report, these firms have received more than $66 million in federal obligations for new contracts. Not all of these obligations are necessarily improper, and some do not relate to HUBZone contracts. Of the 29 firms, 16 were decertified by the SBA, 8 voluntarily withdrew from the HUBZone program, and 5 were found by the agency to be in compliance with program requirements and remain certified. We did not attempt to verify SBA’s work. Although SBA indicated that firms sometimes come in and out of compliance while in the program, we maintain that the firms represented in the cases that the SBA reviewed and determined to meet HUBZone program requirements were out of compliance at the time of our review. 
In addition, we found that five decertified firms continued to market themselves, through their Web sites, as HUBZone certified even after the SBA removed them from the HUBZone program. Tables 1 and 2 below show the results of the SBA’s review of the 29 firms we referred from our July 2008 testimony and March 2009 report. We also found that one firm continued to benefit from another SBA program even though it misrepresented its eligibility for the HUBZone program and was decertified by the SBA. This firm, a construction firm that was a part of our recent investigation into fraud and abuse in the SBA’s 8(a) Business Development Program, also had been 8(a) certified while in the HUBZone program. During that investigation, we found that the firm misrepresented its status as a qualified 8(a) firm because it was being controlled by individuals who did not qualify for the program. Because the SBA did not promptly suspend or debar the firm, this firm was able to receive nearly $600,000 in additional noncompetitive 8(a) contracts since our last report. According to SBA officials, SBA has recently proposed debarment for this firm. We briefed SBA officials on the results of our investigation on June 17, 2010. Regarding our proactive testing, SBA officials indicated that it was unreasonable to expect them to have identified our fictitious firms due to the bogus documentation that we included in our applications. For example, SBA officials stated that the submission of false affidavits would subject an applicant to prosecution. SBA officials also stated that competitors may identify fraudulent firms and likely protest if those firms were awarded a HUBZone contract. While competitors may identify some ineligible firms that were awarded contracts, it is SBA’s responsibility to ensure that only eligible firms participate in the HUBZone program. We suggested that SBA conduct Internet searches on the addresses of applicant firms to help validate principal office locations. 
We also indicated that if SBA had conducted site visits at the addresses of the firms represented in our applications, those applications would have been identified as fraudulent. SBA officials stated that due to resource constraints, they primarily conduct site visits on certified firms that receive large prime HUBZone contracts. Regarding our 29 referred firms, SBA officials stated that debarment has recently been proposed for an additional firm. We suggested that if SBA determines that a HUBZone firm is not eligible for the program, it should consider conducting a review of that firm’s eligibility if that firm is also certified in other SBA programs. SBA agreed with our suggestion. In addition, SBA provided technical comments which we incorporated into our report. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Administrator of the Small Business Administration, interested congressional committees and members, and other interested parties. In addition, this report will also be available at no charge on GAO’s Web site at http://gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Andy O’Connell, Assistant Director; Matthew Valenta, Assistant Director; Lerone Reid, Analyst-In-Charge; Eric Eskew, Agent-In-Charge; Jason Kelly; Barbara Lewis; Jeff McDermott; and Timothy Walker.
The Small Business Administration's (SBA) Historically Underutilized Business Zone (HUBZone) program provides federal contracting assistance to small firms located in economically distressed areas, with the intent of stimulating economic development. In July 2008 and March 2009, GAO reported on substantial vulnerabilities to fraud and abuse in the HUBZone application and monitoring process. GAO also found 10 HUBZone firms in the Washington, D.C., area and 19 firms in four other metropolitan areas in Alabama, California, and Texas that made fraudulent or inaccurate representations to get into or remain in the HUBZone program. Given the Committee's continued concern over fraud and abuse in the HUBZone program, GAO (1) performed additional proactive testing of SBA's HUBZone certification process, and (2) determined whether SBA has taken any actions against the 29 case study firms GAO identified in its prior work. Using publicly available resources to fabricate documents, GAO proactively tested SBA's application process by applying for HUBZone certification for four bogus businesses with fictitious owners and employees. GAO also interviewed SBA officials and reviewed SBA data about the 29 case study firms. GAO did not attempt to project the extent of fraud and abuse in the program nor systematically assess HUBZone program controls. The HUBZone program remains vulnerable to fraud and abuse. Using falsified documents and employee information, GAO obtained HUBZone certification for three bogus firms using the addresses of the Alamo in Texas, a public storage facility in Florida, and a city hall in Texas as principal office locations. A simple Internet search by SBA could have revealed these as phony applications. 
While the agency has required more documentation in its application process since GAO's July 2008 report, GAO's testing shows that SBA does not adequately authenticate self-reported information and, for these cases, did not perform site visits to validate the addresses. Further, the changes have significantly increased the time it takes SBA to process applications. Specifically, SBA took 7 or more months to process each of the bogus applications--at least 6 months longer than for GAO's previous investigations. SBA continually lost documentation for GAO's fourth application, and eventually withdrew it after GAO declined to resubmit the same materials a fourth time. On its Web site, SBA reported that applicants are experiencing delays during the application process. SBA has taken some action on most of the 29 firms that GAO previously reported did not meet HUBZone program requirements. The SBA decertified 16 firms from the HUBZone program, and another 8 firms voluntarily withdrew. While GAO maintains all 29 firms did not meet requirements at the time of its review, SBA stated that the other 5 firms were in compliance at the time of its own review and so remain certified. Since GAO's March 2009 report, 17 of the 29 companies have received more than $66 million in federal obligations for new contracts. GAO recently reported that one firm has also defrauded the SBA 8(a) program. Because the SBA did not promptly debar the firm from federal contracts, it was able to fraudulently receive an additional $600,000 in noncompetitive 8(a) federal contracts since GAO's last report. SBA recently proposed debarring this firm. Figure: National Historic Landmark Address (The Alamo) Used by GAO as Principal Office
Modern agricultural biotechnology refers to various scientific techniques, most notably genetic engineering, used to modify plants, animals, or microorganisms by introducing into their genetic makeup genes for specific desired traits, including genes from unrelated species. For centuries people have crossbred related plants or animal species to develop useful new varieties or hybrids with desirable traits, such as better taste or increased productivity. Traditional crossbreeding, however, can be very time-consuming because it may require breeding several generations to obtain a desired trait and breed out numerous unwanted characteristics. Genetic engineering techniques allow for faster development of new crop or livestock varieties, since the genes for a given trait can be readily introduced into a plant or animal species to produce a new variety incorporating that specific trait. Additionally, genetic engineering increases the range of traits available for developing new varieties by allowing genes from totally unrelated species to be incorporated into a particular plant or animal variety. In the 1970s, scientists learned how to extract a specific gene from a DNA strand and insert this gene into a different organism where it would continue to make the same protein that it did in its original organism. Scientists have applied this technology to bacteria, plants, and animals. For example, as shown in figure 1, scientists produced pest-resistant plants by identifying a gene responsible for pest resistance in an organism, isolating and copying the gene, and then inserting it into the target plant’s DNA. The plant was then tested to determine that the transferred trait (transgene) was inherited in subsequent generations and that the “transgenic” plant grew and functioned as well as the conventional variety. Biotechnology offers a variety of potential benefits and risks. 
It has enhanced food production by making plants less vulnerable to drought, frost, insects, and viruses and by enabling plants to compete more effectively against weeds for soil nutrients. In a few cases, it has also improved the quality and nutrition of foods by altering their composition. Table 1 summarizes the GM foods evaluated by FDA. Table 1 shows that the majority of modifications have been aimed at increasing crop yields for farmers by engineering a food plant to tolerate herbicides or attacks from pests such as insects and viruses (48 out of 62 modifications). Further, only two food plants have been altered to produce modified oil: the soybean and canola plants. According to industry officials, the modified soybean produces healthier oil. They also stated that the canola plant was modified to have a domestic source for laurate cooking oil. Because soybean oil is the most commonly consumed plant oil worldwide, scientists say that the new oil could significantly improve the health of millions of people. For three key crops grown in the United States—corn, soybeans, and cotton—a large number of farmers have chosen to plant GM varieties. In 2001, GM varieties accounted for about 26 percent of the corn, 68 percent of the soybeans, and 69 percent of the cotton planted in the United States. These crops are the source of various ingredients used extensively in many processed foods, such as corn syrup, soybean oil, and cottonseed oil, and they are also major U.S. commodity exports. The United States accounts for about three-quarters of GM food crops planted globally. However, the use of biotechnology has also raised concerns about its potential risks to the environment and people. For example, some people fear that common plant pests could develop resistance to the introduced pesticides in GM crops that were supposed to combat them. 
Further, some fear that crops modified to be tolerant to herbicides could foster the evolution of “super weeds.” Finally, some fear that scientists might unknowingly create or enhance a food allergen or toxin. Therefore, as biotechnology was being developed, U.S. scientists, regulators, and policymakers generally agreed that GM plants should be evaluated carefully before being put into widespread use. As a result, the United States published a Coordinated Framework for Regulation of Biotechnology in 1986. This framework outlined the regulatory approach for reviewing GM plants, including relevant laws, regulations, and definitions of GM organisms. Responsibility for implementing the coordinated framework fell primarily to three agencies: USDA, the Environmental Protection Agency (EPA), and FDA. Within USDA, the Animal and Plant Health Inspection Service (APHIS) bears the main responsibility for assessing the environmental safety of GM crops. The primary focus of APHIS’ review is to determine whether or not a plant produced through biotechnology has the potential to harm natural habitats or agriculture. Developers can petition APHIS to exempt a GM plant from regulation once sufficient and appropriate data have been collected regarding the potential environmental impact of a GM plant. To safeguard the environment and human health, EPA is responsible for regulating genetic modifications in plants that protect them from insects, bacteria, and viruses. These protectants are subject to the agency’s regulations on the sale, distribution, and use of pesticides. EPA must review and grant a permit for field-testing plants with such protectants on more than 10 acres of land. Prior to commercialization of a GM plant with such a protectant, EPA reviews the application for approval of the protectant, solicits public comments, and may seek the counsel of external scientific experts. FDA has primary authority for the safety of most of the food supply. 
The Federal Food, Drug, and Cosmetic Act establishes the standard for food safety: food must be in an unadulterated condition. FDA established its basic policy regarding the review of GM foods in its 1992 Policy on Foods Derived from New Plant Varieties. According to this policy, FDA relies on companies developing GM foods to voluntarily notify the agency before marketing the foods. Notification leads to a two-part consultation process between the agency and the company that initially involves discussions of relevant safety issues and subsequently the company’s submission of a safety assessment report containing test data on the food in question. At the end of the consultation, FDA evaluates the data and may send a letter to the company stating that the agency has no further questions, indicating in effect that it sees no reason to prevent the company from marketing the GM food. In 1997, FDA supplemented its 1992 Policy with the current Guidance on Consultation Procedures, clarifying procedures for the initial and final consultations. In January 2001, FDA issued a proposed rule in the Federal Register that provides further information on these procedures and, more importantly, would require pre-market notification by companies. Among the reasons that FDA cited for this change are concerns expressed by consumers and public interest groups about the limited transparency and voluntary nature of the current process. FDA also pointed to the growing power of biotechnology to create potentially more complex safety issues that could require more stringent regulatory evaluations. FDA tentatively expects to finalize this rule as early as fiscal year 2003. All foods, including those from GM plants, pose the same types of inherent risks to human health: they can cause allergic or toxic reactions, or they can block the absorption of nutrients. 
Although some foods from GM plants have contained allergens, toxins, and antinutrients, scientists agree that the levels of these compounds have been comparable to those found in the foods’ conventional counterparts. To reach such a finding, each GM food is evaluated using a regimen of tests. This regimen begins with tests on the source of the gene being transferred, proceeds to tests examining the similarity of the GM food to conventional varieties with known allergens, toxins, and antinutrients, and may include tests on the safety of the modified protein from the GM food in simulated digestive fluids. At every phase, test results are compared to the risk levels found in the food’s conventional counterpart. If the risk levels are within the same range as those for the conventional food, the GM food is considered as safe as its conventional counterpart. Despite the limitations of individual tests, several experts agree that this regimen of tests has been adequate for ensuring the safety of GM foods. According to reports from the Organization for Economic Cooperation and Development, the Codex Alimentarius, and FDA, foods from GM plants pose three types of risk to human health: they can potentially contain allergens, toxins, or antinutrients. These risks are not unique to GM foods. People have consumed foods containing allergens, toxins, and antinutrients throughout human history. The small percentage of the population with food allergies (1-2 percent of adults and 6-8 percent of children) tries to prevent allergic reactions by avoiding offending foods. Additionally, people commonly consume toxic substances in foods, but they usually do so at levels that are considered safe. People also frequently consume foods containing antinutrients, such as certain proteins that inhibit the digestion of nutrients in the intestinal tract, but common food preparation techniques, such as cooking, break down the antinutrients. 
Moreover, consumption of a varied diet, in which a person is exposed to multiple nutrient sources, mitigates the risk of malnutrition from antinutrients, according to FDA officials and various academicians. Because conventional foods contain allergens, toxins, and antinutrients, scientists recognize that food cannot be guaranteed to pose zero risk. According to industry officials, the primary human health concern with the genetic modification of food is the potential for the unintentional introduction of a new allergen, an enhanced toxin, or an enhanced antinutrient into an otherwise safe food. For this reason, developers evaluate GM foods to determine if they are as safe as their conventional counterparts. An allergic reaction is an abnormal response of the body’s immune system to an otherwise safe food. Some reactions are life threatening, such as anaphylactic shock. To avoid introducing or enhancing an allergen in an otherwise safe food, the biotech food industry evaluates GM foods to determine whether they are “as safe as” their natural counterparts. For example, in 1996 FDA reviewed the safety assessment for a GM soybean plant that can produce healthier soybean oil. As part of a standard safety assessment, the GM soybean was evaluated to see if it was as safe as a conventional soybean. Although soybeans are a common food allergen and the GM soybean remained allergenic, the results showed no significant difference between its allergenicity and that of conventional soybeans. Specifically, serums (blood) from individuals allergic to the GM soybean showed the same reactions to conventional soybeans. A toxic reaction in humans is a response to a poisonous substance. Unlike allergic reactions, which affect only a portion of the population, toxic reactions can occur in anyone. Scientists involved in developing a GM food aim to ensure that the level of toxicity in the food does not exceed the level in the food’s conventional counterpart. 
If a GM food has toxic components outside the natural range of its conventional counterpart, the GM food is not acceptable. To date, GM foods have proven to be no different from their conventional counterparts with respect to toxicity. In fact, in some cases there is more confidence in the safety of GM foods because naturally occurring toxins that are disregarded in conventional foods are measured in the pre-market safety assessments of GM foods. For example, a naturally occurring toxin in tomatoes, known as tomatine, was largely ignored until a company in the early 1990s developed a GM tomato. FDA and the company considered it important to measure potential changes in tomatine. Through an analysis of conventional tomatoes, they showed that the levels of tomatine, as well as other similar toxins in the GM tomato, were within the range of its conventional counterpart. Antinutrients are naturally occurring compounds that interfere with absorption of important nutrients in digestion. If a GM food contains antinutrients, scientists measure the levels and compare them to the range of levels in the food’s conventional counterpart. If the levels are similar, scientists usually conclude that the GM food is as safe as its conventional counterpart. For example, in 1995 a company submitted to FDA a safety assessment for GM canola. The genetic modification altered the fatty acid composition of canola oil. To minimize the possibility that an unintended antinutrient effect had rendered the oil unsafe, the company compared the antinutrient composition of its product to that of conventional canola. The company found that the level of antinutrients in its canola did not exceed the levels in conventional canola. To ensure that GM foods do not have decreased nutritional value, scientists also measure the nutrient composition, or “nutrition profile,” of these foods. The nutrient profile depends on the food, but it often includes amino acids, oils, fatty acids, and vitamins. 
In the example previously discussed, the company also presented data on the nutrient profile of the GM canola and concluded that the significant nutrients were within the range of those in conventional canola. Companies that may wish to submit new GM foods for FDA evaluation perform a regimen of tests to obtain safety data on these foods. FDA’s 1992 policy on safety assessments of GM foods describes the data the agency recommends it receive to evaluate these foods. Figure 2 provides an example of the regimen of tests. This regimen usually includes an analysis of the source of the transferred genetic material, specifically whether the source of the transferred gene has a history of causing allergic or toxic reactions or containing antinutrients; the degree of similarity between the amino acid sequences in the newly introduced proteins of the GM food and the amino acid sequences in known allergens, toxins, and antinutrients; data on in vitro digestibility (i.e., how readily the proteins break down in simulated digestive fluids); the comparative severity of individual allergic reactions to the GM product and its conventional counterpart as measured through blood (serum) screening—when the conventional counterpart is known to elicit allergic reactions or allergenicity concerns remain; and data on any changes in nutrient substances, such as vitamins, proteins, fats, fiber, starches, sugars, or minerals due to genetic modification. Occasionally, the regimen of tests also includes animal studies for toxicity. As shown in figure 2, the tests provide evidence at key decision points to direct which tests are subsequently performed. Tests on the source of the newly expressed protein, amino acid sequence similarity, and digestibility are typical for both allergenicity and toxicity assessments, while serum screening is used only for allergenicity assessment. 
Also, while the complete regimen is not necessary for every GM food safety assessment, companies often perform extra tests in the regimen to corroborate the results of previous tests. Using allergenicity as an example, if a company transfers a gene from a source that is not an allergen, the company evaluates the amino acid sequence of the GM protein. If the GM protein has an amino acid sequence similar to that of known allergens, the company initiates further, more specific allergenicity testing. The company would undertake in vitro digestibility tests to see if the GM protein was broken down in simulated digestive fluids. If there were any concerns about the speed with which the GM protein was broken down, the company would use serum-screening tests to support or refute the results of the digestibility tests when serums are available. If the serum screening yields results showing that the GM protein does not react with antibodies in serum, then the company concludes the GM protein does not raise allergenicity concerns. The results from this regimen of tests provide the weight of evidence necessary to determine the safety of a GM food. Examining the source of the transferred genetic material is the starting point in the regimen of tests for safety assessments. According to a scientist from a biotechnology company, two principles of allergenicity assessment underlying the regimen of tests contribute to adequate safety assessments: scientists (1) avoid transferring known allergenic proteins and (2) assume all genes transferred from allergenic sources create new food allergies until proven otherwise. If the source contains a common allergen or toxin, industry scientists must prove that the allergenic or toxic components have not been transferred. However, as a practical matter, biotechnology companies repeatedly state that if the conventional food is considered a major food allergen, they will not transfer genes from that source. 
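The sequence of decision points described above (source check, then sequence similarity, then digestibility, then serum screening) can be sketched as a simple decision procedure. This is only an illustration of the flow as the report describes it; the function and field names are hypothetical and do not correspond to any agency's or company's actual tooling:

```python
# Illustrative sketch of the allergenicity decision flow described in
# the text. All record fields are hypothetical.

def allergenicity_concern(protein: dict) -> bool:
    """Return True if allergenicity concerns remain after the regimen
    of tests; False if the tests resolve the concern."""
    # Known allergenic proteins are simply not transferred.
    if protein["known_allergen"]:
        raise ValueError("known allergenic proteins are not transferred")
    # Gene from a non-allergenic source: compare the amino acid
    # sequence of the GM protein to those of known allergens.
    if not protein["sequence_similar_to_allergen"]:
        return False  # no similarity, so no concern
    # Similar sequence: test breakdown in simulated digestive fluids.
    if protein["rapidly_digested"]:
        return False  # safe dietary proteins break down quickly
    # Slow breakdown: fall back on serum screening when serums exist.
    if protein["serums_available"]:
        # No antibody reaction in allergic individuals' serum
        # resolves the concern; a reaction leaves it standing.
        return protein["serum_antibody_reaction"]
    # Without suitable serums, the concern remains unresolved.
    return True

candidate = {
    "known_allergen": False,
    "sequence_similar_to_allergen": True,
    "rapidly_digested": False,
    "serums_available": True,
    "serum_antibody_reaction": False,
}
print(allergenicity_concern(candidate))  # serum screening resolves it -> False
```

As the surrounding text notes, in practice companies often run later tests anyway to corroborate earlier results, so the real process is a weight-of-evidence judgment rather than a strict early-exit chain like this sketch.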
Accordingly, experts from FDA and the biotechnology industry agree that the probability of introducing a new allergen, enhancing a toxin, or enhancing an antinutrient is very small. The next step involves a comparison between the amino acid sequences of the transferred proteins of the GM food plant and those of known allergens, toxins, or antinutrients. If scientists detect an amino acid sequence in a GM food identical or similar to one in an allergen, toxin, or antinutrient, then there is a likelihood that the GM food poses a health risk. Overall, sequence similarity tests are very useful in eliminating areas of concern and revealing areas for further evaluation. In vitro digestibility tests are a primary component of all GM food safety assessments. These tests analyze the breakdown of a GM protein in simulated human digestive or gastric fluids. The quick breakdown of a GM protein in these fluids indicates a very high likelihood that the protein is not allergenic or toxic. Safe dietary proteins are almost always rapidly digested, while allergens and toxins are not. If a gene raises allergenicity concerns, a company can include serum screening tests in its safety assessment of a GM food. Serum screening is used only for allergenicity assessment. Serum screening involves evaluating the reactivity of antibodies in the blood of individuals with known allergies to the plant that was the source of the transferred gene. Antibody reactions suggest the presence of an allergenic protein. Serum screening tests are valuable because they can expose allergens whose presence was only suggested in amino acid sequence similarity tests. Since there are neither abundant, appropriate stored serums nor many suitable human test subjects, these tests cannot always be used. 
Scientists also create a nutritional and compositional profile of the GM food to assess whether any unexpected changes in nutrients, vitamins, proteins, fibers, starches, sugars, minerals, or fats have occurred as a result of the genetic modification. While changes in these substances do not pose a risk of allergenicity, toxicity, or antinutrient effects to human health, creating a nutritional and compositional profile further ensures that the GM food is comparable to its conventional counterpart. Biotechnology companies occasionally use animal studies to confirm the results of prior toxicity tests. For the most part, these studies have involved feeding extraordinarily high doses of the modified protein from a GM food to mice. The doses of the modified protein are often hundreds to thousands of times higher than the likely dose from human diets. Scientists perform these studies to determine if there are any toxic concerns from the GM food. Animal studies also have the potential to predict allergenicity in humans, although scientists have not yet identified an animal that suffers from allergic reactions the same way that humans do. The brown Norway rat has provided the closest approximation to human allergic reactions to several major food allergens. However, animal models—as predictors of allergenic responses in humans—are not scientifically accepted at this time. Biotechnology experts whom we contacted from a consumer group, FDA, academic institutions, research institutions, the European Union, and biotechnology companies said that the current regimen of tests has been adequate for assessing the safety of GM foods. 
All but one expert considered the regimen of tests to be “good” or “very good” for ensuring the safety of GM foods for public consumption, and the remaining expert viewed the tests as “fair.” While the experts noted that individual tests have limitations, most experts agreed that results from the regimen of tests provide the weight of evidence needed for scientists to make an accurate assessment of risk. A distinction made by an academician and regulatory officials is that the available tests do not guarantee absolute safety of GM foods, but comparable safety. There is no assurance that even conventional foods are completely safe, since some people suffer from allergic reactions, and conventional foods can contain toxins and antinutrients. Because they have been consumed for many years, though, conventional foods are used as the standard for comparison in assessing the safety of GM foods, and experts note that the available tests are capable of making this comparison. While experts agree that the available regimen of tests is adequate for safety assessments, there are limitations to individual tests. For example, there are limitations to the acceptability of amino acid sequence similarity test results, in part because there is not agreement on what level of amino acid similarity indicates a likelihood of allergenicity and, therefore, the need for additional testing. Industry scientists assert that as long as amino acid sequences in a protein are less than 50 percent identical to those in known allergens, the protein should not raise concerns. On the other hand, a scientist associated with a consumer group, as well as a report from the United Nations’ Food and Agriculture Organization, believes a more conservative level, such as less than 35 percent identical, is appropriate. Thus, experts from industry and consumer groups suggest that reaching agreement on this parameter would increase the consistency with which these tests are applied. 
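To make the disputed threshold concrete, here is a minimal sketch of a percent-identity comparison between two aligned sequences. The sequences and helper names are invented for illustration; real assessments use more sophisticated alignment methods over windows of the protein rather than a simple position-by-position count.

```python
def percent_identity(gm_protein: str, allergen: str) -> float:
    """Share of matching positions in two aligned, equal-length sequences."""
    assert len(gm_protein) == len(allergen), "sequences must be aligned"
    matches = sum(1 for a, b in zip(gm_protein, allergen) if a == b)
    return 100.0 * matches / len(gm_protein)

def needs_further_testing(gm_protein: str, allergen: str,
                          threshold_percent: float) -> bool:
    """Flag the protein if identity meets or exceeds the chosen threshold."""
    return percent_identity(gm_protein, allergen) >= threshold_percent

# 8 of 10 aligned positions identical -> 80 percent identity, flagged
# under either the 50 percent or the 35 percent threshold.
print(needs_further_testing("MKTLLVAGGA", "MKTLFVAGSA", threshold_percent=35.0))
```

The disagreement in the text amounts to choosing `threshold_percent`: a borderline protein at, say, 40 percent identity would be flagged under the more conservative 35 percent level but not under the 50 percent level.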
In vitro digestibility tests also have limitations because they can yield inaccurate results when performed under inappropriate parameters, such as improper digestive fluid pH levels. If a GM food protein is tested at a pH level representative of intestine digestion, yet the protein in real life is digested at a different pH level in the stomach, then the results of the test are not valid for reaching conclusions on the GM food’s likely effect in humans. FDA officials note that there is growing acceptance that the proper pH level for digestive stability tests is the pH level of the human stomach. As a result, experts from industry and consumer groups suggest that reaching agreement on the parameters in digestive stability tests—such as proper pH ranges—would help ensure that they are performed properly. Information on acceptable testing procedures (including parameters) is available from a variety of sources. For instance, AOAC International documents standardized tests and test procedures, such as test procedures for examining nutrient levels in a GM food. Other groups, such as the American Oil Chemists’ Society and the American Association of Cereal Chemists, also have information on official tests and test procedures. However, there is no centralized source of information on these procedures. Although FDA maintains a Web site with guidance for consultations, the Web site does not contain information about acceptable testing procedures. According to FDA, it has the necessary controls to ensure it obtains the safety data needed for its GM food evaluations. In examining a selection of submissions, we found that companies adhered to FDA’s recommended procedures for the type of data to be submitted. 
However, biotechnology experts state that the agency’s overall evaluation process could be enhanced by randomly verifying the test data that companies provide and by increasing the transparency of the evaluation process—including more clearly communicating the scientific rationale for the agency’s final decision on GM food safety assessments. FDA believes that making these changes would enhance the public’s confidence in the agency’s evaluation process. According to agency officials, FDA has several management practices that, in aggregate, constitute internal controls. The officials state that these practices effectively ensure FDA obtains the data necessary for evaluating the potential risks of GM foods. These practices include: communicating clearly what safety data are important to FDA’s evaluations of GM food safety, having teams of FDA experts representing diverse disciplines perform the evaluations, and tailoring the level of evaluation to match the degree of each GM food’s novelty. One key indication of the effectiveness of these practices is FDA’s ability to determine when data are inadequate and to specify the additional data important to a complete evaluation. In the cases we examined in which the company’s initial submission of data was insufficient, FDA was able to specify and obtain additional data from the company. For a GM food, the evaluation process, known as a consultation, generally lasts between 18 months and 3 years, according to FDA officials. In what FDA calls the “initial” phase of the consultation, FDA and company officials discuss what safety data will be needed for a GM food submission. In the next or “final” phase, the company prepares a detailed report summarizing this data and submits it to FDA. After receiving and evaluating the report, FDA officials prepare a “memo to file.” This memo is the formal document in which FDA summarizes and evaluates everything the company has submitted. 
Consultation is complete when FDA determines that it has no further questions regarding the safety of the GM food and informs the company of this conclusion in a letter signed by the director of the FDA’s Office of Food Additive Safety. Receiving such a letter is generally helpful to companies in marketing their product. In FDA’s 1992 policy statement and its subsequent 1997 guidance, the agency clearly states what information companies should submit for FDA to assess the safety of GM foods. Specifically, the 1992 statement includes several risk assessment decision trees that provide a step-by-step approach to testing. FDA recommends that companies follow this approach in their assessments of GM foods. Using this approach, companies must show whether any allergens, toxins, or antinutrients have been introduced or enhanced. FDA’s 1997 guidance builds upon the 1992 policy statement by describing in more detail the process, procedures, and time frames pertaining to the initial and final consultations. FDA officials stated that the principles embodied in their 1992 policy statement guided the consultations for the 50 GM foods evaluated so far and that companies have closely adhered to these principles. In examining five submissions, we found that companies adhered closely to the 1992 policy statement. For example, a 1996 submission for a GM soybean shows step-by-step adherence to the allergenicity decision tree established in the 1992 policy statement. Extensive data submitted by the company enabled FDA to conclude that it had no unanswered questions about the safety of the soybean. Later submissions involving an herbicide-tolerant sugar beet and pest-resistant corn also showed a close adherence to the 1992 policy statement. Evaluations of GM food safety submissions must include concurrence from every member of a highly qualified team known as the Biotechnology Evaluation Team. 
The 1997 guidance states that the evaluation teams generally will be composed of a consumer safety officer (who serves as the project manager), molecular biologist, chemist, environmental scientist, toxicologist, and nutritionist. The guidance also states that the evaluation teams may be supplemented with additional expertise on a case-by-case basis. According to agency officials, these experts are qualified to perform what is effectively a peer review of each submission. Consumer safety officers, who generally have doctorates in relevant disciplines, including molecular biology, cell biology, or immunology, chair the teams. According to FDA officials, in addition to their scientific credentials, the consumer safety officers know what is needed for the administrative record for each submission. This knowledge encompasses the laws and regulations, such as the Federal Food, Drug, and Cosmetic Act, as well as specific pertinent procedures, such as FDA’s 1992 policy statement. According to FDA officials, the combination of scientific and administrative expertise makes the consumer safety officers effective leaders of the teams. FDA officials indicated that each member of an evaluation team reviews the entire file for a given GM food submission. These officials viewed this as another strength of the evaluation process. In particular, they stressed that the final evaluation is not a “piecemeal” evaluation in which, for example, the toxicologist receives only the toxicological data to review. Rather, each team member receives and examines all the data that the company has submitted. Further, team members must document in writing the results of all key interactions with a company throughout the course of the evaluation; this documentation is then available for the whole team to evaluate. Lastly, the entire team must concur with the final draft of the memo to file, which is usually prepared by the consumer safety officer. 
In summary, FDA officials told us that the expertise of the Biotechnology Evaluation Team members coupled with the multiple reviews of information enables the team to adequately evaluate safety assessments and determine if and when more data is needed. According to agency officials, FDA’s practice of varying its level of evaluation based on the degree of novelty of the GM food submission allows it to devote resources where they are most needed, thus assuring that Biotechnology Evaluation Teams have time to obtain necessary safety data. FDA’s evaluation of one company’s GM tomato provides an example of a detailed evaluation of a novel submission that went through both the initial and final consultations. Specifically, the Biotechnology Evaluation Team requested extensive detail from the company on the modification of the tomato, which involved the insertion of one gene to delay ripening and another gene to show that this trait was transferred. FDA’s documentation of its evaluation presented background information on these modifications, a point-by-point evaluation of the company’s food safety assessment, and FDA’s conclusion that the tomato was not significantly different from conventional tomatoes. By contrast, FDA officials stated that evaluations of company submissions for GM foods similar to GM foods previously evaluated by the agency (such as a virus-resistant squash and various herbicide-tolerant corns) required fewer agency resources because these submissions skipped the initial consultation and proceeded to the final consultation. In fact, FDA’s 1997 guidance states that a company might skip the initial consultation and go directly to the final consultation by submitting its final report. According to FDA officials, this skipping often occurs when a company has made multiple submissions for similar GM foods involving only minor variations from one case to the next. 
Having once gone through the full consultation process for a specific genetic modification, such a company is familiar with the kinds of safety information that FDA expects and thus can proceed directly to preparing a final report for similar cases. FDA’s documentation of its evaluation of such submissions can be less detailed. According to FDA officials, in cases in which the agency determines that the data submitted by a company are insufficient, the company has always cooperated with FDA by performing additional tests and/or submitting the data needed. FDA officials described three types of situations where they have requested additional data and companies have responded: (1) the absence of a reliable or “validated” method for performing a test; (2) reliance on a prevailing scientific “assumption” that, when tested at FDA’s request, was proven incorrect; and (3) inconsistent or incomplete data in the final reports. The first situation involved the lack of a reliable method for testing tomatine, a naturally occurring toxin in tomatoes. The company that encountered this problem was inexperienced in analytical chemistry, and the laboratory with which it was working did not have an acceptable method. In evaluating the measurements of tomatine submitted by the company, FDA officials found these data unconvincing. As a result, FDA officials suggested that the company find a more appropriate method. In response, the company obtained a suitable method from another laboratory and later provided FDA with new data that the agency found convincing. The second situation is illustrated by FDA’s evaluation of a GM tomato altered to delay ripening. In this submission, the company assumed that only a certain segment of DNA was transferred. FDA asked the company to prove the accuracy of this assumption. Testing by the company then revealed that additional DNA had been transferred. 
This discovery led to more thorough analysis of the genetic modifications, including additional efforts to ensure that the transfer of extra DNA did not cause unintended changes. In the third situation, FDA noted discrepancies in the data in final reports involving GM cotton, rice, and canola and requested the relevant companies to correct the information, which they did. Biotechnology experts state, and FDA agrees, that the agency’s overall evaluation process for assessing the safety of GM foods could be enhanced by verifying the GM food-related test data that companies provide and by increasing the transparency of the evaluation process. Biotechnology experts from consumer groups and academia state that FDA’s evaluation process could be enhanced if the agency validated companies’ test results on proposed GM products by reviewing raw data (e.g., the actual, unverified test results). Further, FDA believes that occasional reviews of the raw data developed by companies would further enhance the credibility of, and public confidence in, the overall safety data that companies submit. In addition, we believe occasional data verification by a federal agency is necessary to (1) identify the risk of the agency’s receiving faulty data from external sources and (2) ensure that no one agent is allowed to control every key aspect of a safety assessment. FDA officials stated that they do not believe it is necessary for the agency to routinely review raw data for two reasons. First, the risk of incurring criminal penalties for deliberately submitting false data to FDA provides a significant degree of deterrence. Second, FDA’s evaluation process constitutes a peer review of the safety data that will generally detect any problems. However, these officials added that an occasional review of raw data, performed on a random basis, would further help ensure the reliability of FDA’s evaluation of these foods, and thus enhance public confidence in the agency’s evaluation process. 
Officials from a major biotech company described three types of GM food safety data developed for each submission and available for FDA’s review: (1) raw data, (2) refinements and comprehensive interpretations of the raw data, and (3) summaries of these interpretations. According to these officials, FDA has reviewed the summaries, and in some instances the comprehensive interpretations, but has not reviewed the raw data. These officials note, and FDA officials concur, that nothing prevents FDA from reviewing these raw data. In general, these raw data are readily available from companies. The company officials also note that EPA has occasionally reviewed raw data in its safety assessments of GM plants regarding their environmental effects. Moreover, FDA officials stated the agency reviews raw data in its safety assessments of new drug applications. Experts from consumer groups and academia have stated that the transparency of the agency’s evaluation process for GM foods could be enhanced if FDA described more clearly the scientific rationale for its safety decisions in its memo to file. FDA agrees. Guidelines issued by the Office of Management and Budget on the quality of information disseminated by federal agencies state that transparency is important in reviews of technical information and that these reviews should be conducted in an open and rigorous manner. Yet critics have stated that FDA’s current memos to file do not adequately communicate the scientific rationale for the decisions. Some consumer groups have pointed out the brevity of some of the memos and described them as “perfunctory” summaries of company data that provide little or no insight into FDA’s evaluation of the data. 
Likewise, the Council for Agricultural Science and Technology, a group of universities and companies established to provide a more scientific basis for analyzing and prioritizing agricultural issues, stated that FDA does not adequately clarify in its memos to file the basis for its decisions on GM food submissions. Our review of memos to file for the 50 GM food products evaluated by FDA as of April 2002 confirms that these memos do not clearly explain the scientific rationale for FDA’s decisions. In response to these concerns, FDA officials note that the memos to file had originally been created for FDA’s internal use rather than as public documents. Thus, they were not designed to provide detailed rationales of FDA’s decisions on GM food submissions. In addition, FDA officials said that some memos are brief because they record decisions on GM foods that are very similar to previously evaluated GM foods. However, FDA officials acknowledge that FDA could do more to inform the public of the basis for their decisions. For example, FDA could include comments in the memos to file that better reflected the context of the evaluation (for instance, its similarity to previous evaluations), the adequacy of the tests performed by the company, and the level of evaluation provided by FDA. For those memos to file on submissions for GM foods that are similar to GM foods previously evaluated, FDA could make reference to earlier, similar submissions having a more detailed memo to file. Scientists expect future GM foods to include modifications of plant composition that may enhance the nutritional value of these foods but may also increase the difficulty of assessing their safety. While current tests have been adequate for evaluating the small number of relatively simple compositional changes made so far, some scientists believe that new testing technologies under development may be needed to assess the safety of these more complex GM foods. 
Scientists have diverging views on the potential role of these new technologies: some view them as a useful supplement to existing tests, while others view them as a new, more comprehensive way to assess the safety of all changes in GM foods. However, the lack of technical standards for these new technologies and proof of their reliability prevents their current use. Until now, most genetic modifications of plants have been aimed at increasing or protecting crop yield. These modifications have generally focused on the portions of plants, such as cornstalks, that are not consumed by humans. However, many scientists believe that the current wave of yield-related modifications will expand to include a new wave of genetic modifications involving compositional changes in the foods to enhance their nutritional value. For example, “golden” rice is a GM food under development that was modified to contain beta-carotene, a precursor of vitamin A. Golden rice may help to reduce the incidence of blindness in countries where rice is a dietary staple and malnutrition is common. Also under development are compositional changes that will increase the levels of vitamin E in foods. Plants are the primary source of this vitamin, which is believed to have cancer-preventing properties, but plants generally contain it in relatively low concentrations. A gene controlling vitamin E production was transferred recently to a member of the mustard plant family, which subsequently exhibited a nine-fold increase in this vitamin. According to a recent report, incorporation of this gene into major crops such as soybeans, canola, and corn is probably not far in the future. In addition to increasing nutrients in GM foods, scientists are working to reduce the presence of allergens, toxins, and antinutrients. For example, scientists have genetically modified wheat, one of the major allergenic foods, to stimulate a gene that diminishes wheat’s allergenic properties. 
Scientists are also seeking ways to reduce toxic substances, such as alkaloids in potatoes, by inserting genes that block their production. Preliminary findings have indicated that GM potatoes produced fewer of these alkaloids. Likewise, some plants, especially cereals and legumes, are nutritious foods but contain varying amounts of antinutrients. Genetic modifications are being explored to reduce these antinutrients. If adopted, FDA’s proposed rulemaking mandating the testing of all GM foods prior to commercialization would represent a timely response to this new wave of GM foods. For example, the preamble to the rule notes that some of the new ingredients in GM foods will significantly differ from ingredients that have a history of safe use. The rule also notes that products derived from this advanced biotechnology will present more complex safety and regulatory issues than those seen to date. The proposed rule concludes that nontraditional strategies for evaluating food safety will become the norm as the use of biotechnology expands. FDA officials explained that “nontraditional strategies” could include new technologies under development such as those described in the next section. Some scientists believe that testing technologies being developed but not yet widely applied to GM foods may be useful in assessing the safety of compositional changes and detecting unintended effects. In contrast to current tests that examine the human health effects of transferred genes and other relevant components on a highly selective basis, the new technologies will examine essentially all of the components—such as DNA, proteins, and metabolites—in conventional and GM plants simultaneously to detect any differences. 
These new technologies include gene chips, which use thousands of droplets of DNA on glass chips to identify gene sequences and determine the expression level or abundance of the genes; proteomics, which can analyze up to 100,000 proteins simultaneously; and metabolic profiling, which can analyze the 2,000 to 3,000 metabolites in people and 3,000 to 5,000 metabolites in plants. In essence, these new technologies combine huge increases in automated computing power with traditional testing technologies to identify differences between conventional and GM foods in ways that would have been impossible even a few years ago. A university scientist further explained the contrast between the current and new technologies by noting that traditional tests focus on known toxins and nutrients in a “targeted” approach, whereas new technologies use a “non-targeted” approach to increase the chance of detecting unintended effects of genetic modifications such as the creation of a toxin. According to this scientist, the latter approach has particular applicability to second-generation plants with extensive modifications, which may be more likely to have unintended effects. For example, a scientist with a consumer group stated that the new technologies may be useful in detecting unintended effects that traditional tests, such as those for digestibility, are not likely to identify. Other scientists expressed the need for caution and additional information to determine the potential role of these new technologies. Gene chips consist of grids of thousands of droplets of DNA on small glass surfaces. The chip-based DNA can bind with the DNA or RNA being tested to determine which genes are present or are being activated. Used in conjunction with DNA and RNA databases under development at various universities and other research institutions, this testing technique has yielded insights into areas such as the ripening process of tomatoes and its relation to toxins and nutrients. 
The major advantage of gene chips over conventional testing techniques is that they allow small-scale analysis of thousands of genes at the same time in a precise and quantitative manner. According to a university scientist, researchers are determining the extent to which this technology may be effective in assessing GM food safety. Proteomics is a biotechnology technique used to identify many proteins simultaneously in a given organism. Using chemical analyses and computers, proteomics goes beyond plant studies focusing on DNA and RNA, which do not provide information on the actual creation of the proteins. Proteomics has been introduced successfully in medical disciplines such as oncology, where it has helped to identify proteins associated with cancer, but it has not yet been used to evaluate the safety of GM foods for two reasons. First, there are a large number of proteins that need to be analyzed in any given plant. Second, the function of proteins in a plant may change depending on their interaction with different cells and tissues. According to a university scientist, researchers are working to expedite the analysis of proteins in plants. Metabolic profiling uses chemical analyses and computers to obtain a simultaneous, detailed look at all of the small molecules (metabolites) in a given GM plant to determine the extent to which these molecules have changed in comparison to a conventional plant, if at all. According to scientists at one company involved in developing metabolic profiling, this technique can determine whether a specific, intended change in a small molecule has been achieved. It can also identify any unintended changes in other small molecules—changes such as increased alkaloids, which are a major source of toxicity in plants. If the profiling finds no unintended changes in these molecules, then it offers a reasonable certainty that the genetic modification has not led to any changes with potentially adverse health consequences. 
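The kind of comparison metabolic profiling performs can be illustrated with a toy sketch: flag any metabolite whose level in the GM plant departs from the conventional level by more than some fold tolerance. The metabolite names, the levels, and the two-fold cutoff are all invented for this example.

```python
def profile_differences(conventional: dict, gm: dict,
                        fold_tolerance: float = 2.0) -> dict:
    """Return metabolites whose GM/conventional ratio exceeds the tolerance."""
    flagged = {}
    for metabolite, conv_level in conventional.items():
        ratio = gm.get(metabolite, 0.0) / conv_level
        if ratio > fold_tolerance or ratio < 1.0 / fold_tolerance:
            flagged[metabolite] = ratio
    return flagged

# Invented levels for three of the thousands of metabolites a real
# profile would cover; only the vitamin E increase stands out.
conventional = {"alkaloid_a": 1.0, "vitamin_e": 0.5, "sugar_x": 10.0}
modified     = {"alkaloid_a": 1.1, "vitamin_e": 4.5, "sugar_x": 9.0}

print(profile_differences(conventional, modified))  # {'vitamin_e': 9.0}
```

In a real assessment the intended change (here, the vitamin E increase) would be confirmed, and any other flagged metabolite, such as an elevated alkaloid, would signal a possible unintended effect requiring evaluation.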
In general, metabolic profiling has not yet been used commercially. However, scientists working with this technique believe that it may play a potentially important role as a safety screening tool for companies developing complex, compositionally altered GM foods in the future. In addition, scientists state that it shows promise in the health care field in assessing the safety of future new drugs. Despite progress in developing and applying gene chips, proteomics, and metabolic profiling, technical limitations currently prevent their use to assess the safety of GM foods. Biotechnology experts told us that internal standards must be developed for the methods and chemicals used in these new technologies and that the reliability of these technologies must be proven. For example, in gene chip testing, experts state that standardization of the thousands of genes represented on the chips is essential to improve the quality of this technology. Further, experts state that the chemical analysis used in proteomics needs to be enhanced to improve its reliability. Beyond these technical challenges, however, lies a more fundamental problem. Because these new technologies are more sensitive, they may identify a flood of differences between conventional and GM food products that existing tests could not detect. Not all of these differences will stem from genetic modification. Some of the differences will stem from the tremendous natural variations in all plants caused by factors such as the maturity of the plants and a wide range of environmental conditions, such as temperature, moisture, amount of daylight, and unique soil conditions that vary by region of the country. For example, there can be a tenfold difference in the level of key compositional elements, such as nutrients, depending on the region in which soybeans are grown. 
Thus, according to a biotechnology company expert, it will be difficult to differentiate naturally occurring changes from the effects of deliberate genetic modifications. Industry and university scientists have expressed strong concerns about the problem of interpreting the potential significance of these differences. They believe that the new technologies will be of limited value unless baseline data on the natural variations of nutrients and other compositional values for each of the major food crops can be developed. However, experts disagree on the difficulty of developing this baseline. Some experts, including those at FDA, assert that developing the baseline will be difficult because of the extreme sensitivity of plants to environmental variations. Other experts, especially those pioneering the new techniques, state that a baseline can definitely be established in the next few years. Some companies have started to respond to the need for baseline information. New developments in technology have begun to provide an encyclopedic database on natural variations in plants and on the variations resulting from deliberate genetic modification. For example, using metabolic profiling, one company has analyzed approximately 150 characteristics, such as the size and rate of growth, of individual plants. The company has also examined about 12,000 genes in one species of plant—a member of the mustard family—and analyzed the consequences of eliminating or stimulating particular genes. About one million mustard plants of this type have been analyzed in this line of research. Even with the development of baseline data and the detection of differences, scientists will still need to evaluate the significance of these differences for human health. Appendix II provides more information regarding advancements in the development of baseline information and the experimental use of metabolic profiling to assess the safety of GM foods. 
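The baseline idea reduces to a simple check: a measured difference matters only if it falls outside the natural range observed across growing conditions. This sketch uses invented numbers; the roughly tenfold soybean variation mentioned above motivates the wide range.

```python
def within_natural_baseline(value: float, baseline_samples: list) -> bool:
    """True if a measured level lies inside the observed natural range."""
    return min(baseline_samples) <= value <= max(baseline_samples)

# Hypothetical nutrient levels from conventional soybeans grown in
# different regions (roughly a tenfold spread, as the text describes).
baseline = [0.8, 2.5, 5.0, 7.9]

print(within_natural_baseline(4.2, baseline))   # True: inside natural variation
print(within_natural_baseline(12.0, baseline))  # False: merits further evaluation
```

A narrow baseline built from a single region would wrongly flag many natural differences, which is why scientists stress gathering samples across maturities and environments before interpreting results from the new technologies.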
Scientists and federal regulatory officials we contacted generally agreed that long-term monitoring of the human health risks of GM foods through epidemiological studies is not necessary because there is no scientific evidence suggesting any long-term harm from these foods. These scientists and officials also stated that it would be very difficult, if not impossible, to develop a process for monitoring the long-term health risks of GM foods because of the technical challenges in developing such a system. A recent report by the United Nations also expresses skepticism about the feasibility of identifying long-term health effects from GM foods. The scientists and federal regulatory officials generally agreed that because there is no scientific evidence that GM foods cause long-term harm, such as increased cancer rates, there is no plausible hypothesis of harm. Researchers need such a hypothesis in order to know what problem to search for, test, and potentially measure. For example, in the Framingham Heart Study of Massachusetts, researchers hypothesized that there were biological and environmental factors that contributed to cardiovascular disease. Using this hypothesis, researchers were able to design a study that established a relationship between the levels of cholesterol and the risk of heart disease. The resulting effort, comprising more than 10,000 participants over two generations (more than 50 years), developed groundbreaking information on the major risk factors associated with heart disease, stroke, and other diseases. For example, researchers found that a lifestyle typified by a faulty diet, sedentary living, or unrestrained weight gain exacerbated disease risk factors and influenced the occurrence of cardiovascular problems. Without a plausible hypothesis such as that used in the Framingham study, most scientists we contacted said that epidemiological studies on GM foods would not provide any useful information. 
Two of these scientists also noted that the primary ways in which foods might cause long-term harm are through (1) proteins that remain stable during human digestion, thereby retaining the potential to exert adverse effects such as a toxic reaction, and (2) detrimental changes in nutrients and other food components. However, for all 50 GM food plants reviewed by FDA as of April 2002, the genetically modified proteins in those foods that potentially could be cause for concern have been shown in tests to be rapidly digested. Further, the two GM food plants reviewed that produced modified oils—soybean and canola—had nutritional profiles that were similar to or better than their conventional counterparts. As discussed previously, the soybean oil was modified to be more nutritious than conventional soybean oil. The canola oil was modified to contain a higher level of laurate, which would allow it to substitute for imported tropical oils, such as palm kernel oil. However, industry determined that the total intake of laurate in the diet would not change significantly by substituting the improved canola oil for the tropical oil. Accordingly, industry officials stated, and FDA officials concurred, that long-term studies of health effects of this oil would not be needed. Scientists and federal regulatory officials also stated that there are substantial technical challenges that make long-term monitoring of the health effects of GM foods virtually impossible. The challenges cited include the following: Conducting long-term monitoring would require both an experimental group that has consumed GM foods and a control group. The control group would consist of people who could confirm that they do not eat GM foods. In countries such as the United States, where labeling is not required for GM foods, reliably identifying such control groups would be virtually impossible. 
Even if GM foods were labeled in the United States, it would be very difficult to separate the health effects of GM foods from those of their conventional counterparts, since to date there has been very little nutritional difference between these foods. Further, over long periods of time, there would be practical challenges in feeding both the experimental and control groups diets comprising large amounts of GM food, such as soybeans or corn, and their conventional counterparts. Since the long-term human health effects of consuming most foods are not well understood, there is no baseline information against which to assess health effects caused by GM foods. Changes in human food consumption patterns, specifically the addition and removal of various foods, add new variables to the diet and compound the difficulty of conducting long-term monitoring. The fairly recent introduction of the kiwi fruit (to which some individuals are allergic) and the reduction of the use of cotton seed (to which some individuals have also been allergic) as a protein source in candy or breads illustrate the challenges in monitoring food consumption patterns when conducting a 20- to 30-year epidemiological study. A report issued in June 2000 by the United Nations’ Food and Agriculture Organization and World Health Organization supports the scientists’ and regulators’ views about the infeasibility of identifying long-term health effects from GM foods. The report states that, in general, very little is known about the potential long-term effects of any foods, and that identification of such effects is further confounded by the great variability in the way people react to foods. The report also states that epidemiological studies are not likely to differentiate the health effects of GM foods from the many undesirable effects of conventional foods, which according to scientists include the effects of consuming cholesterol and fats. 
Accordingly, the report concludes that the identification of long-term effects specifically attributable to GM foods is highly unlikely. Given the challenges to long-term monitoring, federal regulatory officials, as well as some U.S. and European scientists, state that the best defense against long-term health risks from GM foods is an effective pre-market safety assessment process. Biotechnology experts believe that the current regimen of tests has been adequate for ensuring that GM foods marketed to consumers are as safe as conventional foods. However, some of these experts also believe that the agency’s evaluation process could be enhanced. Specifically, FDA could verify companies’ summary test data on GM foods, thus further ensuring the accuracy and completeness of this data. In addition, the agency could more clearly explain to the public the scientific rationale for its evaluation of these foods’ safety, thereby increasing the transparency of, and public confidence in, FDA’s evaluation process. By addressing these issues, FDA could strengthen its assurance to consumers that GM foods are safe. To enhance FDA’s safety evaluations of GM foods, we recommend that the Deputy Commissioner of Food and Drugs direct the agency’s Center for Food Safety and Applied Nutrition to (1) obtain, on a random basis, raw test data from companies, during or after consultations, as a means of verifying the completeness and accuracy of the summary test data submitted by companies; and (2) expand its memos to file recording its decisions about GM foods to provide greater detail about its evaluations of the foods, including the level of evaluation provided, the similarity of the foods to foods previously evaluated, and the adequacy of the tests performed by the submitting companies. We provided FDA with a draft of this report for review and comment. 
In its written comments, FDA stated it believes that its current process for evaluating bioengineered foods provides appropriate oversight but agreed that enhancements can be made. Specifically, concerning the need to randomly review raw safety data, FDA agreed that occasional audits would provide additional assurance to the public that pre-market decisions about bioengineered foods are based on sound science and that safety and regulatory issues are resolved prior to commercial distribution. Concerning the expansion of its memos to file, the agency agreed that providing greater detail on its decisions about the safety of GM foods would enhance public understanding and confidence in the evaluation process. The agency noted that actions in its proposed rule—titled Premarket Notice Concerning Bioengineered Foods (66 FR 4706, January 18, 2001)—are relevant to our recommendations. FDA explicitly states it will evaluate whether to adopt occasional audits as it evaluates comments on its proposed rule. Since FDA officials told us that some of the agency’s proposed rule changes in the Federal Register have taken years to implement, we believe that the public’s interests would be served by implementing our recommendations separately from the proposed rule approval process. FDA also had general comments about the terms and definitions used in discussing agricultural biotechnology. FDA stated that our draft report avoided many of the pitfalls in terminology and in general was written in a manner that will be understandable to the public. However, the agency believes that the use of terms such as “Genetically Modified Food” in the title and “GM food” in the text can be misleading and that such foods are more commonly referred to as bioengineered foods. While perhaps the scientific community refers to these foods as bioengineered, the lay public is more familiar with the term genetically modified foods. 
Accordingly, we have continued to use the term genetically modified, which is defined on page one of our report. Separately from its written comments, FDA provided us with some technical changes, which we incorporated into the report where appropriate. FDA’s written comments are presented in appendix III. We performed our review from July 2001 through May 2002 in accordance with generally accepted government auditing standards. (See app. I for our objectives, scope, and methodology.) We are sending copies of this report to congressional committees with jurisdiction over food safety programs, the Deputy Commissioner of Food and Drugs, the Director, Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix IV. Representatives John Baldacci and John Tierney asked us to (1) identify the types of potential human health risks associated with genetically modified (GM) foods and experts’ views on the adequacy of tests used to evaluate these risks, (2) describe the Food and Drug Administration’s (FDA) controls for ensuring that companies submit test data it requests and identify experts’ views on the agency’s overall evaluations of these foods, (3) describe potential changes in future GM foods and any associated changes needed in tests to evaluate them, and (4) identify experts’ views on the necessity and feasibility of monitoring the long-term health risks of these foods. In addressing our review objectives, we interviewed representatives from U.S. consumer groups, academic and research institutions, federal regulatory agencies, and the biotechnology industry. We also E-mailed a set of questions to experts representing a variety of positions on biotechnology issues. 
We selected these experts in consultation with officials from the National Academy of Science’s National Research Council. These experts included scientists from the Center for Science in the Public Interest, the Union of Concerned Scientists, the Biotechnology Center of the University of Illinois, the Health Sciences Center of Tulane University, FDA, the Aventis Corp., the DuPont Corp., the Monsanto Corp., and Paradigm Genetics, Inc. In addition, we analyzed reports, policy documents, or issue papers from the Center for Science in the Public Interest, the Consumer Federation of America, the Union of Concerned Scientists, the Council for Agricultural Science and Technology, the National Academy of Sciences, the Pew Initiative on Biotechnology, the Environmental Protection Agency, FDA, the Biotechnology Industry Organization, the Institute of Food Technologists, the Codex Alimentarius, and the National Institute for Quality Control of Agricultural Products at the Wageningen University and Research Center of the Netherlands. We did not assess the potential environmental risks associated with GM food production. In addition, since there have been no GM animals evaluated for commercialization, we did not assess the potential environmental or human health risks associated with them. To identify the types of potential health risks of GM foods, we analyzed and synthesized information from the interviews, E-mail question responses, and documents regarding these risks. To identify tests commonly used by industry to assess GM food safety, we examined several FDA evaluations of GM food. In examining these evaluations, we also analyzed how FDA addresses any potential limitations in these tests and what guidance FDA provides to industry regarding scientifically acceptable tests. 
In our E-mail questions, we also asked the experts to describe any limitations to these tests, and then analyzed and synthesized their responses, particularly regarding test-specific limitations and suggestions for improving the tests. In addition, we asked whether there were any limitations to FDA’s guidance on acceptable tests. We then synthesized their responses, including suggestions for improving FDA’s guidance. To describe FDA internal controls for ensuring that companies submit safety test data requested by the agency, we interviewed FDA officials and reviewed agency documents about the functions of these internal controls, specifically (1) FDA’s 1992 Policy on Foods Derived from New Plant Varieties and its 1997 Guidance on Consultation Procedures that describe what safety data companies should submit; (2) the qualifications and roles of the FDA Biotechnology Evaluation Teams responsible for evaluating these submissions; and (3) FDA’s practice of matching its level of evaluation to the degree of novelty of the GM food submitted. Further, we compared the safety data specified in FDA’s 1992 policy with data provided by companies in five GM food submissions and analyzed the extent of the companies’ adherence to FDA’s recommended procedures for safety assessments. We contacted officials at the Department of Health and Human Services’ Office of Inspector General to determine if they had reviewed FDA’s internal controls. (They had not.) We did not, however, independently verify the adequacy of FDA’s internal controls. To identify experts’ views on the agency’s overall evaluations of GM foods, we interviewed consumer groups, industry officials, and other experts, analyzed their views and concerns—including any suggestions for improving FDA’s evaluation process—and reviewed related literature. 
For each concern identified with the process, we obtained FDA’s response and then determined the extent to which FDA’s response effectively addressed the concern or suggested a need for additional action by FDA. Further, we examined Office of Management and Budget and GAO guidance and policies relevant to these concerns. To describe the potential changes in future GM foods and associated changes needed in the tests to evaluate them, we interviewed scientists and regulators on the likely changes in GM foods and new testing approaches under development. We also focused several of our E-mail questions on this topic and analyzed the responses. In addition to E-mail respondents, we contacted experts from biotechnology companies concerning research on new, more complex GM foods as well as new testing approaches that may supplement or replace existing tests. We synthesized these respondents’ and experts’ views on likely changes to GM food and the value and challenges of using these new testing approaches. Further, we reviewed the relevant scientific literature for discussions of anticipated changes in GM foods and information on specific tests under development. We also met with scientists developing one of these new testing approaches to understand its potential value for assessing GM food safety. To identify the views of experts on the necessity and feasibility of monitoring the long-term health risks of GM foods, we asked respondents to our E-mail questions for an assessment of whether such an effort is necessary or feasible and then analyzed their responses. Further, we reviewed a variety of documents concerning the necessity and feasibility of long-term monitoring, including a recent joint United Nations’ Food and Agriculture Organization and World Health Organization report, as well as a recent report by the National Institute for Quality Control of Agricultural Products at the Wageningen University and Research Center of the Netherlands. 
We also discussed the topic with other regulatory officials connected with monitoring food safety. In particular, we discussed whether the long-term effects of GM foods could be separated from other factors that may influence human health. Finally, we submitted a draft of this report for technical review by scientists from industry, academia, and a consumer group, and we incorporated their comments as appropriate. We conducted our review from July 2001 through May 2002 in accordance with generally accepted government auditing standards. Metabolic profiling could be used as a safety-screening tool for GM foods. Specifically, as shown in figure 3, special software has allowed one company to graph the metabolic profile of one variety of mustard plants and analyze the effects of genetic modifications. In the figure, the vertical axis in each graph provides a list of different small molecules, or metabolites, in mustard plants from this variety. The horizontal axis measures variation or deviation from the metabolite levels in this conventional variety. The vertical line in the middle of each graph represents the average value for a range of small molecules, or metabolites, in this conventional variety. In this example, the company analyzed thousands of conventional plants from this variety to come up with a range of naturally occurring metabolite levels. The company then used the averages of these ranges to generate the vertical line in the middle of the graphs. The points plotted with squares represent the levels of small molecules in GM mustard plants. Points appearing to the right of the center vertical line indicate increased levels of specific small molecules, while points appearing to the left indicate decreased levels. 
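The baseline comparison just described can be sketched in code. The following is an illustrative sketch only, not the company's actual software; the metabolite names, baseline ranges, and measured levels are invented for the example.

```python
# Hypothetical sketch: flagging metabolites in a GM plant whose levels fall
# outside the naturally occurring baseline range for the conventional variety.
# All metabolite names and numbers below are illustrative, not real assay data.

# Baseline (min, max) ranges observed across thousands of conventional plants,
# normalized so the conventional average is 1.0
baseline = {
    "glucosinolate": (0.8, 1.2),
    "sinapine":      (0.5, 1.5),
    "tocopherol":    (0.9, 1.1),
}

# Measured levels in one GM plant, on the same normalized scale
gm_levels = {"glucosinolate": 1.0, "sinapine": 0.3, "tocopherol": 1.05}

def flag_deviations(baseline, levels):
    """Return metabolites whose measured level falls outside the baseline range."""
    flagged = {}
    for name, (low, high) in baseline.items():
        value = levels[name]
        if not (low <= value <= high):
            flagged[name] = value
    return flagged

print(flag_deviations(baseline, gm_levels))  # sinapine falls below its baseline range
```

A plant with an empty result would correspond to graph (a); one or two flagged metabolites to graph (b); many flagged metabolites to graph (c).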
The graphs in figure 3 illustrate three scenarios: graph (a) shows a GM mustard plant with small molecule levels nearly identical to its conventional counterpart; graph (b) shows a GM mustard plant with a few easily measurable decreases; and graph (c) represents a GM mustard plant with many significant differences from the small molecule levels of its conventional counterpart. If baseline data on normal ranges of variation, such as those developed for the mustard plants, can be made available for all GM food crops, companies might use this type of testing to develop safety data. For example, in graph (a), the absence of significant changes in the small molecules would strongly indicate that no significant changes had resulted from the genetic modification. Hence, a change in the risk of allergenicity, toxicity, or antinutrients would be very unlikely. In the case represented by graph (b), the software could determine which small molecules have changed. Then, traditional testing techniques, such as toxicity testing, could be used to determine if the altered small molecules would have any effect on human health, plant growth, or crop yield. In the case shown in graph (c), scientists would probably not proceed with development and commercialization of the GM food in the absence of extensive evaluations for allergens, toxins, or antinutrients, due to the significant differences in small molecules between it and its conventional counterpart.

In addition to the individuals above, Nathan J. Anderson, Dennis S. Carroll, Kurt W. Kershow, and Cynthia C. Norris made key contributions to this report.

International Trade: Concerns Over Biotechnology Challenge U.S. Agricultural Exports. GAO-01-727. Washington, D.C.: June 15, 2001.

Biotechnology: Information on Prices of Genetically Modified Seeds in the United States and Argentina. GAO/T-RCED/NSIAD-00-228. Washington, D.C.: June 29, 2000. 
Biotechnology: Information on Prices of Genetically Modified Seeds in the United States and Argentina. GAO/RCED/NSIAD-00-55. Washington, D.C.: January 21, 2000.
Genetically modified foods pose the same risks to human health as do other foods. These risks include allergens, toxins, and compounds known as antinutrients, which inhibit the absorption of nutrients. Before marketing a genetically modified food, company scientists seek to determine whether these foods pose any heightened risks. The Food and Drug Administration (FDA) published guidelines in 1992 to ensure that companies worked with the agency to assess the safety of genetically modified foods. GAO found that FDA's evaluation process could be enhanced by randomly verifying the test data provided and by increasing the transparency of the evaluation process, including communicating more clearly the scientific rationale for FDA's final decision on an assessment of genetically modified food. Scientists expect that genetic modifications will increasingly enhance the nutritional value of genetically modified foods. Although current tests have been adequate for evaluating the few genetically modified foods that have, so far, undergone relatively simple compositional changes, new technologies are being developed to evaluate the increasingly complex compositional changes expected. Monitoring the long-term health risks of genetically modified foods is generally neither necessary nor feasible. No scientific evidence exists, nor is there even a hypothesis, suggesting that long-term harm, such as higher cancer rates, results from these foods. Moreover, technical challenges make long-term monitoring infeasible.
EPA administers and oversees grants primarily through the Office of Grants and Debarment, 10 program offices in headquarters, and program offices and grants management offices in EPA’s 10 regional offices. Figure 1 shows EPA’s key offices involved in grants activities for headquarters and the regions. The management of EPA’s grants program is a cooperative effort involving the Office of Administration and Resources Management’s Office of Grants and Debarment, program offices in headquarters, and grants management and program offices in the regions. The Office of Grants and Debarment develops grant policy and guidance. It also carries out certain types of administrative and financial functions for the grants approved by the headquarters program offices, such as awarding grants and overseeing the financial management of these grants. On the programmatic side, headquarters program offices establish and implement national policies for their grant programs, and set funding priorities. They are also responsible for the technical and programmatic oversight of their grants. In the regions, grants management offices carry out certain administrative and financial functions for the grants, such as awarding grants approved by the regional program offices, while the regional program staff provide technical and programmatic oversight of their grantees. As of June 2003, 109 grant specialists in the Office of Grants and Debarment and the regional grants management offices were largely responsible for administrative and financial grant functions. Furthermore, 1,835 project officers were actively managing grants in headquarters and regional program offices. These project officers are responsible for the technical and programmatic management of grants. Unlike grant specialists, however, project officers generally have other primary responsibilities, such as using the scientific and technical expertise for which they were hired. 
In fiscal year 2002, EPA took 8,070 grant actions totaling about $4.2 billion. These awards were made to six main categories of recipients as shown in figure 2. EPA offers two types of grants—nondiscretionary and discretionary:

Nondiscretionary grants support water infrastructure projects, such as the drinking water and clean water state revolving fund programs, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific eligibility criteria; the grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2002, EPA awarded about $3.5 billion in nondiscretionary grants. EPA has awarded these grants primarily to states or other governmental entities.

Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for grants. In fiscal year 2002, EPA awarded about $719 million in discretionary grants. EPA has awarded these grants primarily to nonprofit organizations, universities, and government entities.

The grant process has the following four phases:

Preaward. EPA reviews the application paperwork and makes an award decision.

Award. EPA prepares the grant documents and instructs the grantee on technical requirements, and the grantee signs an agreement to comply with all requirements.

Postaward. After awarding the grant, EPA provides technical assistance, oversees the work, and provides payments to the grantee; the grantee completes the work, and the project ends.

Closeout of the award. EPA ensures that all technical work and administrative requirements have been completed; EPA prepares closeout documents and notifies the grantee that the grant is completed. 
As part of its oversight of grantee performance, EPA conducts in-depth reviews to analyze grantees’ compliance with grant regulations and specific grant requirements. EPA conducts two types of in-depth reviews. Administrative reviews, conducted by the grants management offices, are designed to evaluate grantees’ financial and administrative capacity. In contrast, programmatic reviews, conducted by the program offices, are designed to assess the grantees’ activities in five key areas: (1) assessing progress of work, (2) reviewing financial expenditures, (3) meeting the grant’s terms and conditions, (4) meeting all programmatic, statutory, and regulatory requirements, and (5) verifying that equipment purchased under the award is managed and accounted for. Both administrative and programmatic reviews are conducted either at the grantee’s location (on-site) or at EPA’s office or another location (off-site). Furthermore, to determine how well offices and regions oversee grantees, EPA conducts internal management reviews of headquarters and regional offices. EPA’s September 2002 competition policy requires that most discretionary grants be competed. These grants totaled about $719 million of the $4.2 billion in grants awarded in fiscal year 2002. The policy applies to most discretionary grant programs or individual grants of more than $75,000. The policy also promotes widespread solicitation for competed grants by establishing specific requirements for announcing funding opportunities in, for example, the Federal Register and on Web sites. EPA has also appointed a grant competition advocate to coordinate this effort. EPA’s competition policy faces implementation barriers because it represents a major cultural shift for EPA staff and managers, who historically awarded most grants noncompetitively and thereby have had limited experience with competition, according to the Office of Grants and Debarment. 
The policy requires EPA officials to take a more planned, rigorous approach to awarding grants. That is, EPA staff must determine the evaluation criteria and ranking of these criteria for a grant, develop the grant announcement, and generally publish it at least 60 days before the application deadline. Staff must also evaluate applications—potentially from a larger number of applicants than in the past—and notify applicants of their decisions. These activities will require significant planning and take more time than awarding grants noncompetitively. Office of Grants and Debarment officials anticipate a learning curve as staff implement the policy and will evaluate the policy’s effectiveness in 2005, including the $75,000 threshold level. While the policy and subsequent implementing guidance have been in effect for a number of months, it is too early to tell if the policy has resulted in increased competition over the entire fiscal year. EPA officials believe that preliminary results indicate that the policy is increasing the use of competition. EPA’s December 2002 oversight policy makes important improvements in monitoring grantees, but it does not enable the agency to identify and address systemic problems with grant recipients. Specifically, EPA cannot develop systemic information because the policy does not (1) incorporate a statistical approach to selecting grantees for review; (2) require a standard reporting format for in-depth reviews to ensure consistency and clarity in reporting review results; and (3) identify needed data elements or develop a plan for analyzing data in its grantee compliance database to identify and act on systemic grantee problems. Therefore, EPA cannot use data from these reviews to determine the overall compliance of grantees or be assured that it is using its resources to effectively target its oversight efforts. 
With a more rigorous statistical approach to selecting grantees, a standard reporting format, and a plan for using information from in-depth and other reviews, EPA could identify problem areas and develop trends to assess the effectiveness of corrective actions in order to better target its oversight efforts. EPA’s new policy allows each office to determine what criteria it will use to select at least 10 percent of its grant recipients for in-depth review. However, because this policy does not employ a statistical method of selecting grantees for review, it limits the usefulness of these reviews as a tool to determine the overall compliance of grant recipients. Furthermore, EPA cannot determine whether 10 percent or any other percentage is the appropriate number of reviews. With a statistical approach, EPA could increase the efficiency and effectiveness of its oversight of grantees by (1) adjusting the number and allocation of its in-depth reviews to match the level of risk associated with each type of grant recipient and (2) projecting the results of its reviews to all EPA grantees. EPA’s in-depth reviews can provide valuable information that the agency can use to identify problems and implement corrective actions. However, EPA does not have a standard reporting format to ensure consistency, clarity, and usefulness in reporting review results. Consequently, EPA is not able to effectively and efficiently analyze these data to determine systemic grantee problems. Although EPA was requiring offices to conduct in-depth reviews of grantees in 2002, it did not systematically collect and analyze information from these reviews as part of its oversight efforts. We requested that EPA provide us with its in-depth reviews conducted in 2002 so we could do the analysis. Many of the documents EPA provided were not, in fact, in-depth reviews, but various types of other oversight documents. 
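One common statistical method of the kind discussed above is stratified random sampling, in which each category of recipient is sampled at a rate matched to its assessed risk. The sketch below is purely illustrative and is not EPA's actual procedure; the recipient categories, review rates, and grantee identifiers are all invented for the example.

```python
# Illustrative sketch of stratified random selection of grantees for in-depth
# review. Categories, risk-based review rates, and IDs are hypothetical.
import random

grantees = {
    "nonprofit":  [f"NP-{i}" for i in range(200)],
    "university": [f"U-{i}" for i in range(150)],
    "state":      [f"S-{i}" for i in range(400)],
}

# Review a larger share of the strata judged higher risk
review_rate = {"nonprofit": 0.20, "university": 0.10, "state": 0.05}

def select_for_review(grantees, review_rate, seed=0):
    """Draw a simple random sample within each stratum at its review rate."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    sample = {}
    for category, members in grantees.items():
        n = round(len(members) * review_rate[category])
        sample[category] = rng.sample(members, n)
    return sample

selected = select_for_review(grantees, review_rate)
print({category: len(ids) for category, ids in selected.items()})
```

Because each stratum is a random sample, results from the reviewed grantees can be projected to the full population within each category, which a judgmental 10-percent selection does not allow.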
We sorted through these documents to identify the in-depth reviews; through this approach, we identified 1,232 in-depth reviews. Using a data collection instrument, we collected and analyzed information from each of these reviews on, among other things, problems with grantees and significant actions taken against grantees. The full results of our analysis are presented in our report. According to our analysis of EPA’s 1,232 in-depth reviews in 2002, EPA grant specialists and project officers identified 1,250 problems in 21 areas. Tables 1 and 2 show the most frequently identified problems for the 189 administrative and 1,017 programmatic reviews we examined. For example, 73 of 189 administrative reviews found problems with grantees’ written procedures, while 308 of the 1,017 programmatic reviews identified technical issues. The differences in types of problems frequently identified, as shown in tables 1 and 2, reflect differences in the focus of administrative and programmatic reviews. Table 3 describes the nature of these problems. Despite the importance of standard information, our analysis of EPA’s 2002 in-depth reviews shows that EPA officials across the agency report in various formats that do not always clearly present the results of the review. For example, some EPA officials provided a narrative report on the results of their reviews, while others completed a protocol that they used in conducting their review. In 349 instances, the project officer or grant management specialist did not clearly explain whether he or she had discovered a problem. EPA has recognized the importance of the information in its in-depth reviews by establishing a grantee compliance database to store the reviews, forming a database work group, and collecting a limited amount of data from its in-depth reviews.
However, as of August 29, 2003, EPA had not yet developed data elements or a plan for using data from all its oversight efforts—in-depth reviews, corrective actions, and other compliance efforts—to fully identify systemic problems and then inform grants management officials about oversight areas that need to be addressed. As our analysis of EPA’s 2002 in-depth reviews showed, valuable information could be collected from them for assessing such issues as the (1) types of grantees having problems, (2) types of problem areas needing further attention, (3) types of reviews—on-site or off-site—that provide the best insights into certain problem areas, and (4) corrective actions required or recommended to resolve problems. With a statistical approach to selecting grantees for review, a standard reporting format, and a plan for using information from in-depth and other reviews, EPA could identify problem areas and develop trends to assess the effectiveness of corrective actions to better target its oversight efforts. In particular, according to our analysis of EPA’s 2002 in-depth reviews, administrative reviews identify more problems when conducted on site, while the number of problems identified by programmatic reviews does not differ by on-site or off-site reviews. However, nearly half of the programmatic reviews, which constituted more than 80 percent of the 2002 reviews, were conducted on-site. Since on-site reviews are resource intensive because of travel costs and the staff involved, a systematic analysis could enable EPA to better target its resources. Similarly, EPA could incorporate other information into its grantee compliance database, such as Inspector General reports, to identify problem areas and target oversight resources. In addition, EPA could use the database to track the resolution of problems.
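As a hedged illustration, not EPA's actual methodology, a statistical approach of the kind discussed above, stratified random sampling with proportional allocation, might look like the following sketch. The stratum names, confidence level, and margin of error are assumptions for illustration only.

```python
import math
import random

def plan_stratified_sample(strata, z=1.96, margin=0.05, p=0.5):
    """Allocate in-depth reviews across grantee strata in proportion to
    stratum size, using a standard sample-size formula for estimating a
    proportion, with a finite-population correction.

    strata: dict mapping a (hypothetical) stratum name, e.g. "nonprofits",
    to the number of grantees in that stratum. Returns a dict mapping
    stratum name to the number of grantees to review.
    """
    total = sum(strata.values())
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population sample size
    n = math.ceil(n0 / (1 + (n0 - 1) / total))    # finite-population correction
    # Proportional allocation, with at least one review per stratum.
    return {name: max(1, round(n * size / total)) for name, size in strata.items()}

def draw_sample(grantee_ids, k, seed=2002):
    """Randomly (and reproducibly) select k grantees for review."""
    return random.Random(seed).sample(grantee_ids, k)
```

Under these assumptions, a population of 10,000 grantees split 5,000/3,000/2,000 across three hypothetical strata would yield 370 reviews allocated 185/111/74. Unlike a judgmental 10 percent selection, results from such a sample could be projected to all grantees, and the sample size itself falls out of the chosen confidence level and margin of error rather than a fixed percentage.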
Successful implementation of EPA’s 5-year grants management plan requires all staff—senior management, project officers, and grant specialists—to be fully committed to, and accountable for, grants management. Recognizing the importance of commitment and accountability, the plan has as one of its objectives the establishment of clear lines of accountability for grants oversight. The plan, among other things, calls for (1) ensuring that performance standards established for grant specialists and project officers adequately address grants management responsibilities in 2004; (2) clarifying and defining the roles and responsibilities of senior resource officials, grant specialists, project officers, and others in 2003; and (3) analyzing project officers’ and grant specialists’ workload in 2004. In implementing this plan, however, EPA faces challenges to enhancing accountability. First, although the plan calls for ensuring that project officers’ performance standards adequately address their grants management responsibilities, agencywide implementation may be difficult. Currently, project officers do not have uniform performance standards, according to officials in EPA’s Office of Human Resources and Organizational Services. Instead, each supervisor sets standards for each project officer, and these standards may or may not include grants management responsibilities. It could take up to a year to establish and implement a uniform performance standard, according to these officials. Instead, the Assistant Administrator for the Office of Administration and Resources Management is planning to issue guidance this month on including grants management responsibilities in individual performance agreements for the next performance cycle beginning in January 2004.
Once individual project officers’ performance standards are established for the approximately 1,800 project officers, strong support by managers at all levels, as well as regular communication on performance expectations and feedback, will be key to ensuring that staff with grants management duties successfully meet their responsibilities. Although EPA’s current performance management system can accommodate the development of performance standards tailored to each project officer’s specific grants management responsibilities, the current system provides only two choices for measuring performance—satisfactory or unsatisfactory—which may make it difficult to make meaningful distinctions in performance. Such an approach may not provide enough information and dispersion in ratings to recognize and reward top performers, help everyone attain their maximum potential, and deal with poor performers. GAO has identified key practices that federal agencies can use to establish effective performance management systems, which include making distinctions in performance. Furthermore, it is difficult to implement performance standards that will hold project officers accountable for grants management because (1) grants management is often a small part of a wide range of project officers’ responsibilities, (2) some project officers manage few grants, and (3) project officers’ grants management responsibilities often fall into the category of “other duties as assigned.” To address this issue, EPA officials are considering, among other options, whether the agency needs to develop a smaller cadre of well-trained project officers to oversee grantees, rather than rely on the approximately 1,800 project officers with different levels of grants management responsibilities and skills.
Some EPA officials believe that having a cadre may help the agency more effectively implement revised grants management performance standards because fewer officers with greater expertise would oversee a larger percentage of the grants. Second, EPA will have difficulty achieving the plan’s goals unless not only project officers but all managers and staff are held accountable for grants management. The plan does not call for including grants management standards in all managers’ and supervisors’ agreements. Senior grants managers in the Office of Grants and Debarment as well as other Senior Executive Service managers have performance standards that address grants management responsibilities, but middle-level managers and supervisors, who oversee many of the staff who have important grants management responsibilities, do not. According to Office of Grants and Debarment officials, they are working on developing performance standards for all managers and supervisors with grants responsibilities. Third, it may be difficult to hold all managers and staff accountable because the Office of Grants and Debarment does not have direct control over many of the managers and staff who perform grants management duties—particularly the approximately 1,800 project officers in headquarters and regional program offices. The division of responsibilities between the Office of Grants and Debarment and program and regional offices will continue to present a challenge to holding staff accountable and improving grants management, and will require the sustained commitment of EPA’s senior managers. If EPA is to better achieve its environmental mission, it must more effectively manage its grants programs—which account for more than half of its annual budget. EPA’s new policies and 5-year grants management plan show promise, but they are missing several critical elements necessary for the agency to address past grants management weaknesses.
Specifically, to improve EPA’s oversight of grantees, our report recommends that EPA (1) incorporate appropriate statistical methods to identify grantees for review; (2) require EPA staff to use a standard reporting format for in-depth reviews so that the results can be entered into the grantee compliance database and analyzed agencywide; and (3) develop a plan, including modifications to the grantee compliance database, to integrate and analyze compliance information from multiple sources. These actions would help EPA identify systemic problems with its grantees and better target its oversight resources. To enhance accountability, our report further recommends establishing performance standards for all managers and staff responsible for grants management and holding them accountable for meeting these standards. Until EPA does so, it cannot be assured that it is fulfilling its grants management responsibilities. While EPA’s 5-year grants management plan shows promise, we believe that, given EPA’s historically uneven performance in addressing its grants management challenges, congressional oversight is important to ensure that EPA’s Administrator, managers, and staff implement the plan in a sustained, coordinated fashion to meet the plan’s ambitious targets and time frames. To help facilitate this oversight, our report recommends that EPA annually report to Congress on its progress in improving grants management.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841. Individuals making key contributions to this testimony were Andrea Wamstad Brown, Carl Barden, Christopher Murray, Paul Schearf, Rebecca Shea, Carol Herrnstadt Shulman, Bruce Skud, Kelli Ann Walther, and Amy Webbink. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Environmental Protection Agency (EPA) has faced persistent challenges in managing its grants, which, at about $4 billion annually, constitute over one-half of the agency's total budget. EPA awards grants to thousands of recipients to implement its programs to protect human health and the environment. Given the size and diversity of EPA's programs, its ability to efficiently and effectively accomplish its mission largely depends on how well it manages its grant resources and builds accountability into its efforts. In our comprehensive report on EPA's management of its grants, released last week, we found that EPA continues to face four key grants management challenges despite past efforts to address them—(1) selecting the most qualified grant applicants, (2) effectively overseeing grantees, (3) measuring the results of grants, and (4) effectively managing its grant staff and resources. The report also discusses EPA's latest competition and oversight policies and its new 5-year plan to improve the management of its grants. This testimony, based on our report, focuses on the extent to which EPA's latest policies and plan address (1) awarding grants competitively, (2) improving oversight of grantees, and (3) holding staff and managers accountable for fulfilling their grants management responsibilities. Late in 2002, EPA launched new efforts to address some of its long-standing grants management problems. It issued two policies—one to promote competition in awarding grants and one to improve its oversight of grants. Furthermore, in April 2003, EPA issued a 5-year grants management plan to address its long-standing grants management problems. These policies and plan focus on the major grants management challenges we identified but will require strengthening, enhanced accountability, and sustained commitment to succeed. 
EPA's September 2002 competition policy should improve EPA's ability to select the most qualified applicants by requiring competition for more grants. However, effective implementation of the policy will require a major cultural shift for EPA managers and staff because the competitive process will require significant planning and take more time than awarding grants noncompetitively. EPA's December 2002 oversight policy makes important improvements in monitoring grantees, but it does not build in a process for effectively and efficiently analyzing the results of its monitoring efforts to address systemic grantee problems. Specifically, EPA does not (1) use a statistical approach to selecting grantees for review, (2) collect standard information from the reviews, and (3) analyze the results to identify and resolve systemic problems with grantees. As a result, EPA may not be using its oversight resources as efficiently as it could. With improved analysis, EPA could better identify problem areas and assess the effectiveness of its corrective actions to more efficiently target its oversight efforts. EPA's 5-year grants management plan recognizes the importance of accountability, but it does not completely address how the agency will hold all managers and staff accountable for successfully fulfilling their grants management responsibilities. For example, the plan calls for developing performance standards for staff overseeing grantee performance, but it does not call for including grants management performance standards in their managers' and supervisors' performance agreements. Unless all managers and staff are held accountable for grants management, EPA cannot ensure the sustained commitment required for the plan's success. Our report, Grants Management: EPA Needs to Strengthen Efforts to Address Persistent Challenges, GAO-03-846, details EPA's historically uneven performance in addressing its grants management challenges. 
Over the years, EPA's past actions to improve grants management have had mixed results because of the complexity of the problems, weaknesses in policy design and implementation, and insufficient management attention to overseeing grants. While EPA's latest policies and new 5-year grants management plan show promise, it is too early to tell if these will succeed more than past actions. If EPA is to better achieve its environmental mission, it must more effectively manage its grants. Our report contains specific recommendations to address critical weaknesses in EPA's new oversight policy and plan. EPA stated that it agreed with GAO's recommendations and that it will implement them as part of its 5-year grants management plan.
The Federal Reserve Act established the Federal Reserve as the central bank of the United States. The Federal Reserve comprises the Board of Governors—an agency of the federal government in Washington, D.C.—the 12 Reserve Banks, and the Federal Open Market Committee (FOMC). FOMC comprises all members of the Board of Governors and five Reserve Bank presidents who serve on a rotating basis. The Federal Reserve Act gives the Federal Reserve responsibility for setting and implementing monetary policy—actions taken to influence the availability and cost of money and credit—to promote full employment and ensure stable prices. To this end, section 19 of the Federal Reserve Act requires the Board of Governors to impose reserve requirements within certain ratios on specified liabilities—transaction accounts, nonpersonal time deposits, and Eurocurrency liabilities—of all depository institutions, solely for the purpose of implementing monetary policy. A definition of “transaction account” was added to section 19 in 1980; it defines the term to mean an account that permits the account holder to make withdrawals by negotiable or transferable instruments (such as checks), payment orders of withdrawal, telephone transfers, and other similar items for the purpose of making payments to third parties or others. The Board of Governors promulgated Regulation D pursuant to section 19’s authorization “to prescribe such regulations as it may deem necessary to effectuate the purposes of this section and to prevent evasions thereof.” The Federal Reserve Act also assigns other responsibilities to the Board of Governors and to the Reserve Banks in addition to monetary policy responsibilities, including supervising and regulating certain financial institutions and activities, providing banking services to depository institutions and the federal government, and ensuring consumer protection in the banking system. 
The Federal Reserve Act requires the Board of Governors and FOMC to take measures aimed at promoting the goals of maximum employment, stable prices, and moderate long-term interest rates. Accordingly, before the financial crisis of 2007–2009, the Board of Governors and FOMC set monetary policy to promote national economic goals by targeting the cost of overnight loans between depository institutions (interbank loans), which influence other interest rates, and then adjusting the supply of reserve balances in the banking system to achieve that target. The relevant cost of interbank loans in this approach is the federal funds rate—the interest rate at which depository institutions lend reserve balances to other depository institutions overnight. According to the Federal Reserve, FOMC has targeted the federal funds rate since roughly 1984. Before the financial crisis, the Federal Reserve’s toolkit for implementing monetary policy primarily comprised three tools: Reserve requirements: The minimum amount of funds that depository institutions must hold against transaction account balances (determined by applying a specified reserve requirement ratio). Currently, only transaction accounts are subject to a reserve requirement ratio greater than zero. As noted previously, the Federal Reserve Act authorizes the Board of Governors to impose reserve requirements only on certain deposit liabilities that do not include savings deposits. Depository institutions may satisfy reserve requirements by holding vault cash or reserve balances at Reserve Banks. The Board of Governors is responsible for establishing reserve requirements, and can adjust reserve requirements by changing reserve requirement ratios within limits established by the Federal Reserve Act. Open market operations: The purchase and sale of federal government and federal agency securities in the open market by the Reserve Banks at the direction of FOMC. 
The Federal Reserve can use open market operations to adjust the supply of reserve balances in the banking system overall in order to control the federal funds rate within the target range set by FOMC. Open market operations directly affect the total supply of reserves: purchases of securities (such as Treasury securities) by the Federal Reserve increase reserve balances; the sale of securities has the opposite effect on reserve balances. Adjusting the supply of reserve balances in the banking system through open market operations helps the Federal Reserve control the federal funds rate. To lower the federal funds rate, the Federal Reserve increases the supply of reserve balances; decreasing the supply of reserve balances has the opposite effect on the federal funds rate. Lower interest rates lower the cost of borrowing, generally leading to increases in consumption and business investment. Discount rate: The interest rate that Reserve Banks charge on loans to depository institutions. The Reserve Bank lending function is often generically referred to as “the discount window.” The discount window allows the Federal Reserve Banks to extend credit to depository institutions under certain conditions. This complements open market operations in achieving the target federal funds rate by making balances available to depository institutions when the supply of balances falls short of demand, and by serving as a backup source of liquidity for individual depository institutions. If a depository institution needs to borrow funds to meet reserve requirements or for other operational needs, it typically will try to borrow at (or near) the federal funds rate from another depository institution in the federal funds market. If it has established borrowing privileges at the discount window, a depository institution may borrow directly from its Reserve Bank at the discount rate, which is set above the target federal funds rate. 
There are three discount window programs: primary credit, secondary credit, and seasonal credit, each with its own interest rate. A generic reference to “the discount rate” usually refers to the primary credit rate. The rate for each lending program is established by each Reserve Bank’s board of directors, subject to the review and determination of the Board of Governors. Pre-crisis, FOMC would set a target for the federal funds rate consistent with its monetary policy objectives of maximum employment and price stability and then direct the use of open market operations to achieve a federal funds rate at or very close to the target rate. The Federal Reserve conducted open market operations to maintain the federal funds rate within the target range. Specifically, the Federal Reserve implemented monetary policy by affecting the demand for and supply of reserves (reserve balances held at Reserve Banks). In the federal funds market, depository institutions and other eligible entities, including the government-sponsored enterprises (GSE), trade reserves (federal funds) with each other. By conducting open market operations, imposing reserve requirements, and extending credit through the discount window, the Federal Reserve exercised considerable control over the demand for and supply of reserves and in turn the federal funds rate. Since changes in the federal funds rate are transmitted to other short-term interest rates, which affect longer-term interest rates and overall financial conditions, the Federal Reserve used its three main policy tools to influence inflation and overall economic activity and to achieve its monetary policy goals. According to Board of Governors officials, in response to the 2007–2009 financial crisis, the Federal Reserve expanded its monetary policy toolkit. 
First, in 2008, the Federal Reserve began to use its authority to pay interest on reserves, enabling the Board of Governors to break the strong link between the quantity of reserves and the level of the federal funds rate and allowing for control over short-term interest rates even with a large amount of reserves in the system. The Federal Reserve also added two other major tools to its toolkit: large-scale asset purchases and increasingly explicit forward guidance. Both of these tools were used to provide further monetary policy accommodation after short-term interest rates fell close to zero. The Federal Reserve used its open market operations purchase authority to purchase longer-term government securities and agency securities to put downward pressure on longer-term interest rates, ease broader financial market conditions, and support economic activity. During the crisis, the Federal Reserve also established emergency lending programs to lend directly to banks and other financial institutions in disrupted markets to improve the flow of credit to U.S. households and businesses. Currently, the Federal Reserve’s monetary policy approach involves FOMC setting a target range for the federal funds rate consistent with achieving the monetary policy goals of maximum employment and stable prices and directing the use of the Federal Reserve’s monetary policy toolkit to meet this target. Regulation D, pursuant to section 19 of the Federal Reserve Act, imposes reserve requirements on transaction accounts solely for the purpose of implementing monetary policy. For example, according to officials from the Board of Governors, before 2008, reserve requirements were useful for implementing monetary policy because they provided the Federal Reserve with a predictable demand for reserves against which the supply of reserves could be adjusted to control interest rates (i.e., move or maintain the federal funds rate to or at the target set by FOMC). 
Given the Federal Reserve’s mandate of promoting maximum employment and stable prices, a predictable demand for reserves allowed the Federal Reserve to have a predictable effect on interest rates through its targeting of the federal funds rate and conducting open market operations to adjust the supply of reserves, according to officials. As noted previously, the Federal Reserve Act requires the Board of Governors to impose reserve requirements on specified deposit liabilities of depository institutions, but provides in most cases for a range of reserve requirement ratios that the Board of Governors may apply to such liabilities. Reserve requirements on transaction accounts are based on ratios that the Board of Governors has specified in Regulation D within ranges established in the Federal Reserve Act, and these ratios have not changed since 1992. The reserve requirement ratios are graduated—zero percent, 3 percent, and 10 percent—depending on the aggregate level of a depository institution’s net transaction accounts (see table 1). The dollar amount of a depository institution’s reserve requirement is determined by applying the applicable reserve requirement ratios to the balances held in its net transaction accounts. The reserve requirement ratios correspond to three ranges for net transaction account balances: “exemption amount” (amount of net transaction accounts subject to a zero percent reserve requirement), “low reserve tranche” (amount of net transaction accounts subject to a 3 percent reserve requirement ratio), and “over low reserve tranche” (amount of net transaction accounts subject to a 10 percent reserve requirement). The Depository Institutions Deregulation and Monetary Control Act of 1980 (Monetary Control Act) requires that the Board of Governors apply transaction account reserve requirements uniformly to all transaction accounts at all depository institutions. 
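The graduated structure described above can be sketched as follows. The exemption amount and low reserve tranche figures used here are placeholders for illustration, since the indexed amounts change annually.

```python
def reserve_requirement(net_transaction_accounts,
                        exemption_amount=15_200_000,   # placeholder; indexed annually
                        low_tranche_cap=110_200_000):  # placeholder; indexed annually
    """Dollar reserve requirement under the graduated ratios: 0 percent up
    to the exemption amount, 3 percent on balances from the exemption
    amount up to the low reserve tranche, and 10 percent on balances over
    the low reserve tranche."""
    ntx = net_transaction_accounts
    # Portion of balances falling in the 3 percent (low reserve tranche) band.
    low_band = min(max(ntx - exemption_amount, 0), low_tranche_cap - exemption_amount)
    # Portion of balances over the low reserve tranche, reserved at 10 percent.
    over_band = max(ntx - low_tranche_cap, 0)
    return 0.03 * low_band + 0.10 * over_band
```

Under these placeholder amounts, an institution with $200 million in net transaction accounts would owe 3 percent on the $95 million low-tranche band plus 10 percent on the $89.8 million above it, or $11.83 million, which it could satisfy with vault cash or reserve balances at its Reserve Bank.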
As shown in table 2, transaction account reserve requirement ratios have not changed since 1992, when the Board of Governors reduced the reserve requirement ratio on transaction accounts over the low reserve tranche amount from 12 percent to 10 percent. In 1990, the Board of Governors reduced the reserve requirement ratio on short-term nonpersonal time deposits and Eurocurrency liabilities from 3 percent to 0 percent. The Board of Governors is required to adjust the maximum amount of transaction account balances subject to the 3 percent ratio (the low reserve tranche) and exemption amounts annually, according to formulas specified in the Monetary Control Act and the Garn-St Germain Act, respectively. The Board of Governors is not required by law to adjust reserve requirement ratios annually or on any other schedule. The adjustments to the annual tranche and exemption amounts affect the amount of net transaction accounts subject to the low reserve tranche and exemption amounts. An increase in the low reserve tranche or exemption amount reduces a depository institution’s overall reserve requirement, all else being equal, by reducing or eliminating reserve requirements on additional deposit dollars. The low reserve tranche amount may increase or decrease depending on the change in deposit levels specified in the statutory formula. The exemption amount, however, may only increase or remain unchanged. An increase depends on an increase in deposit levels specified in the statutory formula. In the event of a decrease in those deposit levels, however, the exemption amount remains unchanged. Thus, as total deposits increase, the tranche and exemption adjustments have the effect of keeping the overall effective reserve ratio (the percentage of all depository institutions’ total transaction deposit balances held as reserves) approximately constant, as reflected in figure 1. 
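The asymmetric annual adjustments just described can be sketched as below. The 80 percent indexation factor reflects our reading of the statutory formulas and is exposed as a parameter in case it differs; what the source establishes is only the asymmetry itself (the tranche moves both ways, the exemption only up).

```python
def adjust_low_reserve_tranche(tranche, deposit_growth, factor=0.8):
    """Low reserve tranche: indexed up *or* down with the change in the
    deposit levels specified in the Monetary Control Act formula.
    deposit_growth is a fraction, e.g. 0.04 for a 4 percent increase."""
    return tranche * (1 + factor * deposit_growth)

def adjust_exemption_amount(exemption, deposit_growth, factor=0.8):
    """Exemption amount: ratchets upward only; if the relevant deposit
    levels fall, the exemption amount remains unchanged."""
    if deposit_growth <= 0:
        return exemption
    return exemption * (1 + factor * deposit_growth)
```

For example, a 5 percent fall in the relevant deposit levels would lower a $100 million tranche to $96 million under this sketch but leave the exemption amount untouched, which is the one-way ratchet the text describes.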
In particular, figure 1 shows that the exemption amount has remained fairly steady since 1982 while the tranche amount increased significantly between 2008 and 2016. The figure also shows that the trend for the overall effective reserve ratio of all depository institutions has remained relatively constant from 2004–2016. Annual tranche adjustments moderately affect the effective reserve ratio of the banking sector, and therefore can influence, in theory, the money supply. Over the last 10 years, the effective reserve ratio has been around 8 percent, suggesting that the formulas in the Monetary Control Act and the Garn-St Germain Act keep the adjustments relatively neutral in their effect on the money supply. According to Board of Governors officials, before 2008 when the Federal Reserve adjusted the supply of bank reserves to target the federal funds rate, a stable demand for reserves allowed the Federal Reserve to have greater control over the federal funds rate in the conduct of monetary policy.

The Regulation D six-transaction limit (on certain types of transfers and withdrawals) for savings deposit accounts helps implement reserve requirements. According to the Board of Governors, the transaction limit is critical for implementing reserve requirements because it allows the Board of Governors to distinguish between transaction (e.g., checking) accounts, which are subject to reserve requirements, and savings deposits, which are not subject to reserve requirements.

Reaffirmation of Transaction Limit

In 2009, the Board of Governors of the Federal Reserve System (Board of Governors) amended Regulation D’s transaction limit rule for savings deposits. The Board of Governors eliminated a “sublimit” on check and debit card transfers or withdrawals (previously, only three out of six transfers and withdrawals per month or statement cycle could be made by check, debit card, or similar order). 
By eliminating the sublimit, the Board of Governors included such transfers and withdrawals within the overall limit of six convenient transfers or withdrawals (preauthorized, automatic, or telephonic transactions that provide ease in making payments to third parties) from savings deposits per month or statement cycle. The Board of Governors thus clarified that any convenient transfer or withdrawal from a savings deposit must be limited to not more than six per month or statement cycle. In response to public comments requesting an increase of the overall limit, the Board of Governors affirmed that section 19 of the Federal Reserve Act requires imposition of reserve requirements on transaction accounts and not on other types of account. Accordingly, the Board of Governors must maintain the capacity to distinguish between transaction accounts and savings deposits. The distinction is based on convenience: the greater the number of convenient transfers and withdrawals permitted per month from a “savings deposit,” the greater the difficulty in distinguishing such an account from a transaction account. Thus, the Board of Governors determined it would neither increase the number of convenient transfers and withdrawals permitted from savings deposits per month nor eliminate numeric limits entirely (on online transfers in particular or all convenient transfers and withdrawals from savings deposits in general).

The Board of Governors has specified in Regulation D the manner of distinguishing between transaction accounts and savings deposits. Regulation D currently requires that for an account to be classified as a “savings deposit,” it must permit the depositor to make no more than six convenient transfers or withdrawals per month (or statement cycle of at least 4 weeks) from the account. According to the Board of Governors, the six-transaction limit originated from a mix of statutory and regulatory factors. 
Title II of the Monetary Control Act, the Depository Institutions Deregulation Act of 1980, created the Depository Institution Deregulation Committee to oversee the phase-out of limitations on interest rates previously applicable to various types of deposit accounts. In 1982, Congress passed the Garn-St Germain Act, which required the committee to authorize a new deposit account—the money market account. Specifically, the Garn-St Germain Act directed the committee to issue a regulation authorizing the money market account, which was intended to be a deposit account that was directly equivalent to and competitive with money market mutual fund accounts. The Garn-St Germain Act provided that money market accounts were not to be considered transaction accounts, even if such accounts permitted “up to three preauthorized or automatic transfers and three transfers to third parties” monthly. The committee interpreted this provision as permitting up to six preauthorized or automatic transfers to third parties per month or statement cycle from the money market account, not more than three of which could be made by check. This was sometimes referred to as the “six-three limit” on payments and transfers from such accounts, according to the Board of Governors. Based on the committee’s interpretation, in Regulation D, the Board of Governors determined that a money market account would not be subject to transaction account reserve requirements if the account did not permit more than six convenient (i.e., preauthorized, automatic, or telephonic) transfers and withdrawals per month, where not more than three of such transfers or withdrawals could be made by check or draft drawn by the depositor. Before 1982, there was no monthly numeric limit on transfers and withdrawals for savings deposits though such accounts were included in the definition of “transaction accounts,” which were subject to reserve requirements.
The committee ceased to exist after 1986; however, the Board of Governors subsequently retained the “six-three” transaction limit by incorporating it into the definition of “savings deposits” in Regulation D, in part, because the limit was a feature of an account type authorized by Congress that was still in use by depository institutions. In 2009, the Board of Governors amended Regulation D to eliminate the “three” component from the “six-three limit” to make all convenient payments and transfers from savings deposits, including those made by check, debit card, or similar order, subject to the monthly limit of six. According to the Board of Governors, Regulation D currently distinguishes between types of transfers and withdrawals from savings deposits for purposes of the six-transaction limit (see table 3). The types of transfers and withdrawals that are subject to the six-transaction limit are those that are convenient, such as preauthorized, automatic, or telephonic transfers or withdrawals (including by fax, email, or through an Internet banking service) by check or debit card. Other types of transfers and withdrawals that are less convenient, such as those made in person or at an ATM, may be made in an unlimited number, according to the Board of Governors. The Board of Governors notes that the rationale for limiting the number of convenient transactions from savings deposits is to ensure that such accounts are not used as transaction accounts without their balances being subject to the reserve requirements for transaction accounts. The Board of Governors’ criteria for distinguishing between transaction accounts and savings deposits under Regulation D are based on the ease with which the depositor may make transfers (payments to third parties) or withdrawals (payments directly to the depositor) from the account.
The critical element of the rationale is the nature of the instruction for the transaction (that is, the instruction directing the third party payment or transfer to be made), according to Board of Governors officials. They noted that the more convenient the manner of instructing withdrawals or transfers to be made from an account is—such as preauthorized or automatic transfers—the more likely it is that the account will be used for making payments or transfers to third parties rather than for holding savings. Therefore, Regulation D limits the number of certain convenient types of transfers or withdrawals that an account holder may make in a single month or statement cycle from an account if that account is to be classified as a savings deposit and exempt from reserve requirements. A retail banking industry association and representatives from depository institutions have noted that there may be a disconnect between this rationale and its application—because automated teller machine (ATM) transactions may be viewed as convenient, but they are unlimited. According to the Board of Governors, Regulation D does not limit ATM transactions partly because ATMs formerly were considered to be “branches” of a depository institution. Therefore, appearing at an ATM was substantially similar to appearing in person at a brick and mortar branch of the depository institution. In addition, a withdrawal from an ATM is generally considered to be a payment or transfer directly to the depositor, rather than a payment to a third party as contemplated under the statutory definition of transaction account. Furthermore, an ATM withdrawal requires the account holder to appear physically at the ATM location. Therefore, the Board of Governors does not deem an ATM transaction to be a convenient method for making third-party payments, and transfers and withdrawals initiated at an ATM have not been subject to the numeric limits on transfers and withdrawals from savings deposits. 
Implementing and enforcing transaction account reserve requirements (and therefore the distinction between reservable transaction accounts and nonreservable savings deposits through the “savings deposit” definition in Regulation D) imposes administrative responsibilities for depository institutions, the Board of Governors, and Reserve Banks and can affect the customers of depository institutions. As previously noted, not all deposit balances are subject to reserve requirements; therefore, not all depository institutions are required by Regulation D to maintain reserves. For example, depository institutions with less than the exemption amount in net transaction deposits (which do exist in the U.S. banking system) are subject to a reserve requirement ratio of zero percent on those accounts. However, all depository institutions are required to enforce the six-transaction limit on convenient transfers and withdrawals for all accounts that they classify as savings deposits and not transaction accounts. Figure 2 outlines how depository institutions may implement the Regulation D six-transaction limit. To comply with Regulation D’s definition of “savings deposit”: Depository institutions must ensure that no more than six convenient transfers and withdrawals are made each month or statement cycle from accounts classified as savings deposits. Institutions must either prevent transfers that are in excess of the limit or monitor the accounts for compliance with the limit and contact customers who violate the limit on a more than occasional basis. Depository institutions must close the savings account and place the funds in another account that the deposit customer is eligible to maintain or take away the transfer and draft capabilities of the account, if customers continue to make more than six transfers and withdrawals per month or statement cycle from the account after they have been contacted by the depository institution. 
However, Regulation D neither requires depository institutions to charge customers a fee for violating the transaction limit nor prohibits institutions from charging a fee for such violations. The Federal Reserve needs accurate information on deposit balances to calculate reserve requirements, and it requires many depository institutions to submit deposit reports. (Currently, reserve requirements are calculated as a ratio of reservable liabilities). Generally, the Federal Reserve Act authorizes the Board of Governors to require reports of liabilities and assets, and the Federal Reserve requires institutions to submit a Report of Transaction Accounts, Other Deposits and Vault Cash (Form FR 2900) to gather data for the calculation of reserve requirements and to construct the monetary aggregates (e.g., M1, M2, and M3). However, the Board of Governors has reduced the reporting burden on depository institutions based on size, such that depository institutions may be required to report deposit balances annually, quarterly, weekly, or not at all. Depository institutions that have total deposits less than or equal to the exemption amount ($15.2 million in 2016) are not required to submit deposit reports (referred to as “nonreporters”). Each year, the Federal Reserve determines depository institutions’ reporting categories and the Reserve Banks inform institutions of their appropriate reporting categories. In addition to the Board of Governors, other regulators enforce depository institutions’ compliance with Regulation D’s requirements for those institutions subject to their regulatory jurisdiction. OCC, FDIC, and NCUA are responsible for supervising depository institutions’ compliance with federal laws and regulations, including Regulation D. However, officials from these regulators told us that because they employ a risk-based approach to oversight, they do not regularly conduct Regulation D-specific examinations.
For example, OCC officials told us that they examine Regulation D compliance when changes in the regulation or bank policy occur, when emerging risks in the industry have been identified, or when customer complaints about the regulation have increased. Furthermore, CFPB does not examine compliance with Regulation D specifically, but consumers may bring related complaints to CFPB’s attention through its consumer complaint database. For example, a consumer might submit a complaint to CFPB about a bounced check resulting from a transfer from a savings deposit that was denied due to the Regulation D six-transaction limit for convenient transfers or withdrawals. Based on 2015 call report data, we identified 12,135 depository institutions that were subject to Regulation D’s requirements because they offered transaction or savings deposits, or both, to the general public. Overall, 53 percent of the 12,135 depository institutions (banks and credit unions) were required to satisfy reserve requirements because their level of net transaction account balances exceeded the then-applicable exemption amount of $14.5 million in 2015. Forty-one percent of the depository institutions had transaction accounts reservable at the 3 percent reserves ratio because their level of net transaction accounts ranged from $14.5 million to $103.6 million (the low reserve tranche in 2015). These institutions did not have to satisfy reserve requirements on amounts up to $14.5 million because those amounts were subject to a zero percent reserve requirement. Twelve percent of the depository institutions had transaction accounts reservable at the 10 percent ratio because their level of net transaction accounts was greater than $103.6 million in 2015.
These institutions did not have to satisfy reserve requirements on amounts up to $14.5 million because those amounts were subject to a zero percent reserve requirement ratio, and they were subject to a 3 percent reserve requirement ratio on amounts greater than $14.5 million up to $103.6 million. Reserve requirements, and therefore obligations under Regulation D, affected a larger percentage of banks than credit unions in 2015. Eighty-six percent of banks were required to satisfy reserve requirements in contrast to 23 percent of credit unions. Thus, more than three-quarters of credit unions were exempt from reserve requirements because their net transaction accounts were less than the exemption amount. Furthermore, the majority of transaction, savings, and money market account balances were concentrated among banks. Most banks (65 percent) had net transaction account balances reservable at the 3 percent ratio (which applied to amounts greater than $14.5 million and up to $103.6 million in 2015). A smaller proportion of banks (20 percent) held the majority of total transaction account balances ($1.68 trillion), which were reservable at the 10 percent ratio (which applied to amounts greater than $103.6 million in 2015). Of the 23 percent of credit unions that were required to satisfy reserve requirements, the majority (79 percent) had net transaction deposits reservable at the 3 percent ratio. Depository institutions may implement Regulation D requirements using one or more of the following approaches: Satisfy reserve requirements on transaction accounts only and enforce the transaction limit on savings deposits (by requiring customers to adhere to the limit). Depository institutions offer both transaction accounts (which are subject to reserve requirements) and accounts that they classify as savings deposits (which are not subject to reserve requirements).
For accounts classified as savings deposits, they ensure that customers adhere to the monthly six-transaction limit on convenient transfers and withdrawals. Depository institutions inform customers of the limit and, as previously discussed, they must prevent convenient transfers and withdrawals in excess of the limit from savings deposits or must monitor such transfers on an ex post (i.e., after the fact) basis and contact those customers who exceed the limit on a more than occasional basis. Reduce transaction account balances subject to reserve requirements and enforce the transaction limit on savings deposits (by requiring customers to adhere to the limit). Depository institutions can offer both transaction accounts and savings deposits and employ methods—such as transferring balances from transaction accounts (subject to reserve requirements) to savings deposits (not subject to reserve requirements)—to reduce balances subject to reserve requirements. They also can offer only savings deposits on which they would not be required to satisfy transaction account reserve requirements. Under this approach, depository institutions would have to enforce the transaction limit for savings deposits, ensuring that customers adhere to the monthly six-transaction limit on convenient transfers and withdrawals. Satisfy reserve requirements on balances in both transaction accounts and savings deposits that are classified as transaction accounts and avoid enforcing the transaction limit for savings deposits. Depository institutions can offer both transaction accounts and accounts called “savings deposits” to customers but classify balances in both types of accounts as transaction accounts. Transaction accounts are not subject to the transaction limit on convenient transfers and withdrawals.
Therefore, institutions would not need to enforce the transaction limit on balances in accounts marketed to customers as savings deposits but are classified as transaction deposits in their deposit reports. For deposits classified as transaction accounts, depository institutions must meet applicable transaction account reserve requirements. Although the consideration and determination of deposit product offerings are complex and driven more by factors other than Regulation D’s requirements, depository institutions must balance the administrative and opportunity costs of maintaining reserves against their transaction accounts with the operational costs (and benefit) of enforcing the six-transaction limit on convenient transfers and withdrawals for savings deposits. The administrative costs of the deposit reporting that supports the implementation of reserve requirements and the publication of measures of the money supply (monetary aggregates) include preparing and submitting reports on deposit balances weekly, quarterly, or annually to the Board of Governors, and these reporting categories can change annually for institutions. The opportunity cost of satisfying reserve requirements varies based on the profitability of the alternative uses of the funds, and is essentially an implicit tax on depository institutions—often referred to as a “reserves tax.” The tax is equal to the difference between the interest paid on balances maintained at Reserve Banks and the interest those institutions could have earned on alternative investments (such as making loans to customers and collecting interest) in the absence of reserve requirements. Board of Governors officials noted that reducing this opportunity cost was one of the reasons that the Federal Reserve sought explicit authority from Congress to pay interest on reserves. This authority was granted in 2006 and made effective in 2008.
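The tiered calculation described earlier—a zero percent ratio up to the exemption amount, 3 percent up to the low reserve tranche, and 10 percent beyond it—can be illustrated with a minimal sketch. The dollar thresholds are the 2015 figures cited above; the function name is illustrative and not drawn from any Federal Reserve system.

```python
# Sketch of the tiered transaction account reserve requirement
# calculation, using the 2015 thresholds cited in this report: a zero
# percent ratio up to the $14.5 million exemption amount, 3 percent up
# to the $103.6 million low reserve tranche, and 10 percent beyond it.

EXEMPTION_AMOUNT = 14_500_000      # 2015 exemption amount, in dollars
LOW_RESERVE_TRANCHE = 103_600_000  # 2015 low reserve tranche, in dollars

def required_reserves(net_transaction_accounts: float) -> float:
    """Return the reserve requirement, in dollars, for a given level
    of net transaction account balances."""
    balance = net_transaction_accounts
    reserves = 0.0  # zero percent ratio on amounts up to the exemption amount
    # 3 percent ratio on amounts above the exemption amount,
    # up to the low reserve tranche.
    if balance > EXEMPTION_AMOUNT:
        reserves += 0.03 * (min(balance, LOW_RESERVE_TRANCHE) - EXEMPTION_AMOUNT)
    # 10 percent ratio on amounts above the low reserve tranche.
    if balance > LOW_RESERVE_TRANCHE:
        reserves += 0.10 * (balance - LOW_RESERVE_TRANCHE)
    return reserves
```

Under this sketch, an institution with $50 million in net transaction accounts would owe 3 percent of $35.5 million, or about $1.07 million, while an institution below the exemption amount would owe nothing.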
According to representatives from selected depository institutions we interviewed, the operational costs associated with enforcing the transaction limit for savings deposits include acquiring and maintaining monitoring and tracking systems, training staff, and educating customers about the limit. According to Board of Governors officials, Regulation D was amended in 2012 with the goal of reducing administrative and operational costs for depository institutions and the Federal Reserve Banks. Finally, enforcing the transaction limit has the benefit of reducing the amount of liabilities subject to transaction account reserve requirements and, therefore, reducing an institution’s reserve requirement. Most depository institutions used the approach of enforcing the six-transaction limit to implement Regulation D’s requirements, and enforcement methods varied among institutions. Based on our 2015 survey, we estimate that 74 percent of depository institutions we identified as subject to Regulation D’s requirements implemented the regulation’s requirements by enforcing the six-transaction limit on convenient transfers and withdrawals from all savings deposits that they offered (one of the three approaches previously discussed). Depository institutions required to neither submit deposit reports nor satisfy reserve requirements (nonreporters) were the least likely to implement Regulation D’s requirements by enforcing the transaction limit while those subject to the 10 percent reserve requirement ratio on transaction accounts were the most likely to enforce the limit. Finally, banks were more likely than credit unions to enforce the transaction limit to implement Regulation D’s requirements, with an estimate of 89 percent versus an estimate of 59 percent. See table 4 for responses to selected survey questions (which are also discussed later), and see appendix II for results for all of our closed-ended survey questions.
For accounts classified as savings deposits, Regulation D requires depository institutions either to prevent convenient withdrawals and transfers that exceed the six-transaction limit or to adopt procedures to monitor transactions ex post (i.e., after the fact) and contact customers whose accounts exceed the transaction limit on a more than occasional basis. Under the CFPB’s Regulation DD, Truth in Savings, depository institutions are required to inform customers of the transaction limit. To examine the administrative and cost burdens associated with implementing these requirements, we surveyed depository institutions. Almost all depository institutions reported on our survey that they inform customers of the transaction limit by providing a hard copy or online disclosure about the Regulation D transaction limit before a savings deposit account is opened in person or online. For accounts opened in person or online, an estimated 19 percent of depository institutions mail Regulation D disclosures to customers. Depository institutions used different methods to monitor the transaction limit. We estimate that slightly more than half of all depository institutions monitored transactions through fully automated methods using a software program. Banks and credit unions differed in how they monitored accounts to enforce the transaction limit. We estimate that 41 percent of banks monitored accounts through a fully automated method using a software program while 53 percent used an automated method for some reporting and a manual method for reviewing accounts/transactions. Conversely, we estimate that 72 percent of credit unions monitored accounts through a fully automated method using a software program and 14 percent used an automated method for some reporting and a manual method for reviewing accounts/transactions. 
Depository institutions that implement Regulation D’s requirements by monitoring transactions must take one of two actions if customers make more than six transfers or withdrawals per month or statement cycle on a more than occasional basis: (1) close the savings deposit account and place the funds in another account or (2) take away the transfer and draft capabilities from the savings deposit account. Institutions may also reclassify a savings deposit account as a transaction account (the effective equivalent of closing the savings deposit account and placing the funds in a transaction account). Regulation D requires depository institutions that use ex post monitoring rather than preventing excess transfers and withdrawals to adopt procedures to enforce the transaction limit. Institutions also told us that, in practice, they may close the account and send the customer a check for the funds remaining in the account (another form of account closure), or maintain the same account but indefinitely reclassify the account as a transaction account without a change in account number (account conversion). Regulation D does not prohibit employing other mechanisms to discourage transactions in excess of the transaction limit. For example, some survey respondents that used the approach of enforcing the transaction limit indicated that they charged fees as a mechanism to discourage additional transfers. That is, if a customer makes six transfers or withdrawals from savings deposits in a month or statement cycle, the institution allows additional transfers and withdrawals but charges a fee and may temporarily reclassify the savings deposit account as a transaction account (realizing a commensurate temporary increase in its reserve requirement) to allow more than six transactions in the month or statement cycle (see fig. 2).
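The ex post monitoring described above—count convenient transfers per statement cycle and flag accounts that exceed the limit—might be sketched as follows. The transaction categories and data layout are illustrative assumptions, not drawn from any particular institution's system.

```python
# Hedged sketch of ex post monitoring for the Regulation D
# six-transaction limit: count "convenient" transfers and withdrawals
# per statement cycle and flag accounts that exceed the limit.
# The transfer-type labels here are illustrative.

from collections import Counter

TRANSACTION_LIMIT = 6
# Transfer types Regulation D treats as convenient (preauthorized,
# automatic, telephonic, check, or debit card); ATM and in-person
# withdrawals are unlimited and therefore excluded from the count.
CONVENIENT_TYPES = {"preauthorized", "automatic", "telephonic", "check", "debit_card"}

def flag_savings_accounts(transactions):
    """transactions: iterable of (account_id, transfer_type) tuples
    for one month or statement cycle. Returns the set of account ids
    that exceeded the six-transaction limit."""
    counts = Counter(
        account_id
        for account_id, transfer_type in transactions
        if transfer_type in CONVENIENT_TYPES
    )
    return {acct for acct, n in counts.items() if n > TRANSACTION_LIMIT}
```

Under this sketch, an account with seven debit card transfers in a cycle would be flagged, while one with ten ATM withdrawals would not; what the institution does with flagged accounts (contact, conversion, or closure) is the separate enforcement step described above.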
According to our survey results, the majority of depository institutions charged fees when customers exceeded six or more transactions in savings deposits (savings accounts or money market accounts). Specifically, for savings accounts, we estimate that 60 percent of institutions charged fees after the sixth transaction and 7 percent charged fees after the seventh transaction. For money market accounts, an estimate of 83 percent of institutions charged fees after the sixth transaction and an estimate of 2 percent charged fees after the seventh transaction. In addition, broken out by institution type, we estimate that 63 percent of banks charged fees after the sixth transaction in savings accounts compared with an estimate of 55 percent of credit unions. We estimate that 8 percent of banks charged fees after the seventh transaction in savings accounts compared with an estimate of 4 percent of credit unions. For money market accounts, we estimate that 90 percent of banks charged fees after the sixth transaction compared with an estimate of 63 percent of credit unions. However, nearly equal percentages of banks and credit unions charged fees after the seventh transaction for money market accounts. Based on our survey results, the estimated median fee amounts reported by all depository institutions were about $3 for savings accounts and $5 for money market accounts. Banks tended to report lower fees for savings accounts, with an estimated median of about $2 for banks and an estimated median of about $4 for credit unions. The median fee for money market accounts was about $5 for both banks and credit unions. In addition, we estimate that about a quarter of all depository institutions prohibited the seventh transaction when the transaction limit was reached in a month or statement cycle.
However, more credit unions than banks (an estimated 54 percent versus an estimated 6 percent for savings accounts and an estimated 55 percent versus an estimated 4 percent for money market accounts) prohibited the seventh transaction when the six-transaction limit was reached. Depository institutions also reported on the change in costs associated with monitoring accounts to enforce the Regulation D transaction limit, such as establishing and maintaining information technology and creating disclosure form letters (to comply with the CFPB’s Regulation DD requirements). According to our survey results, most depository institutions reported that costs associated with monitoring accounts for compliance with the transaction limit stayed the same over the last two years. However, we estimate that 20 percent of depository institutions had their costs increase over the same time period. For those depository institutions that indicated that their costs increased, reasons commonly cited included institutional growth, information technology and software costs, mailing costs (related to complying with the CFPB’s Regulation DD requirements), and staff time to review automated compliance reports. Depository institutions reported that steps taken to enforce the transaction limit contribute to operational burden or challenges. (Institutions have noted publicly that operational burden is created by the need to monitor accounts as required by Regulation D, create and mail disclosure forms required by the CFPB’s Regulation DD, and inform customers of the transaction limit as required by the CFPB’s Regulation DD.)
For those depository institutions we surveyed that indicated there were challenges associated with monitoring and enforcing the transaction limit, the challenges most cited were getting customers to read their Regulation D notices (82 percent), operational challenges such as creating forms and converting and closing accounts (68 percent), and addressing customer complaints related to the six-transaction limit (64 percent). About equal percentages (and numbers) of credit unions and banks (84 percent versus 80 percent) cited getting customers to read Regulation D notices as a challenge. However, more banks than credit unions (78 percent versus 55 percent) reported operational challenges related to creating forms and converting and closing accounts, and more credit unions than banks (76 percent versus 55 percent) cited addressing customer complaints as a challenge.
Retail Sweeps to Reduce Transaction Accounts Subject to Reserve Requirements or to Allow Unlimited Access to Savings Deposits
A retail sweeps (also known as deposit reclassification) program is an arrangement in which a depository institution divides a customer’s transaction account into two legally distinct subaccounts—a transaction account subaccount and a savings deposit subaccount—for deposit reporting and reserve requirements purposes. The program does not affect the customer’s use of the account. The depository institution automatically transfers funds between a customer’s transaction (i.e., checking) subaccount and savings deposit subaccount so that there are not more than six transfers per month from the savings deposit subaccount to the transaction account subaccount. This is typically done to reduce transaction account reserve requirements, or in some cases to exempt customers from the six-transaction limit and allow unlimited access to savings deposits.
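The subaccount mechanics in the sidebar can be illustrated with a minimal sketch: excess funds sit in a nonreservable savings subaccount, sweeps back into the transaction subaccount are counted, and the sixth sweep moves the entire savings balance so no further transfers from savings occur that cycle. The target balance, class structure, and method names are assumptions for illustration only.

```python
# Minimal sketch of a retail sweeps (deposit reclassification) program:
# a customer's balance is split between a transaction subaccount
# (reservable) and a savings subaccount (nonreservable). Transfers out
# of the savings subaccount are capped at six per month; on the sixth
# transfer, the whole savings balance is swept into the transaction
# subaccount for the rest of the cycle. Target balance is illustrative.

class SweepAccount:
    def __init__(self, balance: float, checking_target: float = 1_000.0):
        self.checking_target = checking_target
        self.checking = min(balance, checking_target)
        self.savings = balance - self.checking  # excess held nonreservable
        self.savings_transfers = 0              # per month/statement cycle

    def pay(self, amount: float) -> None:
        """Debit a customer payment from the transaction subaccount,
        sweeping funds in from savings if needed. The customer sees a
        single account; the split exists only for reserves purposes."""
        if amount > self.checking:
            self._sweep_from_savings(amount - self.checking)
        self.checking -= amount

    def _sweep_from_savings(self, needed: float) -> None:
        if self.savings <= 0:
            return
        self.savings_transfers += 1
        if self.savings_transfers >= 6:
            # Sixth transfer: move the entire savings balance so no
            # further transfers from savings occur this cycle.
            moved = self.savings
        else:
            moved = min(self.savings, needed)
        self.savings -= moved
        self.checking += moved
```

Because the institution, not the customer, initiates every transfer from the savings subaccount, the six-per-month cap is enforced by construction, which is why no ex post monitoring of customer behavior is needed under this approach.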
Based on our survey of depository institutions, we estimate that 9 percent of depository institutions reduced transaction account reserve requirements by using a retail sweeps program to automatically transfer balances from transaction accounts to savings deposits (one of the approaches described previously). See the sidebar for further details on retail sweeps programs. As discussed previously, depository institutions’ deposit liabilities maintained as vault cash or in accounts at Reserve Banks to satisfy reserve requirements cannot be used for other purposes, such as loans or securities holdings, to generate higher returns than those obtained through the Federal Reserve’s payment of interest on reserve balances. Before 2008, reserve requirements led institutions to expend resources and efforts to reduce transaction account balances subject to reserve requirements. According to some estimates, the cumulative amounts swept within retail sweeps programs grew from $5 billion in 1994 to $800 billion in 2008. In October 2008, the Federal Reserve Act amendment authorizing the payment of interest on reserve balances became effective. Currently, the Federal Reserve’s payment of interest on reserve balances has significantly reduced the reserves tax associated with reserve requirements and, therefore, the incentives of depository institutions to engage in activities to minimize reserve requirements. Broken out by type of institution, we estimate that 15 percent of banks and 2 percent of credit unions employed the approach of reducing transaction account reserves. Of the institutions that used a retail sweeps program to reduce transaction account reserve requirements, about a fourth (27 percent) had transaction account balances subject to the 10 percent reserve requirement ratio. 
Reasons cited by individual respondents for employing this strategy (at the time of implementation) included a high interest rate environment and the amount of transaction account balances (for instance, balances grew into a higher reserve tranche). Prior to the payment of interest on reserves and the expansion of the total supply of reserves, depository institutions preferred to closely manage their reserve balances to their reserve requirement; therefore, having fewer deposit balances subject to reserve requirements reduced their reserve requirement and made more funds available for alternative investments, such as loan provision to customers. Because they shifted funds from transaction accounts to savings deposits to reduce balances in transaction accounts subject to reserve requirements, these institutions had to enforce the transfer and withdrawal limit for savings deposits (ensuring that the automatic transfers from savings deposits did not exceed six times per month). However, as described in the sidebar, the customer is not able to initiate transfers directly from savings deposits in retail sweeps programs. Therefore, for institutions that used a retail sweeps program, no ex post monitoring to enforce customers’ adherence to the six-transaction limit was required. A few depository institutions said that they satisfied reserve requirements for balances in transaction accounts and savings deposits to implement Regulation D requirements and avoid enforcing the transaction limit for savings deposits (one of the aforementioned approaches). As previously discussed, institutions can choose to market accounts as savings deposits to customers but classify them as transaction accounts for deposit reporting and reserve requirements purposes. Because transaction accounts are subject to reserve requirements, in effect, this approach results in institutions satisfying reserve requirements on both transaction accounts and savings deposits. 
This also means that institutions would not have to enforce the transaction limit otherwise necessary to classify an account as a “savings deposit” and can permit customers to make more than six convenient transfers and withdrawals from their savings deposits. A slight variation to the approach that allows customers unlimited access to their savings deposits is the use of a retail sweeps program whereby institutions would satisfy reserve requirements for balances in savings deposits only when customers exceed the transaction limit. To use a retail sweeps program to eliminate the need to enforce the transaction limit, an institution transfers some or all of its customers’ balances from savings deposits (no reserves required) to transaction accounts (reserves required) once six transfers or withdrawals are made from the savings deposits. The reasons depository institutions cited for maintaining reserves on savings deposits (i.e., classifying both transaction accounts and savings deposits as transaction accounts) to eliminate the need to enforce the transaction limit included: (1) net transaction account balances were low enough that holding additional reserves did not increase the institution’s required reserve ratio, (2) customer feedback (questions or concerns about the transaction limit), and (3) the Federal Reserve’s payment of interest on reserves. Even with the payment of interest on reserve balances, administrative burdens associated with reserve requirements remain for depository institutions. As mentioned previously, depository institutions face administrative burdens associated with classifying deposit liabilities subject to reserve requirements, calculating and reporting deposit levels, and ensuring they maintain enough vault cash or balances at Federal Reserve Banks to meet their reserve requirements. 
In addition, because reserve requirements only apply to depository institutions and do not apply to nondepository financial institutions, a potential competitive disadvantage for depository institutions exists. This can distort the credit allocation process by pushing financial resources away from the banking system. Based on our survey results, we estimate that relatively few customers exceeded the Regulation D six-transaction limit. Additionally, relatively few customers had questions or concerns about the limit, consistent with the findings from our review of regulatory data. This low occurrence of customers’ savings deposits exceeding the transaction limit may be due, in part, to depository institutions’ efforts to inform customers about the transaction limit. In addition, the questions and concerns depository institutions received from customers were generally about lack of understanding that a transaction limit applied to their accounts. Based on our survey results, relatively few customers appeared to have exceeded the six-transaction limit in their savings deposits. Based on responses to our survey, we estimate the following: For the majority of depository institutions (59 percent) that enforced the transaction limit, less than 1 percent of their customers’ savings deposits—combined savings and money market accounts—exceeded the transaction limit in a month or statement cycle (between December 2014 and March 2016). This was the case for two-thirds (66 percent) of banks and about half of credit unions. Twenty-one percent of depository institutions said that 1–5 percent of customers’ savings deposits exceeded the transaction limit, which was consistent across banks and credit unions: for 24 percent of banks and 18 percent of credit unions, 1–5 percent of customers’ savings deposits exceeded the transaction limit.
Few (1 percent) depository institutions indicated that more than 10 percent of their customers’ savings deposits exceeded the transaction limit in a month or during a statement cycle. Based on our survey results and interviews, some depository institutions took additional steps on their own, beyond those required by Regulation D, to help customers stay informed and avoid exceeding the transaction limit. For example, we estimate that 7 percent of depository institutions’ staff made customer service calls after an account was opened to advise customers of the limit and answer questions. Ten percent of institutions notified customers of the number of transactions made before their accounts reached the transaction limit. Methods that individual institutions reported using to notify customers before they reached the limit included mailing notification letters, texting alerts, calling customers, and providing ATM alerts. In addition, representatives from depository institutions we interviewed said that they notified customers by labeling applicable transactions as a Regulation D transaction online (viewable as customers’ account activity). Representatives from one depository institution we interviewed also told us that they temporarily classified savings deposits as transaction accounts (for reserves purposes) to permit customers to make unlimited transfers and withdrawals from such accounts for a limited time during tax season. Based on our survey, we estimate that most depository institutions received few customer questions or concerns related to the Regulation D transaction limit. Specifically, Regulation D-related questions or concerns represented less than 1 percent of all questions or concerns about deposit accounts received from customers. This low rate of customer questions or concerns was more common among banks than credit unions, with estimates of 72 percent versus 55 percent, respectively.
For the depository institutions that indicated less than 1 percent of customer questions or concerns were related to Regulation D, the complaints were generally about lack of understanding about the types of transactions subject to the limit (26 percent), lack of understanding that a transaction limit applied to their accounts (26 percent), and fees charged (12 percent). In addition, we estimate that for a relatively sizeable minority of depository institutions (22 percent), 1–10 percent of questions or concerns they received were related to Regulation D. For these institutions, the Regulation D-related questions or concerns were generally about customers’ lack of understanding that a transaction limit applied to their accounts (36 percent) and the types of transactions subject to the limit (28 percent). Generally, depository institutions perceived the burden that Regulation D’s requirements placed on customers to be minimal based on the feedback they received from customers. As previously discussed, depository institutions must close or convert savings deposits (to transaction accounts) or remove transfer or withdrawal capabilities from savings deposits for customers who continue to violate the six-transaction limit after having been contacted by the depository institution for exceeding the transaction limit on a more than occasional basis. Based on our survey results, we estimate that, from December 2014 to February 2016, around 35 percent of depository institutions received customer questions or concerns about account closures (closing accounts and transferring the funds into a new transaction account or sending a check to the customer) or denied transactions. Account closures could result in new account numbers (which can interrupt how customers manage their accounts), and denied transactions may cause inconvenience for customers—both of which could lead to customers expressing questions or concerns related to the Regulation D transaction limit.
We estimate that the depository institutions that received customer questions or concerns about account closures or denied transactions generally did not convert customers’ savings deposits when they more than occasionally exceeded the transaction limit. The institutions that did not receive customer questions or concerns about account closures or denied transactions may have converted accounts. Overall, we estimate that about 30 percent of depository institutions converted savings accounts, and 47 percent converted money market accounts when a customer more than occasionally exceeded the transaction limit. Those institutions that converted accounts may have matched customer needs with appropriate deposit account products and thus may have removed a basis for questions or concerns. Findings from other sources about the effect of Regulation D’s transaction limit on customers were consistent with our survey results. We reviewed complaint data collected by the Board of Governors, CFPB, FDIC, NCUA, and OCC. The regulators’ data indicated that less than 0.5 percent of all deposit product complaints pertained to Regulation D. In the case of NCUA, we did not find any complaints collected that pertained to Regulation D. Complaint data from some of the regulators also indicated that the Regulation D-related complaints generally were about fees charged or a lack of understanding about the transaction limit. In one instance, representatives from a depository institution we interviewed also told us that a change in policy (from classifying savings deposits as transaction accounts and maintaining reserves against them to avoid enforcing the transaction limit to classifying those accounts as savings deposits and enforcing the transaction limit) caused confusion among their customers and prompted complaints. 
In addition, representatives from three trade associations for banks and credit unions told us that the feedback they have received from their members about Regulation D was generally about customers’ confusion and lack of understanding of the regulation’s requirements. Specifically, they said that customers of banks and credit unions do not understand why there is a limit on the number and types of transactions that can be made from their savings deposits. Internationally, many central banks have taken steps to reduce their reliance on reserve requirements, including eliminating them completely in some cases. Many of the approaches that facilitate monetary policy implementation in an environment of low or no reserve requirements have been employed by central banks in other countries for a number of years. While the ability to extend these experiences to the United States context is unclear, they provide examples of monetary policy implementation frameworks that do not involve mandatory reserve requirements. However, in reducing reserve requirements to zero, a number of potential operational and technical issues emerge that can complicate the conduct of monetary policy. The Federal Reserve has acknowledged the costs and burden associated with reserve requirements and has evaluated, and continues to evaluate, various monetary policy implementation frameworks. Using this work and the experiences of foreign central banks, we outline various reserve requirement frameworks for illustrative purposes, including those where reserve requirements are nonexistent. GAO’s presentation and discussion of the various frameworks should not be interpreted as a judgment or policy position on monetary policy implementation or related decisions in the United States, where, like many other countries, reserve requirements remain an important monetary policy tool.
Due, in part, to concerns about the cost, burdens, and market distortions associated with their use, there has been a decline in the level and use of reserve requirements globally. For example, while internationally most central banks still impose reserve requirements, the requirements have been reduced over the last several decades in some high- and moderate-income countries. As a result, several central banks now operate monetary policy in an environment in which reserve requirements are zero or so low that they do not constrain the behavior (lending or investment activity) of depository institutions. A 2010 International Monetary Fund (IMF) survey found that 9 of 121 central banks did not impose reserve requirements on financial institutions. Other developed and emerging countries have operated with a reserve requirement imposed on all deposits, which eliminates the need for measures like transaction limits on certain kinds of transfers and withdrawals from certain deposit liabilities to distinguish between reservable and nonreservable deposits. As mentioned previously, the Federal Reserve Act authorizes the Board of Governors to impose reserve requirements on a narrow base of deposit liabilities and does not authorize imposing reserve requirements on savings deposits. Among central banks that operate with a range of reserve requirements like the Federal Reserve, some have requirements that appear relatively small. For example, as table 5 illustrates, reserve requirement ratios in Japan range from 0.05 percent to 1.3 percent depending on the type of bank liability (deposit account). However, because the reservable base (types of accounts covered) differs across central banks, reserve requirement levels across central banks are not strictly comparable without accounting for these differences, which can be large.
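The effect of tiered reserve requirement ratios can be shown with a simple calculation. The sketch below mirrors the U.S. tranche structure referenced earlier (a 0 percent exemption amount, a 3 percent low reserve tranche, and 10 percent above it); the dollar thresholds are illustrative placeholders, since the actual amounts are adjusted annually.

```python
# Illustrative computation of a tiered (tranche-based) reserve
# requirement. The thresholds below are assumed placeholders; the
# actual exemption amount and low reserve tranche are set annually
# by the Board of Governors.

EXEMPTION = 15_000_000     # 0 percent ratio applies up to here (assumed)
LOW_TRANCHE = 100_000_000  # 3 percent ratio applies up to here (assumed)

def required_reserves(net_transaction_accounts):
    """Reserves owed on net transaction account balances; savings
    deposits carry a zero ratio and are excluded from the base."""
    b = net_transaction_accounts
    owed = 0.0
    owed += 0.00 * min(b, EXEMPTION)
    owed += 0.03 * max(min(b, LOW_TRANCHE) - EXEMPTION, 0)
    owed += 0.10 * max(b - LOW_TRANCHE, 0)
    return owed
```

Because savings deposits are excluded from the base, shifting balances out of transaction accounts, as in a retail sweeps program, directly lowers the amount owed.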
Although reserve requirement ratios in the United States have not changed since 1992, the Board of Governors has acknowledged the costs associated with the mandatory reserves framework, including the required differential treatment of transaction and other types of accounts. In fact, the Federal Reserve had advocated for legislation authorizing it to pay interest on reserves to eliminate some of the costs on depository institutions well before the passage of the Financial Services and Regulatory Relief Act of 2006 (FSRRA). As discussed earlier, FSRRA permitted the payment of interest on reserve balances and gave the Board of Governors greater flexibility in establishing reserve requirements, including the ability to reduce transaction account reserve requirement ratios to zero. Like the Federal Reserve, central banks in some other countries also remunerate reserve balances, including the Bank of England, the European Central Bank, Norges Bank (Norway), Bank of Canada, and Bank of Japan, and some have done so for many years. While central banks generally view reserve requirements as an important monetary policy tool, changes to reserve requirement ratios are seldom used in the day-to-day operations employed by a central bank to achieve monetary policy objectives. This is largely because direct manipulation of reserve requirements is a less efficient way for a central bank to influence economic activity relative to the other tools at its disposal, such as open market operations. Additionally, changes to reserve requirement ratios can significantly affect a depository institution’s operations. However, reserve requirements have played an important role in the implementation of monetary policy, principally facilitating various operating procedures and supporting open market operations by creating a predictable demand for reserves. 
Operating procedures refer to the day-to-day policy actions, or tactics, a central bank employs to achieve its long-run monetary policy objectives, including technical measures and administrative activities. The key component of an operating procedure is the central bank’s operating target, which can be a price (short-term interest rate) or a quantity (reserves) target. For example, when a central bank uses monetary policy tools to target interest rates, it is said to be employing an interest rate operating procedure. The importance of reserve requirements varies in relation to the operating procedure used to implement monetary policy objectives. For example, a reserve operating procedure—which most central banks moved away from decades ago—depends on having control over the supply of money. Reserve requirements played a role in controlling money supply growth when these procedures were in place (see table 6). When a reserves-based operating procedure is in place, control over the money supply suffers when reserve requirements are lowered. Most central banks in developed countries abandoned these approaches by the early 1980s for interest rate-based operating procedures—a common approach to monetary policy that involves targeting the level of a short-term interest rate to achieve policy objectives. For example, the Federal Reserve transitioned to targeting an interest rate (federal funds rate) by roughly 1984. The movement away from reserves operating procedures and the adoption of interest rate targeting approaches has allowed some central banks to reduce their reliance on reserve requirements and eliminate them in some instances. As shown in table 6, interest rate-based operating procedures provide central bankers with more flexibility in the use of reserve requirements.
In the interest rate-based operating procedure employed in the United States during 1984–2008 (single target approach), the Federal Reserve forecasted the demand for reserves, and then supplied the quantity of reserves necessary to clear the federal funds market at the interest rate target set by FOMC. For this type of operating procedure to be effective, the demand for central bank balances must be reasonably stable and predictable. Reserve requirements, by helping to establish a stable and predictable demand for such balances, ensure that changes in short-term interest rates are primarily the result of the central bank’s actions to achieve a policy rate within the interest rate target range. While the amount of central bank balances needed for payment and settlement purposes is determined by financial institutions’ needs and can vary significantly from day to day, reserve requirements that are substantial enough—relative to other factors influencing demand—ensure a stable and predictable level of demand for central bank balances. Therefore, reserve requirements facilitate interest-rate operating procedures and remain an important tool for establishing control over short-term interest rates. Other central banks use alternative interest-rate operating strategies to achieve monetary policy objectives, using frameworks that place less reliance on reserve requirements or render them nonessential (see table 6, corridor approach). Central banks generally have several tools—outside of reserve requirements—that they can use to influence conditions in interbank markets. Approaches that use these tools to establish upper and lower limits on a short-term interest rate (policy rate) are referred to as “channel” or “corridor” operating frameworks. In general, fluctuations in the targeted interest rate are bounded within a “corridor” by a rate of interest on central bank lending and a rate of interest on balances held at the central bank.
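The bounding logic of a corridor framework can be reduced to a short sketch. The rates below are hypothetical, and the sketch ignores frictions such as counterparties without access to the standing facilities, which, as discussed later in this section, can push market rates outside the corridor.

```python
# Stylized corridor: absent frictions, arbitrage against the central
# bank's standing facilities keeps the overnight market rate between
# the deposit rate (floor) and the lending rate (ceiling).

def corridor_rate(unconstrained_rate, deposit_rate, lending_rate):
    """No bank lends below what the central bank pays on balances,
    and none borrows in the market above what the central bank
    charges at its lending facility."""
    return min(max(unconstrained_rate, deposit_rate), lending_rate)
```

Setting the deposit rate at or near the target (and supplying abundant reserves) yields the "floor system" variant discussed later in this section, in which the floor itself pins the market rate.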
Open market operations can also be conducted as necessary to keep the market interest rate near the target. When properly constructed and calibrated to the institutional features and financial market structures of a given economy, this “corridor” operating procedure can allow a central bank to achieve an operating target rate without the cost and administrative burdens associated with reserve requirements. Hence, the corridor operating framework can be viewed as an alternative to the required maintenance of reserves. Currently, several central banks without reserve requirements, including the central banks of the United Kingdom, New Zealand, Canada, Sweden, and Australia, use corridor operating frameworks and have been able to achieve sufficient control over short-term interest rates. (See app. IV for additional details on select corridor operating frameworks used globally.) It is important to note that reserve requirements can be useful even in a corridor system as they can smooth daily shocks and be leveraged to limit volatility within the corridor without relying on daily open market operations. The European Central Bank operates a corridor operating framework with reserve requirements. Key aspects of the Federal Reserve’s current monetary policy approach—which leverages its ability to pay interest on reserves—are similar to the approaches used by many central banks that operate without reserve requirements. In 2008, because of the increase in the size of its balance sheet, the Federal Reserve had difficulty influencing the federal funds rate through the typical shifts in the supply of reserves, resulting in a revision in its operational framework for implementing monetary policy. The Federal Reserve’s response to the financial crisis resulted in a significant expansion of its balance sheet and hundreds of billions in excess balances in the banking system.
For example, at the end of 2008, balances maintained to satisfy reserve requirements amounted to $53.6 billion, while excess balances totaled $767.3 billion; by March 2016, excess balances had increased to $2.4 trillion. This limited the Federal Reserve’s ability to achieve the target set by FOMC solely by varying the supply of reserves. Given its expanded authority granted in 2008, the Federal Reserve was able to use the rate of interest on reserves, with support from other monetary policy tools, to help establish a lower bound on the federal funds rate to keep it closer to FOMC’s target. This approach allowed the Federal Reserve to expand its balance sheet to promote financial stability while maintaining control over the federal funds rate. While the Federal Reserve’s revised operational approach leverages the tools used by central banks operating without reserve requirements, it was designed specifically to enhance its control over the federal funds rate within a framework using reserve requirements. (See app. IV for more information on the current operational framework in the United States.) Even in cases in which the operational approach (such as a corridor operating approach) theoretically allows for it, implementing monetary policy without reserve requirements raises a number of structural, operational, and technical issues, as the following examples illustrate. Because of the unique features of the U.S. financial system, it is unclear that the practices used by other countries would translate to the United States. Specifically, the unique structure and size of U.S. financial markets—characterized by a large number of depository institutions and by important nondepository institutions that account for a large share of credit intermediation—would influence the structure of a corridor operating approach in the United States.
For example, while operating without reserve requirements would likely require the payment of interest on reserves to influence market behavior, some important nondepository participants in the federal funds market are not eligible to receive interest on reserves. Therefore, as discussed below, a corridor operating approach in the United States could be more challenging to achieve and would need to be structured to account for nondepository institution activity in the short-term funding markets. Moreover, eliminating reserve requirements and the infrastructure for their implementation would result in less flexibility for changing operating approaches (such as approaches that rely on scarce reserves and calibrated open market operations) should a central bank find such an approach desirable. Also, the federal funds market could be significantly affected and could experience illiquidity in some economic environments, which could complicate monetary policy approaches reliant on federal funds rate targeting. Eliminating reserve requirements could require more joint decision making between FOMC and the Board of Governors. FOMC determines the direction of monetary policy, but by statute, the Board of Governors has the authority to set the rate for interest on reserves, which is critical to the corridor operating approach. Federal Reserve staff noted that FOMC and the Board of Governors cooperate closely in all aspects of the conduct of monetary policy, which effectively eliminates any coordination concerns. A number of operational and technical issues would emerge in a zero reserve requirement environment, particularly if balances in the banking system were scarce or if excess balances consisted largely of payment and clearance balances (funds used to make payments to other banks).
When reserve requirements are set at zero or so low that they are nonbinding on the behavior of depository institutions, central bank balances will be maintained to make payments, settle and clear transactions, and meet liquidity needs. While central banks can still influence short-term interest rates in such an environment, balances held for these purposes are behaviorally different than balances maintained to satisfy reserve requirements and can be less stable and predictable and potentially less sensitive to interest rates. All else equal, in the absence of the payment of interest on reserves, these factors would result in greater volatility of short-term interest rates, underscoring the importance of the payment of interest on reserves in any framework with low or zero reserve requirements. Also, the experiences of foreign countries relying on settlement balances to implement monetary policy suggest the structure of the payment system can become more important in the conduct of monetary policy. The implementation of monetary policy in an environment of low or zero reserve requirements raises particular concerns about potential short- term interest rate volatility. A key objective in an interest rate targeting procedure—which includes corridor operating approaches—is to limit the volatility of the interest rate around the targeted level because of the cost associated with the signals it sends to market participants about the ability of the central bank to achieve its target. A relatively stable overnight interest rate (e.g., federal funds rate) enhances monetary policy transmission and transparency and maximizes a central bank’s influence on market expectations. 
If the experiences of other central banks are a guide, the absence of reserve requirements would likely involve implementing or retaining one or more of the following actions or structures to limit interest rate volatility and better ensure the effectiveness of a corridor operating framework:

Large central bank balances even in normal times. One way to limit interest rate volatility is to set the interest paid on reserves near or equal to the target rate and supply sufficient reserves to match demand at a level consistent with monetary policy objectives. This variant of the corridor operating framework is referred to as a “floor system” (or asymmetric corridor) and is used by the Reserve Bank of New Zealand and several others. Nevertheless, this may require a larger Federal Reserve balance sheet than might otherwise exist with the current mandatory reserve requirements framework.

Standing facilities serving depository and nondepository institutions. Having Federal Reserve facilities that are available to a broad array of financial institutions would be important for avoiding cases in which the federal funds rate can fluctuate outside of the established corridor. FSRRA (which amended the Federal Reserve Act) did not authorize the payment of interest on reserves to nondepository participants, such as the government-sponsored enterprises (GSEs), in the federal funds market. Because GSEs do not receive interest on their reserves held at Federal Reserve Banks, they are willing to lend their balances at rates below the interest rate paid on reserves. As a result, transactions by entities such as GSEs may have contributed to the federal funds rate falling below the interest rate on reserves, which in a corridor approach is designed to contain the federal funds rate from below. To help control the federal funds rate and keep it in the target range set by FOMC, the Federal Reserve established a supplementary tool—the overnight reverse repurchase agreement facility. The facility is available to an expanded list of counterparties, which limits the incentives for GSEs and other qualified institutions to lend funds at rates less than the Federal Reserve’s overnight reverse repurchase agreement rate. The interest rate offered in reverse repurchase agreements is set lower than the interest on reserves and helps establish a floor for the federal funds rate.

More frequent market intervention. Depending on the type of the corridor operating system in place and the tolerance for interest rate volatility, open market operations may be needed at higher frequencies and in larger magnitudes to steer the market interest rate to the targeted rate.

Policies to reduce any perceived stigma associated with borrowing from the discount window. Because of the perceived stigma associated with using the discount window, the federal funds rate might not be effectively bounded from above by the discount rate in a corridor system. The reluctance by institutions to use the discount window is driven by institutions’ fears that such borrowing would signal financial weakness to financial markets. As a result, banks might be willing to borrow elsewhere at rates outside of the targeted range, contributing to interest rate volatility.

International experiences with a wide range of reserve requirement frameworks and the challenges associated with them illustrate that consensus has not emerged on what constitutes the optimal role of reserve requirements in monetary policy implementation. The corridor operating frameworks employed in other countries theoretically would allow for the simplification or elimination of reserve requirements and therefore some of the cost associated with them, including a central bank’s administrative overhead. However, determining the ideal reserve requirement framework in the United States is ultimately a monetary policy decision and we take no position on the suitability of the frameworks presented for the United States.
For illustrative purposes only, we differentiated a few reserve requirement frameworks, focusing narrowly on the main cost and administrative burdens associated with each and the implications for monetary policy (see table 7). The examples are informed by the experiences of several foreign central banks and the analysis conducted by the Federal Reserve prior to the adoption of the payment of interest on reserve balances. In response to the authorities granted by FSRRA to pay interest on reserves and reduce transaction account reserve requirements to zero in 2008, the Federal Reserve began studying a broader range of options to achieve monetary policy objectives. FOMC meeting minutes of July 2015 indicated that the Federal Reserve would be evaluating potential long-run implementation frameworks for monetary policy and would be considering a wider range of issues than those considered in 2008. Key features of the alternative frameworks (B through E in table 7) are the remuneration of reserves (payment of interest on reserves) or the elimination of reserve requirements. While the ability to lower or eliminate reserve requirements ultimately depends on the monetary policy strategy pursued by the central bank, the alternatives include:

Reserve requirements on transaction accounts only, with payment of interest on reserves. Because reserves are required on transaction accounts only, this approach includes a reliance on distinguishing between accounts (e.g., with the six-transaction limit for savings deposits in the United States). This approach retains some of the costs and burdens associated with reserve requirements, but it reduces or eliminates the implicit reserves tax (through the payment of interest on balances held at Reserve Banks).

Lower reserve requirements on all accounts, with interest on reserves.
This approach introduces administrative simplicity by eliminating the need to distinguish between transaction accounts and savings deposits and reduces the implicit reserves tax (through the payment of interest on reserves) and the associated burdens on depository institutions and the central bank. The European Central Bank has used this framework with a corridor operating procedure for monetary policy implementation. In the United States, this option would require legislative change, as the Federal Reserve Act authorizes the Board of Governors to impose reserve requirements on transaction accounts, nonpersonal time deposits, and Eurocurrency liabilities only.

Zero reserve requirements, with voluntary reserve obligations. While reserves are not required, depository institutions would be allowed to set their own reserve targets—receiving a higher rate of interest on balances held to meet that target—and penalized for failing to meet them. This framework would be less burdensome on depository institutions than one requiring reserves, such as in the United States, since it would eliminate the need for some administrative reports and the monitoring of deposits for compliance with the six-transaction limit. The central bank’s administrative overhead could decline as well, although it would still need to monitor reserve targets and balances, and maintain corresponding systems. This option approximates the approach that has been employed by the Bank of England (with a corridor operating procedure for monetary policy implementation).

Zero reserve requirements. Regulation D’s definitions of reservable liabilities for reserve requirement purposes in the United States would not be needed, and all costs and administrative burdens associated with reserve requirements would be eliminated.
Some provisions of Regulation D relating to deposit reporting, however, would likely still be necessary, together with any deposit definitions necessary to support the reporting of different kinds of deposits, to continue to permit construction of the monetary aggregates. In cases we reviewed in which central banks implemented monetary policy without reserve requirements, corridor operating frameworks were used to achieve monetary policy objectives. To the extent that reserve requirements place significant burdens on banks and central banks, eliminating reserve requirements and employing a corridor operating approach appear to be feasible alternatives in these select cases. However, the feasibility of this approach in the United States is an open question. (See discussion above on operational and technical issues that could emerge in such an environment.) As a benchmark comparison, table 7 also includes the reserve requirement framework in place in the United States before October 2008 (framework A) in which the six-transaction limit is necessary and interest is not paid on reserves. It is the most administratively burdensome of the reserve requirement regimes illustrated. In contrast, options C, D, and E result in cost and burden reductions for both depository institutions and the central bank. However, the approaches described have important implications for the implementation of monetary policy—which is the only authorized rationale for reserve requirements in the United States under the Federal Reserve Act. As a result, the ability to minimize the cost and administrative burdens associated with reserve requirements is ultimately constrained by the monetary policy consequences of these changes.
We provided a draft of this report for review and comment to the Board of Governors of the Federal Reserve System (Board of Governors), the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the Consumer Financial Protection Bureau (CFPB). The Board of Governors, OCC, and FDIC provided technical comments, which we incorporated into the report as appropriate. We received formal comments from NCUA that are reprinted in appendix V. NCUA recognized the complexities of changing the current reserve requirement framework and the potential impact on implementing monetary policy. CFPB did not provide comments on a draft of this report.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Board of Governors, OCC, FDIC, NCUA, and interested congressional committees and members. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

The objectives of this report were to examine: (1) the purpose of reserve requirements and Regulation D; (2) how depository institutions implement Regulation D's requirements and the effect of the regulation on operations; (3) the effect on customers of the Regulation D transaction limit on certain transfers and withdrawals from savings deposits; and (4) foreign central banks' varying dependence on reserve requirements and the monetary policy implications.
We did not include within our scope an assessment of depository institutions' compliance with Regulation D. For the purposes of this report, we define depository institutions as commercial and savings banks (banks) and credit unions that offer at least one type of deposit product (savings deposits, including savings accounts and money market deposit accounts, or checking transaction accounts). To address our first objective, we reviewed relevant statutes, Regulation D, and agency publications, as well as interviewed officials from the Board of Governors of the Federal Reserve System (Federal Reserve). To address our second objective, we surveyed a generalizable sample of depository institutions in the United States. We identified a population of 12,135 depository institutions that we defined as subject to Regulation D's requirements in 2015; that is, those that offered transaction accounts, savings deposits, or both. (See appendix II for aggregate results of responses to the closed-ended questions on our survey.) To address our third objective, we reviewed consumer complaint data for 2010 to 2015 from the federal financial regulators: the banking regulators—the Federal Reserve, Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation (FDIC), and National Credit Union Administration (NCUA)—and the Bureau of Consumer Financial Protection, also known as the Consumer Financial Protection Bureau (CFPB). We also interviewed agency officials. We supplemented and corroborated these data with responses to our survey of depository institutions. In addition, we reviewed our reports on financial literacy and consumer protection.
For our second and third objectives, we also interviewed agency officials and selected representatives from 10 depository institutions that included banks and credit unions of different sizes, industry associations (the National Association of Federal Credit Unions, Credit Union National Association, American Bankers Association, Consumer Bankers Association, and Independent Community Bankers of America), a consumer advocacy group (U.S. Public Interest Research Group), and a financial services technology firm (CetoLogic). We selected banks and credit unions to obtain variation in size and institution type, and we interviewed industry associations that were nationally representative of depository institutions or consumers or had a national focus. Finally, to examine foreign central banks’ varying dependence on reserve requirements and the monetary policy implications, we reviewed academic literature and Federal Reserve publications on the role of reserve requirements and other tools in conducting monetary policy, recent innovations in the conduct of monetary policy that may change that role, and other developed countries’ approaches to implementing monetary policy. We also interviewed Federal Reserve officials and other experts on monetary policy. We determined that the total deposit account balances and the consumer complaint data used in our analysis were sufficiently reliable for the purposes of our reporting objectives. All dollar values for account balances are nominal (unadjusted for inflation). Our data reliability assessment included reviewing relevant documentation, conducting interviews with knowledgeable officials at the Federal Reserve, FDIC, OCC, NCUA, and CFPB, and conducting electronic testing of the data to identify obvious errors or outliers. To inform our methodology approach and our survey development, we conducted interviews with representatives from five selected depository institutions. 
From these interviews, we gathered information on depository institutions' experiences implementing Regulation D's requirements, including methods used to implement Regulation D's requirements and how the methods affect their operations and customers, if at all. We selected institutions to obtain variation in asset size (small and large) and type of institution (bank, credit union, and online depository institution). We interviewed representatives from a small credit union, a large credit union, a small bank, a large bank, and an online bank. To obtain information on how depository institutions implement Regulation D's requirements and the effect on their operations and customers, we administered a web-based survey to a nationally representative sample of banks and credit unions. In the survey, we asked banks and credit unions about their approach to implementing reserve requirements, their Regulation D monitoring and enforcement methods, and the effect of monitoring and enforcement methods on their operations and customers. Due to insufficient sample sizes, we were not able to report results for all subpopulations. We administered the survey from December 2015 to February 2016. Aggregate responses for all closed-ended questions from the survey are included in appendix II. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question or differences in the sources of information available to respondents can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the results to minimize such nonsampling error (see below).
We identified the population of depository institutions that we defined as subject to Regulation D's requirements using Reports of Condition and Income (call report) data for second quarter 2015 from the Federal Financial Institutions Examination Council (FFIEC), FDIC's Statistics on Depository Institutions (SDI), and NCUA. FFIEC was our primary source for bank and thrift call report data because it includes all call report fields, such as information on the fraction of deposits in accounts intended primarily for individuals or households. FFIEC data are also released more quickly than SDI data and therefore had the most current data at the time of our sample selection (second quarter 2015). We did, however, use bank demographic data from SDI (as of first quarter 2015) because this information was not available in the FFIEC data. NCUA was our sole source of data on credit unions. FFIEC compiles call report data for FDIC-insured institutions that include all of the call report fields. The FFIEC data do not have information about the institutions' charter classification and their regulator. SDI reports data on a subset of call report fields and includes demographic data on institutions. These data include information on the bank holding company, primary regulator, and bank charter classification. There are also data on the primary specialization of the institution and whether it has trust powers, is organized as a Subchapter S corporation, or is a mutual association. To create the initial sample frame of FDIC-insured institutions, we used the call report data from FFIEC and merged onto them the demographic data provided by SDI. When building the dataset, FFIEC's most recent quarter was second quarter 2015 (2015Q2), but SDI's was first quarter 2015 (2015Q1). Therefore, we used the 2015Q2 data from FFIEC and pulled forward the demographic data from SDI. The most recent NCUA data on credit unions were from 2015Q1.
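The FFIEC–SDI combination described above is essentially a left join keyed on an institution identifier. A minimal sketch follows; the field names and the use of FDIC certificate numbers as the join key are hypothetical stand-ins (real call report extracts carry many more fields):

```python
# Hypothetical miniature extracts; field names and certificate numbers are
# illustrative, not actual FFIEC/SDI schema.
ffiec_2015q2 = [
    {"cert": 101, "transaction_deposits": 5.0},
    {"cert": 102, "transaction_deposits": 12.5},
    {"cert": 103, "transaction_deposits": 0.8},
]
sdi_2015q1 = {  # demographic fields pulled forward from the prior quarter
    101: {"charter_class": "N", "primary_regulator": "OCC"},
    102: {"charter_class": "SM", "primary_regulator": "FED"},
}

# Left join: keep every call-report institution, even when demographics
# are missing for it.
frame = [{**row, **sdi_2015q1.get(row["cert"], {})} for row in ffiec_2015q2]
print(len(frame))  # 3 institutions retained
```

A left join (rather than an inner join) preserves the full call-report population as the sample frame even where the older demographic data have gaps.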
We added the NCUA data to the rest of the call report data as completely as was feasible. Finally, we added the Federal Reserve's tranche cut-offs to the data on depository institutions. Our initial population list contained a total of 12,135 depository institutions that we defined as subject to Regulation D's requirements. We stratified the population using two design variables—one for the type of depository institution and the other for the level of the required reserves ratio. The depository institution variable had two levels (bank and credit union), while the required reserves ratio variable had four levels (nonreporter, 0 percent required reserves ratio, 3 percent required reserves ratio, and 10 percent required reserves ratio). This resulted in 8 sampling strata. Our initial sample size allocation was designed to achieve a margin of error no greater than plus or minus 10 percentage points for an attribute level at the 95 percent level of confidence. Before and during the administration of our survey, we identified a total of 52 depository institutions that were either no longer in business or had been acquired by another depository institution. We treated these sample cases as out of scope; this adjusted our final sample size to 892. We obtained an unweighted survey response rate of 71 percent. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Confidence intervals are provided along with each sample estimate in the report.
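The plus-or-minus-10-percentage-point design target above follows from the standard normal-approximation margin of error for an estimated proportion. A minimal sketch, treating each stratum as a simple random sample (the actual GAO design also involves stratification, weighting, and nonresponse adjustments not shown here):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95 percent confidence interval for a
    proportion p estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n)

# The worst case is p = 0.5, so the sample size needed per stratum for a
# +/- 10 percentage point margin at 95 percent confidence is:
n_needed = math.ceil((1.96 ** 2) * 0.25 / (0.10 ** 2))
print(n_needed)                                 # 97
print(margin_of_error(0.5, n_needed) <= 0.10)   # True
```

Because the margin shrinks with the square root of n, halving the margin of error (to about 5 points) would roughly quadruple the required stratum size.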
All survey results presented in the body of this report are generalizable to the estimated population of 11,953 in-scope depository institutions, except where otherwise noted. To inform the development of our survey instrument, we met with individual banks and credit unions. We conducted 11 pretests with 8 banks and credit unions to ensure that survey questions were clear, to obtain any suggestions for clarification, and to determine whether representatives would be able to provide responses to questions with minimal burden. We also interviewed the federal banking regulators—the Federal Reserve, FDIC, OCC, and NCUA—as well as bank and credit union associations to obtain their perspectives on depository institutions' experience with Regulation D. To encourage survey participation, we conducted pre-administration notification and followed up with depository institutions. Before administering the survey, we obtained contact information (phone numbers and e-mail addresses) for the sample of depository institutions from their primary regulators. We then sent notification e-mails to these institutions, and for those whose e-mails were undeliverable, we called representatives to correct the e-mail addresses and confirm the points of contact. During survey administration, we called sampled institutions that had not completed the survey (nonrespondents) to update their contact information, answer any questions or concerns they had about taking the survey, and obtain their commitment to take the survey. We also sent e-mails and letters to nonrespondents with instructions for taking the web-based survey. We conducted this performance audit from February 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From December 2015 through February 2016, we administered a web-based survey to a nationally representative sample of banks and credit unions. The survey included questions on (1) the implementation of reserve requirements; (2) Regulation D monitoring and compliance; (3) actions banks and credit unions take when the transaction limit is reached; and (4) impact on banks' and credit unions' operations and customers. All survey results presented in this appendix are generalizable to the population of depository institutions, except where otherwise noted. We received valid responses from 71 percent of our sample. Because our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals. The questions we asked in our survey of depository institutions are shown below. Our survey comprised closed- and open-ended questions. In this appendix, we include all survey questions and aggregate results of responses to the closed-ended questions; we do not provide information on responses provided to the open-ended questions. For the purposes of the survey, we use nontransaction account(s) to refer to deposits that are not transaction accounts. For a more detailed discussion of our survey methodology, see appendix I. Tables 15-19 present estimates for question 5: If a customer opens an account in person or online, how do you inform him/her of the Regulation D six-transaction limit? Tables 20-25 present estimates for question 6: If a customer opens an account via other means (telephone, email, etc.), how do you inform him/her of the Regulation D six-transaction limit? Tables 28-34 present estimates for question 8A: Which of the following methods do you use to notify customers?
Before the establishment of the Federal Reserve System (Federal Reserve), reserve requirements, imposed by other laws, were used to ensure the liquidity of bank notes (negotiable instruments issued by depository institutions that could be redeemed for gold or silver), which were the primary medium of exchange in the mid- to late-1800s. To facilitate more widespread use of bank notes, the National Bank Act of 1863 allowed depository institutions to organize under a national charter and created a network of institutions to easily circulate their bank notes across the country. In exchange for a charter that promoted widespread use of their notes, nationally chartered institutions were required to hold 25 percent reserves against their notes and customer deposits. The role of reserve requirements continued to change after the Federal Reserve was created in 1913 with the passage of the Federal Reserve Act (see fig. 4). When the Federal Reserve was created, reserve requirements did not have a stated role in influencing the availability and cost of money and credit. In the years before the Federal Reserve Act, a series of bank runs and financial panics made evident the need for a mechanism to accommodate temporary variations in the public's demand for cash. Accordingly, the Federal Reserve Act created a system of Reserve Banks to act as lenders of last resort and thereby provide temporary liquidity relief to the nationwide banking system during times of financial crisis. Among other provisions, the Federal Reserve Act sought to establish more effective supervision of banking, and the Federal Reserve was given the responsibility of supervising state-chartered and nationally-chartered depository institutions that chose to be members of the Federal Reserve. All member institutions were subject to reserve requirements.
Beginning in the 1920s, reserve requirements gradually became important for implementing monetary policy as the Federal Reserve moved toward a more proactive role in influencing credit conditions. As borrowing increased rapidly in the 1920s, the Federal Reserve determined that reserve requirements could be used to constrain the expansion of credit by requiring reserves against deposits used to fund loans. However, this objective was complicated in practice because reserve requirement ratios were established in the Federal Reserve Act rather than by the Board of Governors of the Federal Reserve System (Board of Governors), and because of the reliance on the discount rate as the primary tool for influencing the availability and cost of money and credit at the time. The discount window rate was set below market rates, which gave depository institutions an incentive to borrow from the Reserve Banks to finance operations. This meant reserve requirements placed no significant constraint on lending. In addition, the Federal Reserve did not have the authority at the time to raise reserve requirements to make them a more binding constraint on credit expansion. By 1931, the Federal Reserve had moved from using reserve requirements as a source of liquidity for deposits held at depository institutions to using reserve requirements to proactively affect the cost and availability of money and credit. The Banking Act of 1935 gave the Board of Governors authority to increase reserve requirements, and the Board of Governors doubled the required reserve ratios on demand deposits and time deposits. During World War II, the emphasis on proactive monetary policy was temporarily superseded by a focus on helping fund the war through "pegging" interest rates and purchasing Treasury securities (that is, financing government debt through open market operations). In 1951, the Federal Reserve returned to proactive monetary policy, focusing on financial conditions in short-term money and credit markets.
From this period to 1980, reserve requirements for member institutions were based on geography and were adjusted numerous times. Reserve requirements were adjusted to reinforce or supplement the effects of open market operations and discount policy on credit conditions, as well as in response to financial innovation that created new sources of funding to circumvent reserve requirements on deposits. For example, the Board of Governors imposed marginal reserve requirements—additional requirements on each new increment of deposits—on large time deposits and Eurocurrency liabilities (net balances of depository institutions organized in the United States but with non-U.S. offices and international banking facilities). Reserve requirement computation methods also changed in the 1960s and 1970s. In 1968, the Board of Governors adopted a system of lagged reserve requirements. Under this system, an institution's required reserves were computed based on its deposit levels from the preceding 2 weeks, replacing contemporaneous computation during the reserve maintenance period. Contemporaneous computation provides a real-time link between reserve requirements and M1 (the sum of currency held by the public plus transaction deposits of depository institutions). Four years later, in 1972, the Board of Governors adopted a graduated reserve requirements schedule—varying reserve requirements depending on deposit levels regardless of geographic location. Because reserve requirements were imposed only on the liabilities of member institutions, some state-chartered institutions that had chosen to be members began to leave the Federal Reserve. The graduated reserve requirements were intended to reduce reserve requirements for smaller banks, which were more likely to leave the Federal Reserve, but the change further weakened the link between M1 and aggregate reserve balances.
In response, in 1979 the Federal Reserve adopted a quantity operating procedure (targeting the amount of reserve balances in the banking system overall through open market operations) designed to maintain close, short-run control of M1. According to the Federal Reserve, the ability to control M1 depended on the strength and stability of the link between reserves at member banks and the level of M1 deposits in the entire banking system—a link that was being weakened by the decline in Federal Reserve membership and a complicated system for determining reserve requirements. The Federal Reserve feared that continuing declines in membership would undermine the effectiveness of monetary policy under a quantity operating procedure. In response to declining Federal Reserve membership and to strengthen the Federal Reserve's control over the implementation of monetary policy, Congress passed the Depository Institutions Deregulation and Monetary Control Act of 1980 (Monetary Control Act). The Monetary Control Act amended the Federal Reserve Act to require "all depository institutions," not just member banks, to satisfy reserve requirements, thereby increasing the Federal Reserve's ability to influence money market conditions. It also simplified the graduated reserve requirement schedule. Furthermore, the Monetary Control Act initially set a basic reserve requirement ratio of 3 percent on transaction accounts below a specified level (the "low reserve tranche"), specified a ratio of 12 percent on all transaction accounts over that amount, and provided that the Board of Governors could adjust the latter ratio between 8 percent and 14 percent. The Monetary Control Act also imposed reserve requirements on two other types of liabilities: nonpersonal time deposits and Eurocurrency liabilities. This effectively broadened the reserves base and allowed the Federal Reserve to improve control over the supply of reserves.
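The tiered structure just described (a low ratio up to the low reserve tranche and a higher ratio above it, with a later law adding a zero percent "exemption amount" at the bottom) can be sketched as a simple piecewise computation. The dollar thresholds below are illustrative placeholders, not the statutory amounts, which are adjusted over time:

```python
def required_reserves(net_transaction_accounts: float,
                      tranche: float = 100.0,   # illustrative low reserve tranche ($ millions)
                      exemption: float = 0.0,   # illustrative exemption amount (0% ratio below it)
                      low_rate: float = 0.03,
                      high_rate: float = 0.12) -> float:
    """Graduated reserve requirement: 0 percent up to any exemption amount,
    low_rate from there to the low reserve tranche, high_rate above it."""
    in_tranche = max(min(net_transaction_accounts, tranche) - exemption, 0.0)
    above_tranche = max(net_transaction_accounts - tranche, 0.0)
    return low_rate * in_tranche + high_rate * above_tranche

print(required_reserves(50.0))   # 1.5  (3% of $50M, all within the tranche)
print(required_reserves(150.0))  # 9.0  (3% of $100M plus 12% of $50M above it)
```

Because the schedule is marginal (each ratio applies only to the slice of deposits within its band), crossing the tranche does not reprice an institution's entire transaction account base, only the amount above the threshold.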
The Board of Governors amended Regulation D in 1980 to implement the reserve requirements provisions of the Monetary Control Act. Two years later, Congress passed the Garn-St Germain Depository Institutions Act of 1982 (Garn-St Germain Act), which amended the Federal Reserve Act to provide that transaction accounts at depository institutions below a certain level, known as the "exemption amount," were to be subject to a reserve requirement ratio of zero percent. This amendment provided for the exemption amount to be adjusted each year by a statutory formula that takes into account the percentage increase or decrease in all reservable liabilities at all depository institutions over the previous year. Beginning roughly in 1984, the Federal Reserve shifted from targeting M1 toward targeting the cost of reserves (the federal funds rate) and then adjusting the supply of balances in the banking system relative to the demand for balances to achieve the target federal funds rate. That same year, the Board of Governors moved from a system of lagged reserve requirements (reserve requirements computed based on institutions' deposit levels from the preceding 2 weeks) to a system of contemporaneous reserve requirements (reserve requirements computed during the reserves maintenance period). Although the focus was no longer on M1, this change tightened the link between reserves and M1, allowing the Federal Reserve to achieve a stable, predictable demand for reserves and estimate the supply of reserves needed to achieve the target federal funds rate. In 1998, the Board of Governors returned to a system of lagged reserve requirements, which is still in place. The Federal Reserve Act as originally enacted neither explicitly authorized the payment of interest on reserve balances maintained at Reserve Banks nor specifically prohibited the payment of interest.
However, Congress passed the Financial Services Regulatory Relief Act of 2006, which amended the Federal Reserve Act to provide specific authorization for balances at Reserve Banks to receive earnings (interest on reserves). This effectively broadened the set of monetary policy tools available to the Federal Reserve. Central banks have generally transitioned away from reserve operating procedures in which reserve requirements are critical to achieving monetary policy objectives. Although not necessarily generalizable to the United States, the experiences of central banks in several countries suggest that it is possible to achieve monetary policy objectives in an environment of zero reserve requirements—or when these requirements are nonbinding. The common element in each of these cases is the use of interest rate operating procedures, where monetary policy tools are used to contain policy targets within a specific range. The primary instruments that enable these operating procedures are mechanisms to influence the demand for central bank balances, such as the payment of interest on reserves and lending facilities similar to the discount window. Approaches that use these tools to establish upper and lower limits on a policy rate (e.g., the federal funds rate) are referred to as "channel" or "corridor" operating frameworks. In general, the targeted interest rate is bounded within a "corridor" by a rate of interest on central bank lending at the top and a rate of interest on deposits at the central bank at the bottom, to limit fluctuation in the targeted rate. The payment of interest on reserves is critical to successful implementation of monetary policy in an operating framework where required reserves are set to zero or are nonexistent. Useful lessons about the implementation of monetary policy in a zero reserve requirement environment come from the experiences of countries that have elected to eliminate or significantly reduce reserve requirements.
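The corridor mechanism described above can be illustrated with a minimal sketch: arbitrage keeps the market overnight rate from settling outside the band formed by the central bank's deposit (floor) and lending (ceiling) rates. The specific rates below are hypothetical:

```python
def corridor_rate(market_rate: float, deposit_rate: float, lending_rate: float) -> float:
    """Arbitrage bound on an overnight rate: below the deposit (floor) rate,
    banks prefer to leave balances at the central bank rather than lend in
    the market; above the lending (ceiling) rate, they prefer to borrow from
    the central bank. The market rate is therefore clamped to the corridor."""
    return min(max(market_rate, deposit_rate), lending_rate)

# Hypothetical symmetric corridor: floor 1.75%, ceiling 2.25%, target 2.00%.
print(corridor_rate(2.40, 1.75, 2.25))  # 2.25 (capped by the lending facility)
print(corridor_rate(1.50, 1.75, 2.25))  # 1.75 (supported by the deposit facility)
print(corridor_rate(2.00, 1.75, 2.25))  # 2.0  (inside the corridor, unconstrained)
```

A narrower corridor limits rate volatility more tightly but gives the standing facilities a larger role relative to open market operations, which is one of the design tradeoffs the report notes across central banks.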
Many of these approaches have been employed for a number of years. For example, Canada phased out reserve requirements by 1994, while the Bank of England did so by 1981. While there are consequences of reducing or eliminating reserve requirements for the implementation of monetary policy, including potential short-term interest rate volatility and the costs associated with other frameworks, the corridor operating frameworks in these and other countries also provide examples of dealing with those consequences without reliance on the required maintenance of reserves. However, the ability to extend the experiences of other countries to the United States is unclear given a number of differences, including the size of the banking systems and the types of institutions operating in the financial system. For example, the Canadian financial system consists of roughly 80 banks according to the IMF, compared to several thousand banks in the United States. Corridor operating systems without reserve requirements can vary in their key features and structural elements, which reflect each nation's unique institutional and financial market structure as well as key decisions about tradeoffs and preferences. The institutional details of the corridor system also can vary over time at a given central bank. The global financial crisis resulted in a number of nonconventional monetary policy measures by central banks and deviations from the operational frameworks in place before the crisis. Therefore, while not necessarily reflective of the exact operational procedures currently in place at the given central bank, some examples include the following: Bank of Canada and Reserve Bank of Australia. Before the global financial crisis, these central banks implemented monetary policy using a simple (or "symmetric") corridor framework in which the target interest rate is bounded by central bank deposit and lending facilities—and the aim is to keep short-term rates in the center of the corridor.
In the absence of reserve requirements, central bank balances are largely composed of payment and settlement balances and balances held for precautionary purposes. Payment and settlement balances, which are generally less predictable and stable than required reserves, are also not as sensitive to short-term interest rates. As a result, to help keep short-term rates from fluctuating undesirably, the Bank of Canada and the Reserve Bank of Australia operated relatively narrow corridors and made frequent use of open market operations to manage the overnight rate, among other things. In 2009, the Bank of Canada began temporarily operating what could be characterized as an "asymmetric" corridor or "floor system" (see Reserve Bank of New Zealand below). Bank of England. Before 2009, the United Kingdom implemented a monetary policy approach based on a system of voluntary reserves. Under that system, banks had to establish a reserve target and maintain it to earn interest, and they were penalized for holding an amount outside the target range. Banks could ensure the target was reached by using the Bank of England's standing lending and borrowing facilities. These standing facilities formed a ceiling and floor (interest rate corridor) around the policy rate. Open market operations were conducted several times during the day to meet the demand for central bank balances and supply appropriate levels of reserves to meet banks' target levels of reserves. The Bank of England suspended voluntary reserves targets in March 2009 and moved to an asymmetric corridor system in which the targeted rate is close to the Bank of England lending rate. Reserve Bank of New Zealand. In 1999, the Reserve Bank of New Zealand began implementing monetary policy through a symmetric corridor system, relying on standing lending and deposit facilities and open market operations. As in the systems described above, payment and clearance balances factor heavily in the conduct of monetary policy.
In response to concerns about the effectiveness of the corridor framework, the central bank began substantially increasing the volume of central bank balances in the system in 2006. This approach of providing abundant balances (reserves) serves to reduce interest rate volatility within the corridor. It is important to note that there are very few financial institutions operating in New Zealand and the majority of these are foreign owned. The Federal Reserve currently operates a monetary policy approach that is similar to the type of framework it could employ if it were to significantly reduce or eliminate reserve requirements. However, the Federal Reserve revised its operational framework for monetary policy implementation to enhance its control over the federal funds rate given the amount of excess reserves in the system. The high volume of reserves is a consequence of actions taken by the Federal Reserve to address the recent global financial crisis. As figure 7 shows, balances maintained to satisfy reserve requirements are so low relative to the total balances in the banking system that reserve requirements are considered “nonbinding” on the behavior of depository institutions. As of March 2016, balances maintained to satisfy reserve requirements totaled $152 billion while total reserve balances totaled $2.52 trillion—indicating that excess balances are more than $2 trillion. Under these conditions, reserve requirements do not play the typical facilitating role in the implementation of monetary policy in the United States. More importantly, because of the size of its balance sheet, the Federal Reserve had difficulty influencing the federal funds rate through the typical shifts in the supply of reserves, which warranted a revision in its operational framework for implementing monetary policy. 
To retain flexibility in its treatment of assets purchased in response to the financial crisis, since 2008, the Federal Reserve has been influencing the federal funds rate using tools that include the payment of interest on reserves (which also would be necessary to implement monetary policy in an operating framework with reserve requirements set to zero). The Federal Reserve’s current approach is anchored by the rate of interest it pays on reserves and the interest rate on the overnight reverse repurchase agreement (RRP) facility that is also accessible to nondepository financial institutions. A key feature of this operating procedure is that it allows the Federal Reserve to significantly increase the supply of reserves while keeping the short-term interest rate close to its target. While the Federal Reserve uses key elements of the corridor operating approaches used by foreign central banks, it has not formally adopted a corridor system. Nevertheless, the U.S. and foreign experiences illustrate that monetary policy can be implemented in an environment in which reserve requirements are not binding (due to low or zero reserve requirements or abundant excess reserves). However, the operational framework in the United States has not been tested in an environment of scarce reserves. In such an environment, a number of technical, operational, and practical issues would need to be addressed. GAO offers no policy conclusions on the appropriate approach for the United States, and this presentation should not be interpreted as a judgment on how monetary policy should be conducted.

In addition to the contact listed above, Karen Tremba (Assistant Director), Vida Awumey (Analyst-in-Charge), and Abigail Brown made major contributions to this report. Also contributing to this report were Carl Barden, Bethany Benitez, Rudy Chatlos, Andrew Furillo, Farrah Graham, John Karikari, Jill Lacey, Kristeen McLain, Roberto Pinero, Barbara Roesmann, and Jena Sinkfield.
Section 19 of the Federal Reserve Act requires depository institutions to maintain reserves against a portion of their transaction accounts solely for the implementation of monetary policy. Regulation D implements section 19, and it also requires institutions to limit certain kinds of transfers and withdrawals from savings deposits to not more than six per month or statement cycle if they wish to avoid having to maintain reserves against these accounts. The transaction limit allows the Federal Reserve to distinguish between transaction accounts and savings deposits for reserves purposes. GAO was asked to review certain effects of Regulation D. This report's objectives include examining depository institutions' implementation of Regulation D's requirements, the effect of the transaction limit on their customers, and central banks' varying dependence on reserve requirements and the monetary policy implications. To examine these issues, GAO conducted a generalizable survey of 892 depository institutions (with a response rate of 71 percent); analyzed consumer complaint data from federal financial regulators; reviewed federal statutes and regulations, Federal Reserve System publications, and academic literature; and interviewed regulatory agency officials, representatives from banking and credit union associations, and depository institutions selected based on institution type and size. The Federal Reserve and other federal banking regulators provided technical comments on a draft of this report, which we incorporated as appropriate. The methods by which depository institutions can implement Regulation D (Reserve Requirements of Depository Institutions) include maintaining reserves against transaction accounts and enforcing a numeric transfer and withdrawal (transaction) limit for savings deposits if they wish to avoid classifying those accounts as reservable transaction accounts. 
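The six-per-month transaction limit described above can be illustrated with a small sketch (the reclassification policy shown is a hypothetical simplification; Regulation D and institutions' actual procedures involve warnings, grace periods, and exceptions not modeled here):

```python
# Regulation D's limit on certain "convenient" transfers and withdrawals
# from a savings deposit, per month or statement cycle.
SAVINGS_TRANSFER_LIMIT = 6

def must_reclassify_as_transaction_account(monthly_transfer_counts):
    """Return True if a savings deposit repeatedly exceeds the limit.

    Illustrative only: the assumed policy is that an institution tolerates
    a single violating month but reclassifies (or closes) the account --
    making it reservable as a transaction account -- after more than one.
    """
    violations = sum(
        1 for n in monthly_transfer_counts if n > SAVINGS_TRANSFER_LIMIT
    )
    return violations > 1

# An account with 4, 7, and 8 limited transfers in consecutive months
# has two violating months under the assumed policy:
print(must_reclassify_as_transaction_account([4, 7, 8]))  # True
```

The helper name and the "more than one violating month" threshold are assumptions for illustration, not terms defined by Regulation D.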
GAO estimates that 70–78 percent of depository institutions limit savings deposit transactions. Other methods include automatically transferring balances from transaction (e.g., checking) accounts to savings deposits in order to reduce reserve requirements. Institutions may choose to maintain transaction account reserves against savings deposits to eliminate the need to enforce the transaction limit. But some institutions GAO surveyed indicated that they had operational burdens associated with monitoring and enforcing the transaction limit (for example, 63–73 percent cited challenges, such as creating forms and converting and closing accounts). Available data indicate that few customers exceeded or expressed concerns about the limit. Monetary policy—actions taken to influence the availability and cost of money and credit (i.e., interest rates)—can be conducted with varying dependence on reserve requirements. While many central banks around the world use reserve requirements, some have reduced their reliance on them due, in part, to the associated cost and administrative burdens. GAO reviewed how different central banks rely on reserve requirements and found a wide range of frameworks, including those with: (1) different mandatory reserve requirements (as compared to the United States), (2) voluntary reserve requirements, and (3) no reserve requirements at all. For example, countries with different mandatory reserve frameworks require maintaining reserves against all deposits, which eliminates the need to impose limits on transfers and withdrawals from specific accounts. While the Board of Governors of the Federal Reserve System (Federal Reserve) has used reserve requirements to help achieve the interest rate targets it sets in the market for reserves (federal funds market), central banks of other developed countries such as Canada, Australia, Sweden, and Denmark, among others, do not rely on reserve requirements.
Instead, they use interest rates under their direct control to keep interest rates from moving outside a targeted range (corridor operating approach). The authority for the Federal Reserve to pay interest on reserves has reduced some of the costs associated with reserve requirements in the United States. One of the alternatives to the current reserve requirement framework that GAO examined would require legislative change to further reduce some of these costs and burdens. Other approaches, while proven feasible for some foreign central banks, have implications for the conduct of monetary policy (e.g., require the pursuit of a corridor operating approach). Given the differences in financial systems across the globe, it is unclear whether the practices used by other nations would translate to the United States. Moreover, lowering or eliminating reserve requirements would raise a number of operational and technical issues for monetary policy implementation. For example, lowering or eliminating reserve requirements could introduce the need to manage potential volatility in short-term interest rates. Therefore, minimizing the burdens associated with reserve requirements would have to be weighed against the costs and monetary policy implications of any alternative framework when considering changes.
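The corridor operating approach discussed throughout this section works by bracketing the market rate between two administered rates: no bank will lend overnight below what the central bank's deposit facility pays, and none will borrow above what its lending facility charges. A minimal sketch of that clamping logic (the rates are illustrative and do not represent any particular central bank):

```python
def corridor_rate(market_rate, deposit_rate, lending_rate):
    """Effective overnight rate under a corridor operating approach.

    The deposit facility forms the floor (a bank will not lend to another
    bank below what it can earn at the central bank), and the lending
    facility forms the ceiling (a bank will not borrow from another bank
    above what the central bank charges).
    """
    return min(max(market_rate, deposit_rate), lending_rate)

# Symmetric corridor: target of 2.00 percent, facilities at +/- 25 basis points.
deposit, lending = 1.75, 2.25
print(corridor_rate(2.60, deposit, lending))  # 2.25 -> capped at the ceiling
print(corridor_rate(1.40, deposit, lending))  # 1.75 -> held at the floor
print(corridor_rate(2.05, deposit, lending))  # 2.05 -> trades inside the band
```

In an "asymmetric" corridor or floor system of the kind described for the Bank of Canada and Reserve Bank of New Zealand, abundant reserves push the market rate down toward the floor, so the deposit rate itself becomes the effective policy rate.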
Enacted on January 23, 1995, the CAA, as amended, applies 12 federal civil rights, workplace, and labor laws to legislative branch employees who were previously exempted from such coverage. By passing the CAA, the Congress extended to approximately 30,000 employees of the legislative branch certain fair employment and occupational safety safeguards. The CAA applies to current employees, applicants for employment, and former employees of the following organizations: Congressional Budget Office, Office of the Attending Physician, Office of the Architect of the Capitol, and Office of Compliance. The CAA did not include GAO, the Library of Congress (LOC), and the Government Printing Office (GPO) in many of its provisions because the employees at these organizations already enjoyed the protections of many of the civil rights laws extended to legislative branch staff by the CAA prior to its enactment. For example, GAO, LOC, and GPO employees were already protected against discrimination based on race, color, religion, sex, and national origin (42 U.S.C. § 2000e-16); discrimination based on age (29 U.S.C. § 633a); and discrimination based on disability (42 U.S.C. § 12209). In addition, GAO, LOC, and GPO employees already enjoyed the protections provided by the Fair Labor Standards Act (29 U.S.C. § 203) and by the Federal Service Labor-Management Relations Act (5 U.S.C. § 7103 for GPO and LOC employees; 31 U.S.C. § 732(e) for GAO employees). Furthermore, all three organizations have individualized processes for resolving employee disputes. For example, GAO uses an independent entity, the Personnel Appeals Board, to adjudicate employment disputes involving GAO employees.
The CAA does extend the protections of the Employee Polygraph Protection Act, the Worker Adjustment and Retraining Notification Act, the Uniformed Services Employment and Reemployment Rights Act, the Family and Medical Leave Act, the public access provisions of the Americans with Disabilities Act (ADA), and the Occupational Safety and Health Act to GAO and LOC employees. OOC’s duties are divided among a Board of Directors, an Executive Director, and a General Counsel, as shown in figure 1. The five-member Board of Directors has the duty of administering appeals for the CAA’s dispute resolution process. Employees or employers covered by the CAA who are dissatisfied with the final decision resulting from a dispute resolution process hearing may request that the Board review the decision. From 1996 through 2003, the Board has heard 20 appeal cases. The Board is also responsible for appeals of decisions by hearing officers with respect to complaints filed by the General Counsel regarding occupational safety and health issues, disability access concerns, and labor-management relations violations. The CAA also assigns the Board the duties of developing and issuing regulations to implement the rights and protections of employees for 9 of the 12 laws included in the CAA. The Board has issued regulations, which were approved by the Congress, for the Family and Medical Leave Act, the federal labor-management relations provisions found in chapter 71 of title 5, U.S. Code, the Fair Labor Standards Act, and the Worker Adjustment and Retraining Notification Act. The CAA also provides that OOC may apply existing regulations promulgated by executive branch agencies for regulations not issued by the Board, except for regulations regarding labor-management relations. Before the Board’s adopted regulations can become effective, they must first be placed in the Congressional Record for a comment period and must subsequently be approved by the Congress.
The Board has delegated much of the work to complete these duties to OOC’s Executive Director. The Executive Director has overall responsibility for managing OOC’s education and dispute resolution processes as well as directing OOC’s staffing and budgeting functions. Reporting directly to the Executive Director are two deputies, to whom the Executive Director has delegated specific functional roles in addition to those identified in the CAA: the Deputy Executive Director for the House is responsible for managing OOC’s education and information distribution functions, and the Deputy Executive Director for the Senate is responsible for administering OOC’s dispute resolution process. The OOC General Counsel’s duties include investigation and enforcement of the Occupational Safety and Health Act and ADA requirements and managing labor-management relations unfair labor practice case processing and court litigation. Assisting the General Counsel are an attorney and an inspector detailed from the Department of Labor to investigate and enforce occupational safety and health standards with the assistance of part-time contractors on a limited basis. In summary, OOC’s organizational structure has a leadership hierarchy with different top leadership functions shared among the Board of Directors, Executive Director, and General Counsel. This organizational structure of shared functions is largely due to statutory requirements that OOC carry out a variety of different roles—including adjudication, education, and enforcement—as it applies the 12 workplace laws covered by the CAA. In order to provide for a degree of needed independence between these different functions, the CAA established an organizational structure that, among other things, gave the Board the responsibility for hearing appeals of the dispute resolution process for cases that are initially within the province of the Executive Director or General Counsel.
While the CAA gives the Board of Directors the authority to appoint and remove OOC’s four senior executives—the Executive Director, General Counsel, and two Deputy Executive Directors—the Board does not play an active role in the daily operational management of OOC. Instead, OOC’s part-time Board focuses on its adjudicatory and policy functions including hearing appeals and issuing regulations. Although the CAA designates OOC’s Executive Director as the organization’s chief operating officer, the law provides the General Counsel with independent authority to investigate and enforce matters concerning occupational safety and health, disability access, and labor-management relations. In practice, this has resulted in a division of OOC by function, with the Executive Director responsible for hiring and managing staff to carry out the education and dispute resolution process functions and the General Counsel responsible for hiring attorneys and managing staff in his operational areas. OOC is staffed by 15 employees, including 4 in statutorily appointed positions. As figure 2 shows, OOC’s annual expenditures have ranged from a high of $2.15 million in fiscal year 1997 to a low of $1.80 million in fiscal year 2001. Fiscal year 2003 expenditures were $2.02 million. In general, these expenditures are allocated between the duties performed by the Executive Director, General Counsel, and Board of Directors. Over the past 7 fiscal years, functions that are the responsibility of the Executive Director have accounted for most of OOC’s expenditures. Since August 2003, the OOC Board, its senior leadership team, and OOC’s employees have been exerting a concerted effort—consistent with our suggestions—to more fully define the fundamental results, or outcomes, that OOC seeks to achieve. 
OOC’s operations and reporting have traditionally been activity or output focused (e.g., number of requests for mediation received, time within phases of the process, and number of occupational safety and health inspections conducted). Such information is important to managing OOC and to ensuring that its scarce resources are efficiently targeted. However, the current Board and OOC leadership have undertaken OOC’s first strategic planning initiative in recognition that despite the real value from output information, such data do not address the more fundamental question of the effectiveness of OOC’s efforts. That is, OOC’s current planning effort is intended to help OOC and its congressional and other stakeholders ensure that OOC’s activities and outputs are optimizing the Office’s contribution to results, such as a safer and healthier workplace and one free from discrimination and other forms of conflict. Our discussions with OOC stakeholders across the Congress and legislative branch agencies confirmed the need for and importance of the current planning effort. Effective management control requires that an organization establish its organizational objectives in the form of a set of defined mission, goals, and objectives. Furthermore, we found that leading organizations consistently strive to ensure that their day-to-day activities support their organizational missions and move them closer to accomplishing their strategic goals. Thus, OOC is not alone among organizations in seeking to answer critical questions about its overall effectiveness. Our assessments over the last decade of executive agencies’ implementation of the Government Performance and Results Act (GPRA) have consistently found that executive agencies have struggled with shifting the focus of their management and accountability from outputs to results. 
At OOC’s invitation, we have met with OOC’s leadership to share our wealth of information and perspective on executive agencies’ efforts under GPRA, as well as our own experiences in strategic planning, performance planning, and accountability reporting at GAO. While maintaining our respective institutional independence, we are prepared to offer OOC continuing support as its planning efforts proceed. OOC’s planning initiative is important to ensuring that OOC’s programs, activities, and limited resources are contributing to results that are making improvements in the work and work environments of legislative branch employees. OOC’s efforts to achieve this goal are complicated by the inevitable tension that arises between organizations charged with the duty to implement and enforce regulations and the agencies subject to those regulations. Under the current draft of its strategic plan, OOC defines its mission as working to “advance safety, health, and workplace rights for employees and employers of the Legislative Branch as mandated by the Congressional Accountability Act.” OOC is developing strategic objectives (goals) in three areas: effectively enforce and administer the CAA; educate, collaborate, and facilitate the regulated community; and maintain an efficient and accountable workplace (within OOC). More specifically: Effectively enforce and administer the CAA: Regulatory enforcement and administration focuses on operation of dispute resolution procedures and investigation and prosecution of alleged violations. Educate, collaborate, and facilitate for our regulated community: The Office will encourage and facilitate positive change in the employment cultures within the regulated community to stimulate compliance with the entire CAA; and effectively communicate with the Congress regarding the status quo and potential enhancement of the CAA. 
Maintain an efficient and accountable workplace: Efficiency involves not only careful and wise use of appropriated funds, but also continued utilization of resources in a way that allows for timely and expeditious completion of office activities and functions in order to better serve our regulated community. OOC’s effort to develop a results-oriented strategic plan is an important and positive development that is still very much a work in progress—as the Board and OOC’s senior leadership clearly appreciate. Perhaps most important is OOC’s recognition that its planning effort provides a vehicle for engaging and consulting with key congressional and other stakeholders on the fundamental purposes of OOC (strategic goals), how those purposes will be achieved (programs and strategies), how progress will be assessed (performance measures), and what progress is being made and improvement opportunities exist (accountability reporting). More specifically, as OOC’s draft plan and our discussions with the Board and OOC’s leadership have confirmed, OOC is committed to an approach that meets its statutory responsibility by adopting a more cooperative and collegial approach with legislative branch offices and agencies, while at the same time maintaining its enforcement capability and safeguarding its institutional independence. The planning effort underway provides the opportunity to reach agreement with key congressional and other stakeholders on the direction—and potential limits—of this new commitment. OOC has held a series of discussions with selected congressional stakeholders and plans additional outreach with them and other stakeholders as the planning effort moves forward. In fact, as we have found by looking at leading results-oriented organizations, the production of the actual strategic planning document is one of the least important parts of the planning process. 
Leading results-oriented organizations understand that strategic planning is not a static or occasional event, but rather a dynamic and inclusive process. By working with and actively engaging key congressional and other stakeholders in its planning effort, OOC can better justify to stakeholders its current allocation of budget and staff resources and its program efforts; then, as appropriate, OOC can build a business case for additional resources and new initiatives that OOC leadership may believe are necessary for an agreed-upon mission and set of strategic goals. In short, if done well, strategic planning is continuous and provides the basis for everything that the organization does each day. Moving forward, OOC plans to align key programs and strategies with each of these objectives. In that regard, OOC managers have begun drafting several work plans intended to link OOC’s programs and activities to the strategic objectives contained in the draft plan. Similar to the strategic plan, these work plans are still in draft and therefore do not yet provide a clear linkage between OOC’s strategic objectives and the day-to-day operations of these functions. OOC’s strategic planning also needs to include the development of results-oriented performance measures. OOC is committed to this effort and has “place-markers” in its draft plan for these measures. OOC could benefit from considering the experiences of leading organizations in results-oriented performance measurement. Results-oriented organizations we have studied, which were successful in measuring their performance, developed measures that were tied to program goals and demonstrated the degree to which the desired results were achieved; limited to the vital few that were considered essential to producing data; responsive to multiple priorities; and responsibility-linked to establish accountability for results.
Similar to decisions about strategic goals, determining the appropriate set of performance measures should also be based on input from key stakeholders to determine what is important to them in gauging OOC’s progress and assessing its performance. Put most directly, agreed-upon performance measures are the key to providing the Congress with the data it needs to answer a key question that the current fiscal environment is demanding of all agencies across the federal government: “What are we getting for our investment in this agency, and is it worth it?” OOC officials said that they are committed to making better use of IT in the future, and to ensuring that doing so is accomplished in a prudent and systematic fashion. For example, OOC’s draft strategic plan cites “fully leveraging IT to complement and expand office activities” as one strategy under its “maintain an efficient and accountable workplace” goal. OOC has begun to take some action, but much remains to be accomplished. For example, it established an Information Technology Task Force in May 2003, and consistent with our suggestions to OOC leadership, the task force has been charged with developing parallel IT strategies: one addressing near-term, stay-in-business IT needs and the other addressing long-term IT modernization needs. Thus far, the task force has met numerous times and has been guided in this initiative by a private consultant. It has also, for example, reviewed the current IT environment and has surveyed OOC staff about IT needs and preferences. With respect to near-term needs, OOC is taking steps to address immediate shortfalls in its ability to produce the information it needs to manage current operations and workloads. For example, OOC is investing a few weeks of staff time to create a new Access database to provide a temporary solution to meet certain case-tracking information needs.
In our view, such relatively small, low-risk investments that provide immediate mission value are appropriate near-term steps. However, before pursuing strategic, modernized system solutions, it is important that OOC first position itself for successfully doing so by establishing certain basic IT management capabilities. These capabilities include, among other things: developing a picture or description, based on OOC’s strategic plan, of what it wants its future IT environment to look like; putting in place and following defined and disciplined processes for allocating limited resources across competing IT investment options; employing explicit and rigorous IT system acquisition management processes; and ensuring that IT human capital knowledge and skill needs and shortfalls are identified and systematically addressed. According to OOC officials, each of these areas will be addressed. If they are not, the risk of being unable to effectively leverage technology in achieving strategic mission goals and outcomes will be increased. OOC’s current use of IT is limited. For example, OOC’s automated administrative dispute resolution case tracking system does not have the capability to notify system users when a case closes or should be closed. OOC’s system manager said they must periodically review some cases manually and update the system for closed cases. OOC officials told us that they had experienced some data quality problems during early implementation stages of the dispute resolution case tracking system, but had tested the system in March 2003 to determine if corrective actions were effective. According to OOC officials, the data accuracy test demonstrated that the information was now reliable, although they said the test was performed informally and they had no documentation on the methodology and the test results.
We performed our own independent test of data quality for this system as part of this review and found that the data were sufficiently reliable for the purposes of this report. (See app. I for additional information on our reliability and validity reviews of OOC’s database.) OOC also recognizes the need to make better use of IT when enforcing the Occupational Safety and Health Act-related provisions of the CAA. For example, OOC’s Office of the General Counsel is considering the purchase of specialized IT software that would centralize and automate a variety of tasks concerning the occupational safety and health-related cases it handles, including assessing risk, monitoring case status, and tracking agency abatement efforts. In November 2003, after meeting with us to discuss best practices in IT acquisition and planning, OOC’s General Counsel established a group to develop specific selection criteria to assess potential IT case management software. As part of this process, the group and an outside IT consultant gathered information from both legislative and executive branch agencies including the Architect of the Capitol (AOC), LOC, and the Occupational Safety and Health Administration (OSHA) concerning their practices and experiences with similar IT applications. OOC expects to complete this evaluation process by the end of March 2004. With regard to the accounting and budgeting system used by OOC, a 2003 audit of LOC’s financial statements by an independent accounting firm found that the system was reliable. LOC administers the accounting and budgeting system and the accounting firm’s findings were addressed to LOC. Although the auditors reported the system was, overall, reliable, they also reported that there were two IT-related deficiencies that could adversely affect the user’s ability to meet its financial management objectives.
The deficiencies were that (1) security practices over IT systems need to be improved and (2) LOC needs to establish a comprehensive disaster recovery program to maintain service continuity, minimize the risk of unplanned interruptions, and recover critical operations should interruptions occur. The audit recommended that the LOC address these deficiencies as a high priority. LOC officials acknowledged the need to address these deficiencies and have taken some preliminary actions including drafting an officewide policy on IT security practices and acquiring an off-site facility for their disaster recovery program. As required by section 301(h) of the CAA, OOC issues an annual report that contains “statistics on the use of the Office by covered employees, including the number and type of contacts made with the Office, on the reason for such contacts, on the number of covered employees who initiated proceedings with the Office under this Act and the result of such proceedings, and on the number of covered employees who filed a complaint, the basis for the complaint, and the action taken on the complaint.” Based on our reviews of the reports issued thus far, OOC is meeting this annual report requirement. However, the information is almost entirely output based, providing little sense of OOC’s broader impact. Most of OOC’s congressional stakeholders with whom we spoke were not familiar with OOC’s annual reports and those congressional stakeholders who had seen the report said that it was difficult to understand and could be more user-friendly. For example, one congressional stakeholder said that it was difficult to make decisions about OOC using the information contained in the annual report. As an outgrowth of its strategic planning effort to identify, measure, and manage toward results, OOC can enhance its annual report and incorporate elements that would make it a more useful and relied-upon accountability report. 
New, results-oriented information, showing the extent to which goals were met and suggesting improvement opportunities—including those that may suggest the need for congressional concurrence or actions—could be reported along with the activity and workload statistics required in section 301(h) of the CAA. Building on the strategic planning efforts underway, we recommend that the Board of Directors, Executive Director, and General Counsel of OOC ensure that the planning effort: Is developed with extensive collaboration and input from key congressional and agency stakeholders to ensure that there is a reasonable and appropriate degree of agreement concerning OOC’s overall direction and that its programs are effectively coordinated with other efforts. To be most effective, this stakeholder and agency input should be part of an ongoing dialogue to ensure goals, objectives, and strategies are adjusted as warranted. Includes performance measures that are linked to the strategic plan and resulting annual work plans. Becomes the basis for OOC’s budget and staff requests and developing and implementing program efforts and for assessing the contributions of those efforts to results. Makes information technology planning and implementation an integral component of the process. Is used as a basis for an augmented and more results-oriented annual report that provides data on the degree to which key goals are being achieved, in addition to meeting important statutory reporting requirements. Our work looking at leading organizations has often found that as organizations shift their orientation from outputs and activities to the results that those outputs and activities are intended to achieve, new, different, and more effective ways of doing business will emerge. OOC is in the midst of just such a shift in its orientation as was discussed earlier in this report. 
This shift entails putting in place a program structure at OOC that meets its statutory responsibilities, contributes to improvement in the working environment and workplaces of legislative branch employees, and safeguards OOC’s independence. The CAA contains a series of specific requirements for OOC to meet as it carries out its responsibility to administer and enforce the CAA. Towards this end, OOC has taken a number of actions including establishing and administering a dispute resolution process for employees who allege violations of civil rights, labor, and employment laws covered by the CAA; conducting investigations and periodic inspections of legislative branch facilities to ensure compliance with safety, health, and disability access standards; adopting substantive regulations, many of which have been approved by the Congress, to apply covered laws to the legislative branch; educating both employees and employing offices about their rights and responsibilities under the law; and regularly reporting to the Congress on its activities in a variety of required reports and studies. Dispute resolution is the largest of the functions performed by OOC, available to any legislative branch employee in an agency covered under the CAA’s dispute resolution provisions who alleges violations of certain sections of the CAA. OOC’s dispute resolution process consists of a series of statutorily prescribed steps beginning with counseling and mediation. If these actions fail to resolve the dispute, the employee may choose to either file a formal complaint with OOC and proceed with an administrative hearing before an independent hearing officer, or file suit in federal court. Both employees and covered legislative agencies that are dissatisfied with the decisions of their administrative hearings may appeal to OOC’s Board of Directors. 
OOC told us that the large jumps in the number of cases in the dispute resolution system in 1999, 2000, and 2001 were the result of two single-issue large group requests involving employees from AOC and the United States Capitol Police (USCP) as they worked their way through the process. For example, 274 (or almost 70 percent) of the 395 requests for counseling reported in 2001 pertained to a single USCP large group case. Similarly, 272 of the cases that were closed that same year and were reported as resulting in civil actions being filed with the district court were also associated with this same single-issue group request. In fact, OOC cites the absence of such group cases in its annual report to the Congress on the use of the Office when describing the significant decrease in total requests for counseling received in 2002. These cases illustrate that OOC’s approach to reporting case activity does not provide a complete picture of OOC’s workload since it reports each individual included as part of the large group as an individual request regardless of whether the individual personally requested or even participated in services such as counseling or mediation. OOC could not provide us with a specific number of the individuals included in these group requests who actually received counseling or mediation services in 1999 or 2001. According to OOC, dispute resolution data are presented in this manner because the CAA specifically requires the Office to report on the total number of individuals requesting counseling or mediation. While this information is needed to meet existing statutory reporting requirements, other data, such as the actual number of counseling and mediation sessions held, could provide a helpful complement to the data which OOC currently reports, as well as a more useful indicator of its actual workload.
In addition to rethinking how it might supplement the way it currently reports on single-issue large group requests, OOC should consider other potential improvements in how it measures its activities and workload. Such improvements can play an important role in helping organizations both obtain a fuller understanding of the process and implications of their activities and provide stakeholders with more transparent and complete information. OOC is exploring options in this area and we have offered our assistance in this regard. One possibility would be for OOC to benchmark its data against that reported by other federal agencies, when comparable measures exist, or against OOC’s own past performance when such measures do not. For example, following the practice of the U.S. Equal Employment Opportunity Commission (EEOC), OOC might consider using such benchmarks to set both long- and short-term performance targets. The CAA assigns OOC’s General Counsel a number of independent investigative and enforcement functions related to ensuring occupational health and safety, disability access, and labor-management relations. Occupational safety and health. Enforcement of occupational safety and health standards accounts for the majority of the time spent by the General Counsel and his staff, and roughly 30 percent of OOC’s overall workload. The CAA requires the General Counsel to perform this work in two basic ways: (1) by conducting inspections and investigations of potential hazards in response to requests by any covered employee or employing office and (2) by performing periodic inspections of the facilities of all entities covered by the act. Any covered legislative branch employee can file a complaint requesting that the General Counsel inspect and investigate a possible health and safety hazard. As shown in figure 4, the annual workload of requested inspections has risen dramatically since 2000, from 14 in 2000 to 39 in 2003 (as of December 9, 2003).
The General Counsel’s resources, however, have not kept pace with this growth. The General Counsel’s financial expenditures increased by less than 5 percent from fiscal year 2000 to fiscal year 2003. In addition, during this same period the number of full-time staff assigned to conduct the actual workplace health and safety investigations remained steady at a single individual, an OSHA workplace safety specialist assigned to OOC from the Department of Labor on long-term detail, with the assistance of part-time contractors on a limited basis. OOC also enforces health and safety regulations by conducting periodic inspections of the facilities of all covered entities at least once each Congress, as required by the CAA. These inspections are scheduled ahead of time with legislative agencies and resemble the “walk around” inspections conducted by OSHA. In contrast to most other CAA requirements, OOC is not fully in compliance with the CAA requirement that it “conduct periodic inspections of all facilities” of the agencies covered by the provision. Although OOC conducted periodic inspections at the majority of facilities in the Washington, D.C. area including large structures such as all Senate and House office buildings and the U.S. Capitol building, OOC did not include 10 out of 46 facilities subject to its jurisdiction in its last biennial inspection in 2002. For example, according to documents provided by OOC, the Office did not perform safety and health inspections at the Senate or House page dormitories, or at LOC’s National Library Service for the Blind and Physically Handicapped. OOC officials told us that the decision not to inspect these facilities was largely due to resource constraints. Disability access. 
The CAA requires the General Counsel to both conduct investigations of charges alleging discrimination in public services and accommodations on the basis of disability, and to inspect covered facilities in the legislative branch at least once each Congress for compliance with the public services and accommodations provisions in the ADA. From 1997 through December 9, 2003, five charges of discrimination have been filed with the General Counsel. According to OOC documents, its biennial inspections have included all public areas where constituents, individuals on official business, and visitors have access—approximately 8 million square feet of space. Labor-management relations. Under the CAA, OOC’s General Counsel is responsible for investigating charges of unfair labor practices and for filing and prosecuting complaints in administrative hearings and before the Board of Directors as appropriate. Since 2001, the number of charges filed with the General Counsel has remained fairly constant, varying between 18 and 19 per year, as shown in figure 5. In 2000, OOC experienced a sharp spike in charges filed with the number jumping to 38 in that year. OOC was unable to provide a definitive reason for this increase, but told us that it was probably related to the large group cases going through OOC’s dispute resolution process as these cases raised issues that could relate to charges of unfair labor practices. Despite the relatively stable number of new unfair labor practice charges received by OOC since 2001, the backlog of cases still pending at the end of the year has almost doubled, increasing from 6 in 2001 to 11 in 2003 (as of December 9, 2003). OOC’s Executive Director is responsible for a variety of other functions relating to labor-management relations including supervising union elections and resolving issues involving such matters as good faith bargaining. OOC has supervised 14 union elections since 1997, 12 of these in 2000 or earlier. 
Another duty of the General Counsel under the CAA is to represent OOC in any judicial proceeding under the act, including cases before the U.S. Court of Appeals for the Federal Circuit. In past years, the number of such cases has been low, with no cases filed in 1997, 2001, and 2002 and one case each filed in 1998, 1999, and 2000. However, in 2003 the number of federal circuit review cases involving OOC and requiring attention from the General Counsel and his staff unexpectedly increased to six. OOC’s General Counsel told us that this situation has been exacerbated by the loss of two attorneys during 2003, reducing his previous staffing level of four full-time OOC attorneys to only two, including the General Counsel, by December 2003. According to the General Counsel, his office is currently recruiting for a staff attorney and he expects to have that individual hired by spring of 2004. OOC has a broad mandate under the CAA to provide education and information to the Congress, employing offices, and legislative branch employees about their rights, protections, and responsibilities under the act. To this end, OOC uses a variety of approaches including distributing written material such as home mailings, fliers, bulletins, fact sheets, and reports; conducting briefings with new staff at agency-sponsored orientation sessions; holding seminars at the offices of stakeholders and legislative agencies; maintaining a Web site and information number; and responding to direct inquiries. More recently, OOC’s education efforts have made increased use of technology including a redesign of its Web site to make it more user friendly and increased use of resources such as electronic news bulletins and online tools such as its interactive template for creating an emergency action plan.
Despite such potentially promising initiatives, OOC’s current approach to tracking and reporting on its education efforts—its Education Program Year End Report—remains firmly focused on counting products and activities rather than focusing on results. Other approaches such as conducting feedback surveys and focus groups could provide OOC with valuable mechanisms to increase its understanding of the actual level of awareness specific target populations have of its programs and activities. OOC can then use this knowledge to assess its effectiveness in actually communicating its message rather than simply its diligence in distributing documents. Developing appropriate performance measures has the potential to help OOC realize several significant benefits such as improving its ability to understand and track its progress toward achieving goals, giving managers crucial information on which to base their organizational and management decisions, and creating powerful incentives to influence organizational and individual behavior. Of course, the benefits of such improvements will need to be balanced against real-world considerations, such as the cost and effort involved in gathering and analyzing performance data, and the burden such data collection may present for stakeholders and covered employees. We have previously reported that as agencies develop information systems and capacities for analysis and evaluation, they discover that having a better understanding of the facts and the relationship between their activities and desired outcomes provides them with a solid foundation for focusing their efforts and improving their performance. Facing an increasing workload and scarce resources, OOC’s leadership has begun the process of asking these questions in an effort to find more effective ways in which to do its work.
OOC’s senior leadership has expressed a willingness to explore opportunities to develop the technical and analytical capacity needed to more effectively work toward results. For example, over the last several years OOC has used a variety of methods for tracking and recording summary information on its occupational safety and health-related caseload in order to manage its work in this area. These systems were generally basic in design and ranged from several incompatibly designed databases to using simple tables in a word processor to keep track of cases. None of these approaches provided the General Counsel and his staff with an easy way to systematically examine the approximately 50 open requests for health and safety investigations, or the hundreds of inspections and investigations conducted by OOC in the past, to look for patterns and identify possible common or underlying causes of potential workplace hazards. Toward this end, OOC is exploring the possibility of acquiring a specialized regulatory case-tracking database application that would enable the General Counsel and his staff to take a more strategic and risk-based approach towards their work, including such decisions as assigning cases, determining the appropriate amount of follow-up required, and informing the selection of particular facilities for its biennial inspections. Until OOC decides on whether, and which, permanent case-tracking software system it may adopt, staff in the General Counsel’s office have been developing a new Microsoft Access database intended to offer a short-term solution to OOC’s need to more effectively track basic information related to its occupational safety and health-related caseload, such as case type, key dates, principal parties involved, and actions planned and accomplished. According to the General Counsel, this system should be operational in January 2004.
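The kind of basic case tracking described above can be illustrated with a minimal relational sketch. The table and field names below are assumptions drawn only from the categories the report mentions (case type, key dates, principal parties, and actions planned and accomplished), not OOC's actual Access design; SQLite stands in here purely for illustration.

```python
import sqlite3

# Hypothetical schema: column names are illustrative assumptions, not OOC's.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE safety_cases (
        case_id      INTEGER PRIMARY KEY,
        case_type    TEXT NOT NULL,   -- e.g. 'requested inspection', 'biennial'
        facility     TEXT NOT NULL,
        date_opened  TEXT NOT NULL,   -- ISO 8601 dates sort correctly as text
        date_closed  TEXT,            -- NULL while the case remains open
        parties      TEXT,
        action_planned      TEXT,
        action_accomplished TEXT
    )
""")

sample_rows = [
    (1, "requested inspection", "Facility A", "2003-02-10", "2003-05-01",
     "complainant; agency", "abate hazard", "hazard abated"),
    (2, "requested inspection", "Facility A", "2003-06-15", None,
     "complainant; agency", "abate hazard", None),
    (3, "biennial inspection", "Facility B", "2002-09-03", "2002-11-20",
     "agency", "follow-up visit", "completed"),
]
conn.executemany("INSERT INTO safety_cases VALUES (?,?,?,?,?,?,?,?)", sample_rows)

# Open caseload: the view the report says ad hoc word-processor tables
# could not easily provide.
open_cases = conn.execute(
    "SELECT case_id, facility FROM safety_cases WHERE date_closed IS NULL"
).fetchall()

# Pattern finding: facilities generating repeated cases, a crude input to
# risk-based selection of facilities for biennial inspection.
by_facility = conn.execute(
    "SELECT facility, COUNT(*) AS n FROM safety_cases "
    "GROUP BY facility ORDER BY n DESC"
).fetchall()
```

Even a sketch this small supports the two uses the report highlights: listing open requests for follow-up, and grouping past cases by facility or hazard type to inform where inspection effort should be concentrated.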
Ensuring that an organization is focusing on the right activities to effectively achieve its goals is always an important part of good management control. However, focusing on the right activities is especially important in times of economic scarcity, when the benefit of having programs that deliver a maximum impact towards achieving results is particularly critical. For example, in light of increasing demands for safety and health inspections, and the very small number of OOC staff available to conduct those inspections, OOC’s General Counsel and his staff have begun to explore possible approaches to leverage OOC’s limited resources through constructive engagement with legislative agencies. This approach seeks to obtain agency compliance with occupational safety and health requirements by motivating them to do so out of a sense of common purpose and mutual benefit rather than forcing them with the threat of punitive citations. Toward this end, in February 2004 OOC will sponsor its first-ever Organizational Health and Safety Program Conference. The conference will bring together congressional and agency staff involved in health and safety issues from throughout the legislative branch in order to learn about recent thinking and practice in workplace health and safety and to discuss issues of mutual concern. The conference will include presentations by outside safety experts from a variety of organizations from the legislative branch and elsewhere. Tentatively scheduled presenters include representatives from AOC, organized labor, OSHA, and the National Safety Council. The recent experiences of AOC provide another example of the potential benefits of sharing lessons learned concerning health and safety practices. AOC is undertaking a major effort to augment and make more strategic its approach to worker safety and health. This effort obviously has important implications for AOC and its employees. 
Equally important, sharing AOC’s experiences and leveraging its efforts in such areas as incident reporting, follow-up, and risk mitigation can potentially provide great value across legislative branch agencies. Building on this idea, OOC should explore the possibility of playing a more active role as a central repository for good practices developed by agencies throughout the legislative branch on topics covered under the CAA. OOC’s recent experience offering fire safety training to USCP officers provides a lesson in the importance of carefully targeting such initiatives to the intended audience. The impact of OOC’s initiative may be somewhat limited until OOC develops a deeper understanding of what would make this type of supplemental training useful in the view of USCP’s management and officers. Specifically, a senior USCP official who was directly aware of this effort informed us that although the training contained some good material, it was not sufficiently tailored to USCP to be very useful. The value of thinking about outcomes and the relationship between activities and outcomes can also help OOC make determinations about whether it is providing the right mix of services and activities to achieve its overall goals. For example, until recently OOC’s education efforts have been largely focused on activities such as sending mass mailings of written material to the homes of legislative branch employees, distributing fliers and bulletins to legislative agencies for subsequent distribution, and conducting general information sessions for new employees. Moreover, OOC’s education efforts would benefit from better analysis of the types of complaints that have been made. As noted above, OOC tracks the number of cases it handles and how they proceed through established processes. It does not, however, currently gather data on the nature of complaints.
While protecting the confidentiality of OOC’s case files must continue to be of primary concern, OOC could also seek ways to examine if common issues are being raised. Such information could prove very valuable in targeting OOC’s education efforts. In addition, OOC’s current leadership team recognizes the importance of looking for new ways to get information about the CAA and OOC out to employees. They are also experimenting with new approaches such as posting e-bulletins and other ways to use the Internet creatively. On the basis of our work, ample opportunities exist for continued progress in this area. For example, in our conversations with congressional stakeholders, representatives from the Senate Administrative Office Managers Group told us that they would welcome additional contact with OOC. These Senate office managers are senior staff responsible for working with Members to ensure the efficient and effective operation of each member’s personal office. As such, office managers are key clients of OOC. Office managers meet periodically to discuss issues of mutual interest and concern—an ideal opportunity for OOC outreach. However, representatives from the Senate Administrative Office Managers Group told us that they had not had contact with OOC for several years. OOC should consider whether there are similar groups that it might reach out to as part of its effort to establish consistent and ongoing relationships with its clients. 
We recommend that OOC’s Executive Director and General Counsel:

- identify potential improvements to how the Office measures its activities and performance, including the possibility of using benchmark data from federal agencies with similar functions for purposes of comparison and analysis;
- provide a more complete picture of OOC’s workload by improving how it tracks and reports on single-issue large group requests for counseling and mediation;
- work with the Congress to develop a strategy to ensure that all facilities under OOC’s jurisdiction and located in the Capitol Hill complex and the surrounding Washington, D.C. area—including the Senate and House page dormitories and LOC’s National Library Service for the Blind and Physically Handicapped—are covered as part of the biennial safety inspections required by the CAA;
- establish a clearinghouse for sharing best practice information on topics covered by the CAA;
- work with the Congress to determine the feasibility of using such mechanisms as feedback surveys and focus groups to provide valuable information on the actual level of awareness among target populations concerning OOC’s programs and activities;
- reach out to other groups and forums, such as the Senate Administrative Managers-Chief Clerks Steering Committee;
- use data on the number and type of complaints received by OOC to better target education and information distribution efforts; and
- develop the capacity to use safety and health data to facilitate risk-based decision making.

Effective communication and coordination with both stakeholders and clients is essential for organizations to operate effectively, manage risks, and achieve results. We have previously identified the ability of federal agencies to engage in relevant, reliable, and timely communication relating to internal and external events as fundamental for effective management control.
At OOC, an effective communications strategy could provide a powerful tool to seek mutual understanding among OOC, key stakeholders, and legislative agencies concerning its mission and role; convey OOC’s recent initiatives and improvement efforts; obtain information about the external environment that may affect the Office’s ability to achieve its mission; and build up the trust among stakeholders and clients that is necessary for the Office to realize its goal of becoming more collaborative and partnership-oriented. OOC has recently undertaken a number of important initiatives to improve communications and coordination. Consistent with, and building upon, those initiatives, we identified two specific areas where OOC can continue to make improvements in the way in which it communicates with other entities and organizations:

- ensuring clear, regular, and timely consultation with congressional stakeholders and
- communicating and coordinating with agencies openly and effectively.

A key component of an effective communications strategy is communicating with, and obtaining information from, external stakeholders. OOC’s leaders recognize the importance of communicating with stakeholders and have taken steps to expand their efforts in this area. However, our interviews with a wide range of congressional stakeholders, including majority and minority staff in both the Senate and the House, indicate that OOC’s efforts to effectively consult with the Congress have been uneven and additional efforts are needed. On the one hand, some congressional staff—but by no means all—told us that they believed OOC has made efforts to develop more transparent and collegial working relationships with congressional stakeholders. For example, a congressional staff member cited the efforts of OOC's General Counsel to reach out to congressional staff shortly after he joined the Office in May 2003.
Another staff member cited an example where OOC worked constructively with his office to resolve a potential fire safety problem involving the placement of furniture. This staff member appreciated OOC's willingness to discuss the issue with his office and work to find a satisfactory solution that would both comply with safety requirements and take into account the need of his office to continue to conduct business and the physical limitations of the space involved. On the other hand, a number of staff, including some who acknowledged and appreciated OOC’s recent efforts, said that they remained unclear about the office’s role and services, how it makes decisions, and related matters. Moreover, several congressional staff members told us that they have not seen much outreach from OOC, or that the outreach they did experience was inconsistent and could be more effective. For example, one individual told us that he only received information from OOC when it was interested in proposing a legislative change that would require the cooperation of the Congress. Another concern cited by several staff was the perception that OOC did not make an effort to ensure that they were informed “at the front end” concerning significant activities and initiatives. Thus, the concern is not so much the existence of OOC’s operating procedures. Rather, the concern is communication as those procedures are being applied and OOC undertakes its daily operations. Effective communications strategies take into account how to most effectively communicate the message given their intended audience. For example, in September 2003 OOC initiated a formal rulemaking process to amend parts of its operating procedures. As required by the CAA, OOC’s Board of Directors submitted an announcement for inclusion in the Congressional Record announcing proposed changes in OOC’s procedural rules and inviting comment. 
A key staff member said that it would have been more helpful—and could have avoided, or at least limited, subsequent concerns with the process used to issue the draft rules—if OOC had more fully reached out to key committees and Members before the draft proposal was announced publicly. Another staff member told us that, at a minimum, it would have been helpful if OOC followed the notice by contacting them directly to ensure that they were aware of the proposed rules and the subsequent 30-day comment period, explaining that such announcements are easy to miss if one is not looking for them. Although OOC’s initial posting met its legal obligations, the Office decided to place another notice and extend the comment period. To encourage additional feedback from stakeholders and other interested parties, OOC’s Board decided to hold a public hearing on the proposed changes even though the CAA does not require it. According to OOC, the decision to hold the hearing was consistent with feedback OOC had received several years earlier from some congressional stakeholders. However, instead of creating an opportunity for stakeholders to provide additional feedback, the Board had to cancel the session because only one person had agreed to speak at the hearing. Congressional staff told us that the Congress’ lack of participation in the hearing was not an indication of a lack of interest in the issues to be discussed, but was due to concerns about the nature and structure of the forum. OOC had not informed congressional staff of its intention to seek additional comment in this way. Moreover, one congressional stakeholder said that OOC’s approach to solicit additional comments through a public hearing was inappropriate. Communication protocols provide a potentially valuable tool that organizations can use to avoid such surprises and help foster clearer understanding with stakeholders. 
For example, after working closely with the Congress and after a trial phase, GAO implemented congressional protocols in November 2000. From our experiences in developing the protocols, we have identified key lessons and success factors: developing protocols is a time-consuming process that involves (1) personal commitment and direction from the agency head, (2) senior management participation and buy-in, and (3) continuous outreach to and feedback from external stakeholders. Despite the time and the effort, however, our experience using protocols as a transparent, documented, and consistent way to set priorities has been very positive for us as well as our congressional clients. Similarly, for OOC such protocols could help foster an understanding of its goals, functions, and procedures with its congressional stakeholders. Open and effective coordination with legislative agencies and other stakeholders, including employee groups, is another critical component of an effective communications strategy at OOC. We have previously reported that organizations can develop and refine their operations and better achieve results by establishing channels that facilitate open and effective communication with clients and other recipients of their services and activities. Several officials at legislative branch agencies covered by the CAA told us that at points over the more than 8 years since OOC has been in operation, communications and interactions between the Office and their agencies have not been good. These agency officials told us that some of OOC's past actions had created distrust and had fostered the belief among some staff that OOC was more interested in making them look bad (by using a “gotcha” approach) than in working with agencies to comply with the CAA and create a better workplace. In contrast, the union groups we met with generally characterized their interactions with OOC as positive throughout this period.
OOC’s Board and leadership are aware of the concerns expressed by some legislative agencies and have taken steps to address them. As a result, officials we interviewed at these agencies generally agreed that over the last year or two, OOC has taken steps to improve the working relationships with their respective offices. These officials cited efforts by OOC's senior executives to reach out to them through a series of meetings held by the Executive Director and General Counsel as evidence of a new, more constructive attitude towards legislative agencies. For example, in the area of occupational safety and health enforcement, an AOC official told us that OOC’s General Counsel had initiated several meetings with AOC to discuss possible initiatives to improve health and safety on Capitol Hill. Included in these discussions was AOC’s recent decision to adopt an internal IT application to assist the agency in tracking and monitoring potential safety and health problems before complaints are made to OOC. According to both the AOC official and OOC’s General Counsel, the two organizations have had preliminary discussions on the possibility of sharing such health and safety data, although they have yet to come to an agreement on whether or how to do so. Despite some advances, our interviews with agency officials identified several areas where OOC needs to make improvements in how it communicates and coordinates with agencies covered by the CAA. Among the continuing problems officials mentioned were (1) OOC’s failure to always follow its own rules and procedures when conducting investigations of health and safety complaints and (2) the lack of timely and consistent follow-up on the status and disposition of investigations conducted by the Office and clear communication with agency officials. 
For example, several agency officials told us that OOC was not always consistent in the manner in which it conducts investigations of occupational safety and health-related complaints, occasionally failing to follow its own processes and procedures. The CAA gives OOC’s General Counsel considerable authority to inspect and investigate places of employment under the jurisdiction of employing agencies covered by the CAA and does not require him to provide advance notice before starting an investigation or visiting buildings or facilities. However, to foster a constructive working relationship with the agencies they regulate, OOC’s General Counsel and a staff member told us that it has been a long-standing policy of OOC to immediately notify the agency involved when the Office receives a safety and health-related complaint. In addition, except in cases of emergency or when any delay might pose a danger, it is also OOC's policy to give agencies the option of attending an opening conference before proceeding with the investigation. However, officials at two agencies told us that there have been cases, including some within the past year, where OOC has failed to follow its policy on agency notification. They said these instances have contributed to misunderstandings, confusion, and, in at least one case, the perception among senior agency officials “that a ‘gotcha’ mentality still exists at OOC.” While OOC and the agency involved in this last case do not agree concerning the facts and significance of OOC’s actions that led to this comment on the part of the agency, the situation provides an illustration of the differences that exist in perceptions between OOC and some agency officials. In addition, agency officials told us that often OOC would not follow up with agency officials on the status and disposition of investigations in a timely or consistent manner. 
For example, agency officials told us of cases where, after meeting with OOC staff to discuss the findings of a particular investigation and responding with a plan to address the issues, they did not hear back from OOC for months or, in several instances, for a year or more. The agency officials we spoke with explained that this absence of closure complicated efforts to resolve the current status of cases. Our review of OOC's procedure manual for handling safety and health-related complaints found that it did not provide a clear, complete, and up-to-date source of OOC's policies and procedures on how the Office responds to such complaints. In addition, the manual did not provide clear time frames on when OOC would communicate with agencies during this process. For example, the manual has not been updated since 1997 and does not contain any specific language on the Office's policy of providing agencies with opening conferences as described to us by OOC's General Counsel and his staff. In response to follow-up requests, OOC staff did provide us with a separate one-page document, dated May 1999, which described topics discussed at an opening conference. However, this document also did not clearly set forth OOC's policy on agency notification, and it was not clear how it was used and how widely it had been distributed. OOC’s General Counsel has recently acknowledged the need to revise and update these procedures, but he told us that because of other needs he has not given this a high priority. To establish channels that facilitate open and effective communication, organizations need to clearly set out procedures—such as communication protocols—that they will consistently follow when doing their work. For example, building on the foundation of the congressional protocols we developed in 2000, GAO launched the pilot phase of our agency protocols in 2002, which contain clearly defined and transparent policies and practices on how we carry out our work at federal agencies.
These protocols identify what agencies can expect from GAO and what GAO expects of agencies. Toward this end, our protocols present information on the framework of GAO’s engagement and audit activities—including communication between GAO and agencies, interactions during the course of GAO’s work, and follow-up on GAO’s recommendations—and contain a description of the specific actions and activities we will take at each stage as well as specific time frames when appropriate. In this way, the protocols are intended to help ensure the consistency, fairness, and effectiveness of interactions between GAO and the agencies with which it works. Rather than being just a paperwork exercise, the development of agency protocols that clearly and accurately communicate OOC’s current policies and procedures can be an important tool to help OOC’s management achieve its commitment to communicate more openly and effectively with legislative agencies. In addition, protocols can have a significant impact on OOC's ability to work constructively and fairly with the agencies it regulates, and to accomplish its overall mission goals. Both OOC’s Board of Directors and its senior executives recognize the importance of communicating with stakeholders and have begun to make efforts in this area. Consistent with that commitment, we recommend that OOC take the following steps:

Develop congressional protocols, in close consultation with congressional stakeholders, that would document agreements between the Congress and OOC on what congressional stakeholders can expect as the Office carries out its work. Protocols help to ensure that OOC deals with its congressional stakeholders using clearly defined, consistently applied, and transparent policies and procedures. They can also help OOC reach agreement on the best mix of products and services to achieve its mission.
It is important to note that consulting with stakeholders is not the same as seeking their acceptance or approval on matters where that would not be appropriate. The purpose of such protocols is to help create a basic understanding of OOC’s goals, functions, and procedures, and of what OOC will communicate to whom, when, and how, without compromising the independence the Congress gave OOC to enforce the CAA.

Develop agency protocols, in cooperation with legislative agencies, that would clarify and clearly communicate the procedures OOC will follow when interacting with agencies while carrying out its work.

In both cases OOC should carefully pilot the protocols before they are fully implemented so that OOC, the Congress, and legislative agencies can gain valuable experience in their application and appropriate adjustments can be made. We also recommend that the Executive Director and the General Counsel review and revise OOC’s case handling policies and procedures, such as OOC’s procedure manual for handling safety and health-related complaints, and ensure that they are effectively communicated to appropriate legislative agency officials. The creation of an enhanced control environment forms the foundation for an organization’s ability to put in place the management controls necessary for effective and efficient operations. Well-managed organizations establish and maintain an environment that sets a positive and supportive attitude toward internal control and conscientious management. In our previous work, we have identified several key factors that affect an organization’s ability to create such an environment, including its organizational and leadership structure and its ability to effectively manage and develop its human capital. OOC faces challenges in both of these areas, which it must successfully overcome in order to exercise effective management control.
We have previously reported that federal agencies have used performance agreements between senior political and career executives as a tool to define accountability for specific goals, monitor progress, and contribute to performance evaluations. Congress has also recognized the role that performance agreements can play in holding organizations and executives accountable for results. For example, in 1998, the Congress chartered the Office of Student Financial Assistance as a performance-based organization and required the agency to implement performance agreements. In addition to providing OOC’s Board with a mechanism to increase accountability, performance agreements would also provide the platform for ongoing dialogue to help ensure that the goals and priorities contained in OOC’s strategic plan are carried out by its top executives and help ensure the proper alignment among daily operations and activities and the broader results OOC strives to achieve. Since it was created in 1995, OOC has operated without having any formal performance management system for its Executive Director and General Counsel. Starting in 2003, OOC’s Board of Directors required these officials to prepare an annual self-assessment that they submit to the Board for review. The Executive Director and General Counsel prepare narratives assessing themselves in five performance categories: operational management, external relations, ethics, strategic planning, and Board relations. These narratives then form the basis for a subsequent informal review session with the Board. The development of these self-assessments is an important first step in improving the performance and assuring the accountability of OOC and its executive team. OOC’s current efforts to develop a strategic plan provide an ideal opportunity for the Office to build on this first step. 
Once OOC has reached agreement with its stakeholders and has completed its strategic plan, it can take the next step and develop results-oriented performance agreements with its senior executives that are directly linked to organizational goals embodied in its strategic plan—the absence of which is a major limitation of the current effort. We have reported on a number of benefits of performance agreements that may have direct importance to achieving improved performance at OOC. Performance agreements have:

Strengthened alignment of results-oriented goals with daily operations. Performance agreements define accountability for specific goals and help to align daily operations with agencies' results-oriented, programmatic goals.

Fostered collaboration across organizational boundaries. Performance agreements encourage executives to work across traditional organizational boundaries or "silos" by focusing on the achievement of results-oriented goals.

Enhanced opportunities to discuss and routinely use performance information to make program improvements. Performance agreements facilitate communication about organizational performance, and provide opportunities to pinpoint improved performance.

Provided a results-oriented basis for individual accountability. Performance agreements provide results-oriented performance information to serve as the basis for executive performance evaluations.

Maintained continuity of program goals during leadership transitions. Performance agreements help to maintain a consistent focus on a set of broad programmatic priorities during changes in leadership.

In addition to assuring accountability and alignment of operations to results, performance agreements could help OOC ensure it maintains a common and consistent vision and approach to the implementation of the CAA.
In the past, a lack of such a common vision on how OOC should approach the enforcement of workplace health and safety requirements or interact with stakeholders resulted in clashes between the Executive Director and the previous General Counsel. Specifically, a number of congressional and legislative agency officials we interviewed had the perception that a previous General Counsel’s emphasis on a strict “gotcha” approach toward enforcement led to a combative and adversarial relationship with legislative agencies and other stakeholders that was at odds with the more collaborative approach supported by OOC’s Executive Director. OOC’s current Board, Executive Director, and General Counsel told us that they share a common commitment to pursuing a collaborative and constructive approach toward enforcing the CAA. OOC’s recent effort to develop a strategic plan is a reflection of the common vision of the organization’s mission, goals, and operational approach shared by OOC’s current leaders. In addition, they appear to enjoy good working relationships among themselves. However, the standards of effective management control and OOC’s own past experience demonstrate the need for the Office to take appropriate steps to address the challenges that OOC’s organizational structure presents to effective management control. Effective human capital management is an important factor contributing to management control as well as an organization’s ability to achieve results. We have identified two human capital challenges currently facing OOC: (1) the need to ensure leadership continuity and preserve critical organizational knowledge in the face of the impending loss of a large number of leaders over the next 2 years and (2) the need to establish a modern, effective, and credible performance management system with appropriate safeguards for all OOC employees. Sustained focus and direction from top leadership is a key component of effective management.
Management control requires that organizations consider the effect upon their operations if a large number of employees—including executives and other leaders—are expected to leave and then establish criteria for a retention or mitigation strategy. OOC currently faces a considerable loss of knowledge and leadership capacity due to impending turnover of its Board of Directors. This expected loss is the result of CAA provisions that limit current Board members to a single 5-year term. For example, within the next year and a half all five members of the current Board will reach the end of their terms. When the Congress crafted the CAA, it included a provision to provide for staggered terms for OOC’s Board. However, delays in the appointment of successors to the original group of board members resulted in the appointment of several new members at the same time. Specifically, the Chair and two members of the five-member Board were appointed in October 1999 and are scheduled to complete their terms in September 2004. The terms of the two remaining Board members will end eight months later in May 2005. The situation is only slightly better for OOC’s four appointed executives. Similar to the Board, the CAA restricts OOC’s four appointed executives to nonrenewable 5-year terms of service. In addition, this restriction also prevents the possibility of having a deputy executive director serve in the role of executive director, making succession planning among this group of executives impossible. The terms of all but the General Counsel will expire within 6 months of each other in 2006. If one considers both OOC’s Board and its senior executives together, eight out of nine of the organization’s top officials will have left by September 2006.
The loss of such a large proportion of OOC’s senior leadership within a relatively short period will likely result in a loss of leadership continuity, institutional knowledge, and expertise that could adversely affect OOC’s performance, at least in the short term. Other federal agencies with functions similar to OOC’s do not restrict their board members from serving subsequent terms. For example, there are no statutory restrictions preventing the five board members of the EEOC and the three board members of the Federal Labor Relations Authority from serving additional terms. In addition, the statute governing the National Labor Relations Board permits the five board members to be reappointed. We have previously reported that performance management systems can create a “line of sight” showing how team, unit, and individual performance can contribute to overall organizational results. An explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. Organizations naturally need to develop performance management systems that reflect their specific structures and priorities. Given OOC’s small size and specific situation, it is important that OOC consider these and other key practices in the context of its own needs, capabilities, and circumstances. In September 2002, OOC rolled out its first formal performance management system to staff who report to the Executive Director. OOC assesses employees on eight performance dimensions: (1) job knowledge and technical skills, (2) overall quality of work, (3) employee and professional relationships, (4) planning and organization, (5) work habits, (6) judgment, (7) initiative and creativity, and (8) development. Supervisors are to meet with their staff twice a year to provide ratings and feedback on the previous 6-month assessment period. They also are to hold an interim meeting halfway through each assessment period.
For these eight performance dimensions, supervisors give each employee two separate ratings—the first describes the employee’s overall achievement in the performance dimension, and the second represents the progress of the employee toward achieving specific goals established at the start of the evaluation cycle. On one hand, OOC’s decision to establish a formal performance management system covering at least some of its employees represents a good first step and its performance management system exhibits some positive characteristics. For example, OOC’s requirement that supervisors and employees meet at least four times a year to discuss the employees’ recent performance and individual goals provides regular opportunities for staff to discuss and act on feedback. On the other hand, there are areas where the system can be improved as OOC’s efforts in this area move forward. For example, OOC’s current performance management system assesses staff against the eight performance dimensions identified above without providing specific standards or detailed descriptions of the behaviors associated with varying levels of performance. For instance, for the performance dimension “overall quality of work,” the only descriptive standard provided is “consistently produces competent work.” OOC should explore the usefulness of including descriptions of competencies—those specific skills or supporting behaviors that employees are expected to demonstrate as they carry out their work—in its performance management system to provide a basis for making judgments about an individual’s performance and contribution to OOC’s results. In addition, OOC’s current performance management system does not apply to the General Counsel or any OOC attorneys who report to him. These employees continue to work without any formal performance management system in place.
Moving forward, the involvement of employees will be crucial to the success of any efforts by OOC to create a new performance management system or reform and expand its existing one. Given OOC’s small size, the cost in time and effort to obtain such feedback likely could be minimal. Congress should consider making legislative changes to the CAA to help ensure that OOC maintains institutional continuity into the future. Specifically, the Congress should consider amending the CAA to allow:

Board members to be reappointed to an additional term, and

the Executive Director, General Counsel, and the two Deputy Executive Directors to be reappointed to serve subsequent terms in either the same or a different position, if warranted and the Congress so desires.

Any reappointments should be contingent on an individual’s demonstrated performance and achievement of goals as documented in executive performance agreements for OOC’s Executive Director and General Counsel, as recommended below, or another performance management system in the case of OOC’s two Deputy Executive Directors.

We recommend that OOC’s Board:

Require performance agreements between the Board of Directors and OOC’s Executive Director and General Counsel to help translate the Office’s strategic goals into day-to-day operations and to hold these executives accountable for achieving program results.

We recommend that OOC’s Executive Director and General Counsel:

Establish a modern, effective, and credible performance management system with appropriate safeguards for all OOC employees. OOC should build on the first step of establishing a basic performance management system for employees reporting to the Executive Director by ensuring that all employees, including those who report to the General Counsel, participate in an individual performance management system.
In addition, OOC should look for ways to develop a more robust and effective approach to individual performance management by considering key practices employed by leading organizations. Actively involve all OOC’s employees in this process, whether it entails the revision and expansion of its existing performance management system or the creation of an entirely new initiative. On January 22, 2004, we provided a draft of this report to OOC’s Board of Directors, Executive Director, and General Counsel for their review and comment. We received written comments prepared jointly by the Board of Directors, Executive Director, and General Counsel on January 26, 2004. In their joint response, OOC generally agreed with the contents of this report, noting that the Office has begun to adopt many of our recommendations as part of its strategic planning process and current programmatic initiatives. Furthermore, the Board of Directors strongly supports our statement that the Congress should consider amending the CAA to allow OOC’s Board members to serve an additional term and to allow the Executive Director, General Counsel, and the two Deputy Executive Directors to be reappointed to serve additional terms in either the same or a different position, if warranted and desired. As mentioned in their response and as acknowledged in our report, we have provided information and assistance to OOC regarding their management control improvement efforts and we plan to continue working with OOC’s leadership and meet with them regularly to discuss their progress. Their written response is reprinted in appendix II. In addition, OOC’s Executive Director and General Counsel provided minor technical clarifications, and we made those changes where appropriate. We will provide copies of this report to other interested congressional committees, and the Office of Compliance. In addition, we will make copies available to others upon request. 
The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Steven Lozano at (202) 512-6806 or at [email protected] and [email protected]. Major contributors to this report were Jeff Dawson, Peter J. Del Toro, Jeffery Bass, Bruce Goddard, and Jeff McDermott. To meet our objective of assessing key management controls in place at the Office of Compliance (OOC) and identifying what improvements, if any, could be made to strengthen OOC’s effectiveness and efficiency, we followed a multipronged approach. First, we analyzed applicable laws, legislative history, rules, and regulations; and obtained and analyzed written documentation of guidance, policies, procedures, and performance of OOC. Second, to understand the complex operating environment and long-standing challenges facing the agency, we conducted a series of interviews with agency officials, key stakeholders, and officials from agencies covered by the Congressional Accountability Act (CAA). To obtain OOC’s perspectives on its operations and the challenges it faces, we interviewed OOC’s Board of Directors as well as each of its top executives—the Executive Director, General Counsel, Deputy Executive Director for the Senate, and Deputy Executive Director for the House. We also met with all of OOC’s managers, including the Deputy General Counsel, Director of Counseling, and Budget and Administrative Officer. To understand how key stakeholders perceive the OOC, we conducted 19 interviews with selected majority and minority congressional staff from both the Senate and House.
Among those we interviewed were staff from Senate and House leadership offices, the Senate and House Subcommittees on Legislative Branch Appropriations, the Committee on Governmental Affairs, the Committee on House Administration, the Office of the Senate Employment Counsel, the Senate Sergeant at Arms, the Senate Administrative Managers Group, the Office of the Clerk of the House, the Office of the House Employment Counsel, the Office of the Chief Administrative Officer of the House, and the House Inspector General, as well as personal staff of several senators and representatives. We also spoke with cognizant officials from agencies covered by the CAA to obtain their views of the performance of the Office. These included the Architect of the Capitol, the Congressional Budget Office, the United States Capitol Police, the Office of the Attending Physician, the Library of Congress, and GAO. To obtain the perspectives of organized labor and employee groups, we spoke with two of the largest unions representing employees in legislative agencies—the American Federation of State, County, and Municipal Employees and the Fraternal Order of Police. In addition, we conducted selected reliability and validity reviews of OOC’s dispute resolution process database. For these reviews, we questioned OOC staff about their internal controls for their dispute resolution database. We then drew a random sample of 5 cases out of a total field of 44 cases reported as closed in the database for 2002 and compared the electronic data to source documents. We also examined whether OOC was processing cases within statutorily defined thresholds for key process phases. OOC’s responses to our questions and the results of this comparison led us to conclude that the data were sufficiently reliable for the purposes of our report. We also drew on key management practices and guidance identified in previously issued GAO reports, where appropriate.
As part of a process of constructive engagement, we provided OOC with briefings, reports, and examples of best practices in the areas we reviewed. For example, at the OOC’s request, GAO officials provided briefings on our approach to strategic planning and we provided copies of our strategic planning documents. On January 22, 2004, we provided a draft of this report to OOC’s Board of Directors, Executive Director, and General Counsel for their review and comment. We received written comments prepared jointly by the Board of Directors, Executive Director, and General Counsel on January 26, 2004. Their written response is reprinted in appendix II. OOC also provided technical comments that we have incorporated where appropriate. We performed our work in Washington, D.C., from January 2003 through January 2004 in accordance with generally accepted government auditing standards.
The Consolidated Appropriations Resolution of 2003 Conference Report mandated that GAO review the Office of Compliance (OOC), an independent legislative branch agency created by the Congressional Accountability Act of 1995 (CAA). OOC, a 15-person office with about $2 million in expenditures during fiscal year 2003, administers and enforces various CAA provisions related to fair employment and occupational safety and health among certain legislative branch agencies. OOC's current Executive Director has been in place since April 2001 and its General Counsel joined the Office in June 2003. The mandate directed GAO to assess the OOC's overall effectiveness and efficiency and to make recommendations, as appropriate. OOC is in the early stages of a concerted and vitally needed effort to improve and strengthen management control across the Office and to carry out its mission more effectively and efficiently while safeguarding its institutional independence. OOC's success in completing this important effort depends upon making significant progress on a number of key management control areas: Sharpening focus on results. OOC's current strategic planning initiative is beginning to address the more fundamental question of the Office's effectiveness rather than the Office's traditional focus on activities and outputs, such as the number of cases processed and inspections conducted. OOC's planning initiative can also provide a vehicle for engaging and consulting with key congressional and other stakeholders on OOC's purposes, how those purposes will be achieved, how progress will be assessed, and for sustaining feedback on what progress is being made and what additional improvement opportunities exist. The planning initiative is still very much a work in progress and continued efforts are needed in a number of key areas including developing results-oriented performance measures. Ensuring an effective program structure.
As OOC shifts its focus from outputs and activities to results, it must put in place a more effective program structure that includes new ways of doing business. OOC has taken a number of actions to administer the CAA, such as managing a dispute resolution process and conducting investigations and inspections to ensure compliance with safety and health standards. However, OOC is not fully in compliance with the CAA's requirement concerning biennial safety and health inspections of legislative branch agency facilities. OOC also needs to expand on recent efforts to develop programs that are based on collaboration with legislative branch agencies. Building effective communication emphasizing outreach and coordination. OOC's congressional and other stakeholders whom we interviewed said that OOC recently has used a more collaborative approach rather than the "gotcha" approach of the past. On the other hand, several agency officials said that current interactions with OOC could be improved. To facilitate more effective communications, OOC should establish congressional and agency protocols to document agreements between the Congress, legislative branch agencies, and OOC on what can be expected as OOC carries out its work. Creating and sustaining an enhanced management control environment. Since its creation, OOC has operated without having any formal performance management system for its Executive Director and General Counsel. OOC should establish an enhanced management control environment and strengthen accountability by requiring performance agreements between the Board and both the Executive Director and General Counsel, as well as expanding and improving on OOC's performance management system for all staff. Another important challenge concerns the lack of institutional continuity that may occur due to statutory term limits on OOC's leadership positions. 
To prevent the loss of critical organizational knowledge, the Congress should consider changing the term limits contained in the CAA.
We substantiated the allegation of gross mismanagement of property at IHS. Specifically, we found that thousands of computers and other property, worth millions of dollars, have been lost or stolen over the past several years. We analyzed IHS reports for headquarters and the 12 regions from the last 4 fiscal years. These reports identified over 5,000 property items, worth about $15.8 million, that were lost or stolen from IHS headquarters and field offices throughout the country. The number and dollar value of this missing property are likely much higher because IHS did not conduct full inventories of accountable property for all of its locations and did not provide us with all inventory documents as requested. Despite IHS's attempts to obstruct our investigation, our full physical inventory at headquarters and our random sample of property at 7 field locations identified millions of dollars of missing property. Our analysis of Report of Survey records from IHS headquarters and field offices shows that from fiscal year 2004 through fiscal year 2007, IHS property managers identified over 5,000 lost or stolen property items worth about $15.8 million. Although we did receive some documentation from IHS, the number and dollar value of items that have been lost or stolen since 2004 are likely much higher for the following reasons. First, IHS does not consistently document lost or stolen property items. For example, 9 of the 12 IHS regional offices did not perform a full physical inventory in fiscal year 2007. Second, an average of 5 of the 12 regions did not provide us with all of the reports used to document missing property for each year since fiscal year 2004, as we requested. Third, we found about $11 million in additional inventory shortages from our analysis of inventory reports provided to us by IHS, but we did not include this amount in our estimate of lost or stolen property because IHS has not made a final determination on this missing property.
Some of the egregious examples of lost or stolen property include:

In April 2007, a desktop computer containing a database of uranium miners’ names, social security numbers, and medical histories was stolen from an IHS hospital in New Mexico. According to an HHS report, IHS attempted to notify the 849 miners whose personal information was compromised, but IHS did not issue a press release to inform the public of the compromised data.

From 1999 through 2005, IHS did not follow required procedures to document the transfer of property from IHS to the Alaska Native Tribal Health Consortium, resulting in a 5-year unsuccessful attempt by IHS to reconcile the inventory. Our analysis of IHS documentation revealed that about $6 million of this property—including all-terrain vehicles, generators, van trailers, pickup trucks, tractors, and other heavy equipment—was lost or stolen.

In September 2006, IHS property staff in Tucson attempted to write off over $275,000 worth of property, including Jaws of Life equipment valued at $21,000. The acting area director in Tucson refused to approve the write-off because of the egregious nature of the property loss.

To substantiate the whistleblower’s allegation of missing IT equipment, we performed our own full inventory of IT equipment at IHS headquarters. Our results were consistent with what the whistleblower claimed. Specifically, of the 3,155 pieces of IT equipment recorded in the records for IHS headquarters, we determined that about 1,140 items (or about 36 percent) were lost, stolen, or unaccounted for. These items, valued at around $2 million, included computers, computer servers, video projectors, and digital cameras. According to IHS records, 64 of the items we identified as missing during our physical inventory were “new” in April 2007. During our investigation of the whistleblower’s complaint, IHS made a concerted effort to obstruct our work.
For example, the IHS Director over property misrepresented to us that IHS was able to find about 800 of the missing items from the whistleblower's complaint. In addition, an IHS property specialist attempted to provide documentation confirming that 571 missing items were properly disposed of by IHS. However, we found that the documentation he provided was not dated and contained no signatures. Finally, after we questioned IHS because the original receiving reports provided to us were missing key information, IHS provided us with fabricated receiving reports. Figure 1 shows the fabricated receiving report for a shipment of new scanners delivered to IHS. As shown in figure 1, there is almost a 3-month gap between the date the equipment was received in September and the date that the receiving report was completed and signed in December—even though the document should have been signed upon receipt. In fact, the new receiving report IHS provided was signed on the same date we requested it, strongly suggesting that IHS fabricated these documents in order to obstruct our investigation.

We selected a random sample of IT equipment inventory at seven IHS field offices to determine whether the lack of accountability for inventory was confined to headquarters or occurred elsewhere within the agency. Similar to our finding at IHS headquarters, our sample results also indicate that a substantial number of pieces of IT equipment were lost, stolen, or unaccounted for. Specifically, we estimate that for the 7 locations, about 1,200 equipment items (17 percent), worth $2.6 million, were lost, stolen, or unaccounted for. Furthermore, some of the missing equipment from the seven field locations could have contained sensitive information. We found that many of the missing laptops were assigned to IHS hospitals and, therefore, could have contained patient information, social security numbers, and other personal information.
IHS has also exhibited ineffective management over the procurement of IT equipment, which has led to wasteful spending of taxpayer funds. Some examples of wasteful spending that we observed during our audit of headquarters and field offices include: Approximately 10 pieces of IT equipment, on average, are issued for every employee at IHS headquarters. Although some of these may be older items that were not properly disposed of, we did find that many employees, including administrative assistants, were assigned two computer monitors, a printer and scanner, a blackberry, subwoofer speakers, and multiple laptops in addition to their desktop computer. Many of these employees said they rarely used all of this equipment, and some could not even remember the passwords for some of their multiple laptops. IHS purchased numerous computers for headquarters staff in excess of expected need. For example, IHS purchased 134 new desktop computers and monitors for $161,700 in the summer of 2007. As of February 2008, 25 of these computers and monitors—valued at about $30,000—were in storage at IHS headquarters. In addition, many of the desktop computers and monitors purchased in the summer of 2007 for IHS headquarters were assigned to vacant offices.

The lost or stolen property and waste we detected at IHS can be attributed to the agency's weak internal control environment and its ineffective implementation of numerous property policies. In particular, IHS management has failed to establish a strong "tone at the top" by allowing inadequate accountability over property to persist for years and by neglecting to fully investigate cases related to lost and stolen items. Furthermore, IHS has not updated its personal property management policies since 1992.
Moreover, IHS did not (1) conduct annual inventories of accountable property; (2) use receiving agents for acquired property at each location and designate property custodial officers in writing to be responsible for the proper use, maintenance, and protection of property; (3) place barcodes on accountable property to identify it as government property; (4) maintain proper individual user-level accountability, including custody receipts, for issued property; (5) safeguard IT equipment; or (6) record certain property in its new property management information system.

To strengthen IHS's overall control environment and "tone at the top," we made 10 recommendations to IHS to update and enforce the property management policies of both HHS and IHS. Specifically, we recommended that the Director of IHS direct IHS property officials to take the following 10 actions: Update IHS personal property management policies to reflect any policy changes that have occurred since the last update in 1992. Investigate circumstances surrounding missing or stolen property instead of writing off losses without holding anyone accountable. Enforce policy to conduct annual inventories of accountable personal property at headquarters and all field locations. Enforce policy to use receiving agents to document the receipt of property and distribute the property to its intended user and to designate property custodial officers in writing to be responsible for the proper use, maintenance, and protection of property. Enforce policy to place bar codes on all accountable property. Enforce policy to document the issuance of property using hand receipts and make sure that employees account for property at the time of transfer, separation, change in duties, or on demand by the proper authority. Maintain information on users of all accountable property, including their buildings and room numbers, so that property can easily be located.
Physically secure and protect property to guard against loss and theft of equipment. Enforce the use of the property management information system database to create reliable inventory records. Establish procedures to track all sensitive equipment such as blackberries and cell phones even if they fall under the accountable dollar threshold criteria. HHS agreed with 9 of the 10 recommendations. HHS disagreed with our recommendation to establish procedures to track all sensitive equipment such as blackberries and cell phones even if they fall under the accountable dollar threshold criteria. We made this recommendation because we identified examples of lost or stolen equipment that contained sensitive data, such as a PDA containing medical data for patients at a Tucson, Arizona area hospital. According to an IHS official, the device contained no password or data encryption, meaning that anyone who found (or stole) the PDA could have accessed the sensitive medical data. While we recognize that IHS may have taken steps to prevent the unauthorized release of sensitive data and acknowledge that it is not required to track devices under a certain threshold, we are concerned about the potential harm to the public caused by the loss or theft of this type of equipment. Therefore, we continue to believe that such equipment should be tracked and that our recommendation remains valid. Mr. Chairman and Members of the Committee, this concludes our statement. We would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
In addition to the individual named above, the individuals who made major contributions to this testimony were Verginie Amirkhanian, Erika Axelson, Joonho Choi, Jennifer Costello, Jane Ervin, Jessica Gray, Richard Guthrie, John Kelly, Bret Kressin, Richard Kusman, Barbara Lewis, Megan Maisel, Andrew McIntosh, Shawn Mongin, Sandra Moore, James Murphy, Andy O’Connell, George Ogilvie, Chevalier Strong, Quan Thai, Matt Valenta, and David Yoder. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In June 2007, GAO received information from a whistleblower through GAO's FraudNET hotline alleging millions of dollars in lost and stolen property and gross mismanagement of property at Indian Health Service (IHS), an operating division of the Department of Health and Human Services (HHS). GAO was asked to conduct a forensic audit and related investigations to (1) determine whether GAO could substantiate the allegation of lost and stolen property at IHS and identify examples of wasteful purchases and (2) identify the key causes of any loss, theft, or waste. GAO analyzed IHS property records from fiscal years 2004 to 2007, conducted a full physical inventory at IHS headquarters, and statistically tested inventory of information technology (IT) equipment at seven IHS field locations in 2007 and 2008. GAO also examined IHS policies, conducted interviews with IHS officials, and assessed the security of property. Millions of dollars worth of IHS property has been lost or stolen over the past several years. Specifically: IHS identified over 5,000 lost or stolen property items, worth about $15.8 million, from fiscal years 2004 through 2007. These missing items included all-terrain vehicles and tractors; Jaws of Life equipment; and a computer containing sensitive data, including social security numbers. GAO's physical inventory identified that over 1,100 IT items, worth about $2 million, were missing from IHS headquarters. These items represented about 36 percent of all IT equipment on the books at headquarters in 2007 and included laptops and digital cameras. Further, IHS staff attempted to obstruct GAO's investigation by fabricating hundreds of documents. GAO also estimates that IHS had about 1,200 missing IT equipment items at seven field office locations worth approximately $2.6 million. This represented about 17 percent of all IT equipment at these locations. 
However, the dollar value of lost or stolen items and the extent of compromised data are unknown because IHS does not consistently document lost or stolen property, and GAO only tested a limited number of IHS locations. Information related to cases where GAO identified fabrication of documents and potential release of sensitive data was referred to the HHS Inspector General for further investigation. GAO identified that the loss, theft, and waste can be attributed to IHS's weak internal control environment. IHS management has failed to establish a strong "tone at the top," allowing property management problems to continue for more than a decade with little or no improvement or accountability for lost and stolen property and compromise of sensitive personal data. In addition, IHS has not effectively implemented numerous property policies, including the proper safeguards for its expensive IT equipment. For example, IHS disposed of over $700,000 worth of equipment because it was "infested with bat dung."
EPA offers two types of grants—nondiscretionary and discretionary: Nondiscretionary grants support water infrastructure projects, such as renovating municipal drinking water facilities, and continuing environmental programs, such as the Clean Air Program for monitoring and enforcing Clean Air Act regulations. For these grants, Congress directs awards to one or more classes of prospective recipients who meet specific criteria for eligibility. These continuing environmental grants are often awarded on the basis of formulas prescribed by law or agency regulation. In fiscal year 2002, EPA awarded about $3.5 billion in nondiscretionary grants. EPA has primarily awarded these grants to states or other governmental entities. Discretionary grants fund a variety of activities, such as environmental research and training. EPA has the discretion to independently determine the recipients and funding levels for these grants. In fiscal year 2002, EPA awarded about $719 million in discretionary grants. EPA has awarded these grants to nonprofit organizations and universities in addition to governmental entities.

EPA administers and oversees grants through the Grants Administration Division within the Office of Grants and Debarment, 12 program offices in headquarters, and EPA's 10 regional offices. The Grants Administration Division develops overall grants policy. About 102 grant specialists in headquarters and the regions are responsible for overseeing the administration of grants. EPA also has approximately 3,000 project officers within headquarters program offices and the regions. These officers are responsible for overseeing the technical or programmatic aspects of the grants. While grant specialists are dedicated to grants management, EPA staff members who serve as project officers have other primary responsibilities.

The grant process has four phases: Pre-award. EPA reviews the application paperwork and makes an award decision. Award.
EPA prepares the grant documents and instructs the grantee on technical requirements, and the grantee signs an agreement to comply with all requirements. Post-award. EPA provides technical assistance and oversight; the grantee completes the work, and the project ends. Closeout of the award. The project officer ensures that the project is completed; the grants management office prepares closeout documents and notifies the grantee that the grant is completed. EPA has had persistent problems in managing its grants. In 1996, EPA’s Inspector General testified before Congress that EPA did not fulfill its obligation to properly monitor grants. Acknowledging these problems, EPA identified oversight, including grant closeouts, as a material weakness—a management control weakness that the EPA Administrator determines is significant enough to report to the President and Congress. EPA’s fiscal year 1999 Integrity Act report indicated that this oversight material weakness had been corrected, but the Inspector General testified that the weakness continued. In 2002, the Inspector General and the Office of Management and Budget recommended that EPA, once again, designate grants management as a material weakness. EPA ultimately decided to maintain this issue as an agency-level weakness, which is a lower level of risk than a material weakness. EPA made this decision because it believes its ongoing corrective action efforts will help to resolve outstanding grants management problems. However, in adding EPA’s grants management to GAO’s list of EPA’s major performance and accountability challenges in January 2003, we signaled our concern that EPA has not yet taken action to ensure that it can manage its grants effectively. EPA faces four major, persistent problems in managing its grants. It must resolve these problems in order to improve its grants management. 
Specifically, EPA has not always awarded its discretionary grants competitively or ensured that it solicits grant proposals from a large pool of applicants; effectively overseen its grantees' progress and compliance with the terms of their grants; managed its grants so that they are effectively used to achieve environmental results; or effectively managed its grants management resources by holding its staff accountable for performing their duties, ensuring that the staff are adequately trained and appropriately allocated, and providing them with adequate management information.

Until September 2002, EPA did not have a policy for competing the discretionary grants that might be eligible for competition—about $719 million of its total $4.2 billion in grant funding in fiscal year 2002. Consequently, EPA was not promoting competition. According to EPA's own internal management reviews and an Inspector General report, EPA did not always compete its discretionary grants when competition might have been warranted. By competitively soliciting grants, EPA would be able to choose the best project at the least cost to the government; such competition is also encouraged by the Federal Grant and Cooperative Agreement Act of 1977. EPA can award its discretionary grants noncompetitively; however, it is required by agency guidance to document the reasons for these decisions in a "decision memorandum." It has not consistently done so, according to EPA's internal management reviews. Lack of documentation raises questions about the award process and ultimately about whether EPA is providing its grant funds to the best-qualified applicants. Furthermore, EPA has not always engaged in widespread solicitation when it could be beneficial to do so. This type of solicitation would provide greater assurance that EPA receives proposals from a variety of eligible and highly qualified applicants who otherwise may not have known about grant opportunities.
According to a 2001 EPA Inspector General report, program officials indicated that widespread solicitation was not necessary because "word gets out" to eligible applicants. Applicants often sent their proposals directly to these program officials, who funded them using "uniquely qualified" as the justification for a noncompetitive award. This procedure creates the appearance of preferential treatment by not offering the same opportunities to all potential applicants. In addition, the agency provided incomplete or inconsistent public information on its grant programs in the Catalog of Federal Domestic Assistance, and therefore the public and potential applicants may not have been adequately informed of funding opportunities.

EPA has faced five persistent problems in overseeing its grants. First, EPA's internal reviews found that grantees' progress reports, one of the best sources of information for monitoring recipients, did not include required financial information, and grantees had not always submitted progress reports in a timely fashion. EPA generally requires recipients to submit progress reports to the project officer within a specified time frame. These reports are to include progress to date, any difficulties encountered, a discussion of expenditures compared to work completed, and an explanation of significant discrepancies. Although the recipient is responsible for submitting timely progress reports that discuss the project's financial status, the project officer is responsible for ensuring that the recipient has done so. Second, project officers and grant specialists did not always document their monitoring activities, which raises questions about the extent of the monitoring they did conduct.
According to an EPA internal review, for example, one grants management office developed a form to ensure monitoring activities were completed, but the form was missing from 50 percent of the grant files reviewed, and when the monitoring form was used, it was not always completed. Furthermore, project officers did not always document that they had monitored required key areas, such as ensuring compliance with the terms and conditions of the grant award.

Third, EPA has not always ensured that different types of grantees have adequate financial and internal controls to ensure that they use federal funds properly. For example, in 2001, we reported that EPA's oversight of nonprofit grantees' costs did not ensure that grant funds were used for costs allowed under guidance published by the Office of Management and Budget. In particular, EPA's on-site reviews were flawed. The reviews did not include transaction testing to identify expenditures that are not allowed, such as lobbying. We also found that EPA had conducted on-site reviews at only 4 percent of nonprofit grantees, who might have had inexperienced staff and inadequate financial and internal controls. In 2000 and 2002, the EPA Inspector General reported that one state's department of environmental management and two tribes, respectively, lacked adequate financial and internal controls. These problems could have been identified through EPA oversight of grantees.

Fourth, EPA has sometimes not ensured that grantees are complying with certain grant regulations, such as those pertaining to grantee procurement and conflicts of interest. In 2002, the EPA Inspector General reported that EPA did not monitor grantees' procurements to determine if the grantees were using a competitive process to obtain the best products, at the best price, from the most qualified firms.
In 1999 and 2002, the EPA Inspector General reported conflict-of-interest problems because grant recipients had awarded contracts to parties who had assisted them in preparing their grants and therefore had advance knowledge about grantees’ plans to award contracts. Finally, EPA has not fully ensured that recipients are submitting final reports in a timely manner and meeting grant objectives. For example, in 2000, we reported that EPA had not adequately tracked its Science To Achieve Results research grants to ensure their on-time completion. We found that 144 of the nearly 200 grants we reviewed had missed their deadline for submitting final reports, even after some extensions had been given. Also, in 1998, EPA’s Inspector General reported that EPA had not monitored training assistance grants to nonprofit grantees to determine how many students were being trained or how much the training cost. EPA awarded some grants before considering how the results of the grantees’ work would contribute to achieving environmental results. In 2001, we reported that EPA program officials treated EPA’s strategic goals and objectives not as a tool to guide the selection of grants, but rather as a clerical tool for categorizing grants after the funds were already awarded. By assessing the relevance of these grants to EPA’s strategic plan after selecting the grantees, EPA cannot ensure that it is selecting the projects that will best help it accomplish its mission. EPA has also not developed environmental measures and outcomes for all of its grant programs. In 2000, we reported that EPA did not have program criteria to measure the effectiveness of its Science To Achieve Results program. Instead, EPA’s management of the program focused on the procedures and processes of awarding grants. As a result, EPA was uncertain what the program was achieving. 
Similarly, the Office of Management and Budget recently evaluated four EPA grant programs to assess the programs' effectiveness at achieving and measuring results. The office found that these four EPA grant programs lacked outcome-based measures—measures that demonstrated the impact of the programs on improving human health and the environment. The office concluded that one of EPA's major challenges was demonstrating program effectiveness in achieving public health and environmental results.

EPA often does not require grantees to submit work plans that explain how a project would achieve measurable environmental results. The grantee work plan describes the project, its objectives, and the method the grantee will use to accomplish the objectives. An effective work plan should, among other things, list the grant's expected outcomes. The project officer uses the work plan to evaluate performance under the agreement. In 2002, EPA's Inspector General reported that EPA approved some grantees' work plans without determining the projects' long-term human health and environmental outcomes. In fact, for almost half of the 42 grants reviewed, EPA did not even attempt to measure the projects' outcomes. Instead, EPA funded grants on the basis of work plans that focused on short-term procedural results, such as meetings or conferences. In some cases, it was unclear what the grant funding had accomplished.

Both EPA's internal management reviews and its Inspector General reports have noted several problems in how effectively and efficiently EPA manages its grants staff and other resources. In terms of staff, the agency has not always held accountable its staff responsible for grants management, such as project officers and grant specialists. EPA's internal management reviews have found that, in some cases, job descriptions or performance standards were inadequate. The Inspector General recently reported similar findings.
According to the Inspector General, agency leadership had not always emphasized the importance of project officer duties, nor held project officers accountable for performing certain duties. More specifically, project officer responsibilities were not clearly defined in their performance agreements and position descriptions, and there were no consequences when required duties were not performed. EPA has also not provided all grant staff with the training necessary to properly manage all aspects of grants. EPA’s internal management reviews have noted that some staff who were managing grants had not completed the basic project officer training. Other staff may have completed the basic training but needed additional training to refresh their skills or to become familiar with all of their grants management responsibilities and requirements. For example, in some instances, project officers were not familiar with the five key areas they were to review when monitoring grantees, such as the financial aspects of a grantee’s performance. Internal management reviews also identified other staff-related problems. For example, some internal reviews stated that EPA did not have enough staff to adequately manage the number of grants it awards. Furthermore, other reviews noted that uneven distribution of workload among staff resulted in poor grants management. EPA has also not adequately managed its resources for supporting grant staff. Some EPA internal management reviews noted a lack of resource commitment—time and money—to conduct grant management activities and develop staff. This lack of resources has hampered staff in performing their duties, according to these reviews. For example, some of these reviews noted that grantee oversight, particularly the on-site reviews, was limited by the scarcity of such resources as travel funds. Finally, staff did not always have the information they needed to effectively manage grants. 
According to several EPA internal management reviews, staff lacked accessible or usable reference materials, such as policy and guidance documents, and other information resources, such as reports of grantee expenditures. Additionally, we and others have reported that EPA does not use information from performance evaluations or information systems to better manage its grants. For example, one EPA region did not analyze the results of its own internal surveys, which were designed to assess the effectiveness of its internal grants management operations.

In recent years, EPA has taken a series of actions to address two of its key problem areas: grantee oversight and resource management. It has issued several oversight policies, conducted training, and developed a new data system for grants management. However, EPA's corrective actions have not been consistently successful because of weaknesses in their implementation and insufficient management emphasis.

Between 1998 and 2002, EPA issued three policies to improve its oversight of its grant recipients. These policies have tried to improve oversight by establishing, expanding, and refining the activities of EPA staff involved in managing grants. EPA took additional actions to reduce the backlog of grants needing closeout. EPA's first policy, issued in May 1998, required grants management office staff to monitor the financial progress and administrative compliance of grant recipients' activities. The policy also required the staff to conduct site visits or desk reviews to review the adequacy of some grantees' administrative and financial systems for managing their grants. Furthermore, the grants management offices had to submit biennial monitoring plans, which included their proposed monitoring activities. Finally, the policy included suggested criteria for selecting grantees to be reviewed and guidelines for how to conduct the oversight activities.
EPA’s second policy, issued in April 1999, added oversight responsibilities for program staff in headquarters and the regions. The policy required headquarters and regional program offices to submit annual plans outlining their proposed monitoring activities. The policy also suggested activities to be included in these plans, such as monitoring grantees’ progress of work, documenting their efforts, and closing out grants in a timely manner. EPA’s third policy, issued in February 2002, further refined its oversight requirements by having grant management and program offices conduct in-depth monitoring on at least 5 to 10 percent of their grant recipients. The grant management offices had to assess grantees’ financial and administrative capacity, while the program offices had to assess the grantees’ activities in five key areas, such as progress of work and financial expenditures. Furthermore, the grant management offices, as well as regional and headquarters program offices, had to report quarterly on their in-depth monitoring activities. Additionally, the policy committed the Office of Grants and Debarment to the development of a database, which, according to an EPA official, the grants management offices would use to store the results of their in-depth monitoring activities. Finally, the policy included suggested guidance for how to conduct program office reviews. One of the final steps in monitoring is “closing out” grants to ensure that the project was completed and that any remaining funds are recovered. In 1996, EPA had a backlog of over 19,000 grants needing closeout. To reduce such backlogs and prevent future backlogs, EPA, among other things, developed specific procedures for closing out nonconstruction grants and identified a strategy for closing construction grants that included assessing impediments to closing out grants. In terms of resource management, EPA provided grants management training for its staff and some grant recipients. 
It developed and periodically updated a training manual for project officers. EPA also required project officers to attend a 3-day training course based on this manual and periodically take a refresher course. EPA developed a database to certify that project officers had completed this training. According to an EPA official, grants specialists have also received some training. Finally, EPA conducted a 1-day grants management training course for nonprofit grantees and pilot-tested a standard training course for grants specialists.

EPA has also taken steps to improve another critical resource—its primary data system for managing grants. In 1997, it began developing the Integrated Grants Management System (IGMS), which, according to an EPA official, will allow electronic management throughout the life of the grant. EPA believes IGMS could help resolve some of the long-standing problems in grants management by implementing controls to prevent certain documents from being submitted without required elements and providing electronic reminders of when certain activities or documents are due. Additionally, EPA designed the system to reduce the potential for data entry errors. According to an EPA official, IGMS is being developed through modules. In 2001, EPA began implementing the system to control the application and award phases of a grant. Using IGMS, EPA will be able to review the grantee's application, prepare and review EPA's documents, and approve the award electronically. In April 2003, EPA will begin using the post-award module of IGMS. This module will allow project officers to enter project milestones into the system, communicate with other staff involved in overseeing grants, receive electronic reports from grantees, and initiate closeout activities electronically. EPA expects that all staff will be using IGMS to electronically manage grants by September 2004.
EPA continues to face grant management problems, despite the corrective actions it has taken to date. In 2002, EPA’s Inspector General reported that EPA’s corrective actions were not effectively implemented and, specifically for monitoring, found, among other things, inconsistent performance of monitoring responsibilities, inadequate preparation of monitoring plans, incomplete submission of quarterly compliance reports, and considerable differences among the programs and the regions in the number of on-site evaluations they conducted. As part of our ongoing review, we are assessing EPA’s corrective actions for monitoring and have found mixed results. On the one hand, we have seen some problems. For example, we identified two weaknesses in the database EPA created to store the results of its in-depth reviews. First, only grant management offices, not program offices, had to enter the results of their reviews into this database, and according to an EPA official familiar with the database, not all of them did so. Second, according to the same official, EPA did not design the database so that it could analyze the results of the in-depth reviews to make management improvements. On the other hand, we found that EPA’s corrective actions increased the oversight of its grant recipients. In 2002, EPA reported that it had conducted 578 on-site reviews and 629 desk reviews, an increase both in on-site reviews overall and in the number of reviews conducted by some offices. In addition, EPA’s 2002 internal reviews indicated some improvements in oversight compared with the prior year’s performance. On another positive note, EPA has made improvements in closing out grants. In 1998, we reported that in some instances EPA’s corrective actions to close out grants were not initially successful. 
For example, although EPA made some progress, its strategies to reduce the closeout backlog were not always consistently implemented and failed to close out a considerable number of grants. However, EPA had successfully resolved its backlog problem by 2002. As a result, EPA has been able to eliminate this backlog as a material weakness and receive better assurance that grant commitments have been met. With respect to resource management, EPA implemented corrective actions to improve training, but these actions have not been fully successful. The EPA Inspector General reported that the agency did not have adequate internal controls in place to ensure that project officers were in compliance with the training requirements. Specifically, one region did not track the names and dates of project officers who received training, the agencywide database on training for project officers was inaccurate and had limited functionality, and the online refresher course did not have the controls necessary to prevent staff from obtaining false certifications. In addition to the weaknesses in the corrective actions for specific problem areas, the EPA Inspector General found two other problems. First, the agency’s internal grant management reviews did not consistently examine issues to identify and address systemic weaknesses, did not adequately identify the causes of specific weaknesses or how the proposed corrective actions would remedy the identified weaknesses, and were not sufficiently comprehensive. Furthermore, the Grants Administration Division did not assess the results of these reviews to make management improvements. Second, EPA’s senior resource officials did not ensure compliance with EPA policies or sufficiently emphasize grantee oversight. The Inspector General concluded that this lack of emphasis contributed to the identified implementation weaknesses. 
In response to this assertion, senior resource officials stated that monitoring is affected by the limited availability of resources and that they lack control over how regional program offices set priorities. The Inspector General pointed out that these officials are responsible for providing adequate resources; however, none of the officials interviewed had conducted assessments to determine whether they had adequate resources. EPA has recently issued new policies to address two of the key problems we have identified, competition and oversight, and has developed a 5-year plan to address its long-standing grants management problems. In September 2002, EPA issued a policy to promote competition in awarding grants by requiring that certain grants be competed. These grants may be awarded noncompetitively only if certain criteria are met, in which case a detailed justification must be provided. The new policy also created a senior-level advocate for grants competition to oversee the implementation of the policy. In December 2002, EPA also issued a new oversight policy that increases the amount of in-depth monitoring (desk reviews and on-site reviews) that EPA conducts of grantees, mandates that all EPA units enter compliance activities into a database, and requires transaction testing for unallowable expenditures, such as lobbying, during on-site evaluations. In April 2003, EPA issued a 5-year Grants Management Plan. EPA’s Assistant Administrator for Administration and Resources Management has called implementation of this plan the most critical part of EPA’s grants management oversight efforts. The grants management plan has five goals and accompanying objectives: Promote competition in the award of grants by identifying funding priorities, encouraging a large and diverse group of applicants, promoting the importance of competition within the agency, and providing adequate support for the grant competition advocate. 
Strengthen EPA’s oversight of grants by improving internal reviews of EPA offices, improving and expanding reviews of EPA grant recipients, developing approaches to prevent or limit grants management weaknesses, establishing clear lines of accountability for grants oversight, and providing high-level coordination, planning, and priority setting. Support identifying and achieving environmental outcomes by including expected environmental outcomes and performance measures in grant workplans, and improving the reporting on progress made in achieving environmental outcomes. Enhance the skills of EPA personnel involved in grants management by updating training materials and courses and improving delivery of training to project officers and grants specialists. Leverage technology to improve program performance by, for example, enhancing and expanding information systems that support grants management and oversight. Although we have not fully assessed EPA’s new policies and grants management plan, I would like to make a few preliminary observations on these recent actions based on our ongoing work. Specifically, EPA’s plan: Recognizes the need for greater involvement of senior officials in ensuring effective grants management throughout the agency. The plan calls for a senior-level grants management council to provide high-level coordination, planning, and priority-setting for grants management. Appears to be comprehensive in that it addresses the four major management problems—competitive grantee selection, oversight, environmental results, and resources—that we identified in our ongoing work. Previous EPA efforts did not address all these problems, nor did they coordinate corrective actions, as this plan proposes. EPA’s plan ties together recent efforts, such as the new policies and ongoing efforts in staff and resource management, and proposes additional efforts to resolve its major grants management problems. 
Identifies the objectives, milestones, and resources needed to help ensure that the plan’s goals are achieved. Furthermore, EPA is developing an annual companion plan that will outline specific tasks for each goal and objective, identify the person responsible for completing the task, and set an expected completion date. Begins to build accountability into grants management by establishing performance measures for each of the plan’s five goals. Each performance measure establishes a baseline from which to measure progress and target dates for achieving results. For example, as of September 2002, 24 percent of new grants to nonprofit recipients that are subject to the competition policy were competed—EPA’s target is to increase the percentage of these competed grants to 30 percent in 2003, 55 percent in 2004, and 75 percent in 2005. The plan further builds accountability by identifying the need for performance standards for project officers and grants specialists that address grant management responsibilities. Although these actions appear promising, EPA has a long history of undertaking initiatives to improve grants management that have not solved its problems. If the future is to be different from the past, EPA must work aggressively to implement its new policies and its ambitious plan through a sustained, coordinated effort. It will be particularly important for all agency officials involved in managing grants to be committed to and held accountable for achieving the plan’s goals and objectives. Mr. Chairman, this concludes my testimony. I would be happy to answer any questions that you or Members of the Subcommittee may have. For further information, please contact John B. Stephenson at (202) 512- 3841. Individuals making key contributions to this testimony were Andrea Wamstad Brown, Christopher Murray, Paul Schearf, Rebecca Shea, Carol Herrnstadt Shulman, Bruce Skud, and Amy Webbink. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the years, EPA has had persistent problems in managing its grants. Grants constituted one-half of the agency's annual budget, or about $4.2 billion in fiscal year 2002. EPA uses grants to implement its programs to protect human health and the environment and awards them to over 3,300 recipients, including state and local governments, tribes, universities, and nonprofit organizations. EPA's ability to efficiently and effectively accomplish its mission largely depends on how well it manages its grant resources and builds in accountability. Since 1996, GAO and EPA's Office of Inspector General have repeatedly reported on EPA's problems in managing its grants. Because these problems have persisted, in January 2003, GAO cited grants management as a major management challenge for EPA. GAO is currently reviewing EPA's efforts to improve grants management at the request of the Chairman of the House Committee on Transportation and Infrastructure and Representative Anne Northup. For this testimony GAO is reporting on results of its previously issued reports and on the grants problems EPA faces, past actions to address these problems, and recently issued EPA policies and a 5-year grants management plan to address its long-standing grants management problems. EPA faces four key problems in managing its grants: (1) selecting the most qualified grant recipients from a large applicant pool, (2) effectively overseeing grantees throughout the life of the grant, (3) measuring the results of the grantees' work, and (4) effectively managing its grants staff and resources. EPA must resolve these problems in order to improve its management of grants. In recent years, EPA has taken a series of actions to address two of its key problem areas: grantee oversight and resource management. EPA actions include issuing several oversight policies, conducting training, and developing a new data system for grants management. 
However, these past actions were not consistently successful in resolving grants management problems because of weaknesses in implementation and insufficient management emphasis. For example, between 1998 and 2002, EPA issued three policies designed to improve oversight of grantees, but EPA staff did not consistently carry them out. Late in 2002, EPA launched new efforts to address some of its grants management problems. In September 2002, EPA, for the first time, issued a policy to promote competition in awarding grants. In December 2002, it issued a new policy designed to better ensure effective grant oversight. Finally, in April 2003, EPA issued a 5-year grants management plan to address its long-standing grants management problems. GAO is still reviewing these new efforts. Although EPA's recent actions seem promising, the agency has a long history of undertaking initiatives to improve grants management that have not solved its problems. If the future is to be different from the past, EPA must work to aggressively implement its new policies and its ambitious 5-year plan through a sustained, coordinated effort. It will be particularly important for all agency officials involved in managing grants to be committed to and held accountable for achieving the plan's goals and objectives.
The medical mission of DOD is to provide and maintain readiness, medical services, and support to the armed forces during military operations and to provide medical services and support to members of the armed forces, their family members, retirees and their families, and eligible survivors of deceased active and retired military personnel. DOD’s health care program provides medical services such as surgery and inpatient care, pharmacy services, and mental health care to eligible beneficiaries. This care is delivered through its military hospitals and clinics, known as MTFs, or from contracted civilian-provided care. However, if an eligible beneficiary has commercial insurance and care is provided by the MTF, the government is authorized to bill the insurance company under the Third Party Collections Program established in Public Law 99-272, as amended by Public Law 101-510 (10 U.S.C. 1095). Currently, according to DOD records, over 8 million active duty and retired military personnel along with their dependents and survivors are eligible for health care benefits from the military health care system. The three medical facilities in our engagement are also DOD medical teaching facilities. Eisenhower trains residents in both surgical and primary care specialties with emphasis on research and state-of-the-art specialty care. Portsmouth is the oldest hospital in the U.S. Navy, having provided continuous care since July 1830. It has a medical education program offering internships and residency training programs in medicine, dentistry, psychology, and pastoral care. It is one of three teaching hospitals in the Navy with residency programs in 13 specialty areas. Wilford Hall is the Air Force’s largest medical facility. It focuses on military readiness, provides a worldwide referral center for military personnel and their dependents, and provides trauma and emergency medical care for the San Antonio and south Texas civilian communities. 
It is also the Air Force’s foremost provider of medical education, providing the Air Force with 65 percent of its physician specialists and 85 percent of its dental specialists. Appendix II provides more background information about the military facilities. The following five subsections of this report outline opportunities for the three MTFs covered by this review to improve their financial or operating controls and, in the process, reduce federal costs. Our work and that of DOD auditors has also identified a number of these issues at some of the same facilities and recommended improvements. As discussed in appendix I and in the following sections, our work, while not designed to ascertain the extent of each problem, indicates the existence of systemic problems in each of the five areas we reviewed. Although the MTFs are authorized to bill insurance companies under the Third Party Collections program, millions of dollars go uncollected each year because patient medical records are incomplete and reimbursable care is not consistently identified and billed. Patients were not systematically asked to provide current insurance information, hindering the ability to identify all billable care. Even when patient insurance information was obtained, the staff often failed to send a bill to the third party insurer or sent the bill late. Once bills are successfully processed, collections from third party insurance companies represent 2 to 5 percent of the facilities’ operating costs each year. The MTF Uniform Business Office Manual, DOD 6010.15-M, dated April 1997, prescribes procedures for third party collection activities such as the identification of beneficiaries who have other health insurance. It also states that the staff shall obtain written certification from beneficiaries at the time of each inpatient admission or outpatient visit if a certification is not on file or has not been updated within 12 months. 
However, our observations of patient reception at several clinics at the three medical facilities showed that staffs were not systematically obtaining and updating patient insurance information and rarely asked outpatients about third party insurance coverage. In addition, the required DOD Form 2569 used to document third party insurance coverage was often not completed and maintained for either inpatients or outpatients in hospital files or databases. Having a completed form is important because it (1) documents the existence and type of coverage, (2) is used to update insurance data in the automated medical management information system, and (3) authorizes the medical facility to bill insurance companies on behalf of the beneficiary. Our tests of third party insurance documentation for 1 day during each quarter of fiscal year 2001 showed the following results. At Eisenhower, only 9 of the 60 patients selected, primarily inpatients, had a current, completed DOD Form 2569. After our visit, Eisenhower’s staff began monitoring the admissions process in an effort to improve the completion of DOD Form 2569 by all non-active-duty inpatients and assigned staff members to ask about insurance while patients wait to receive pharmaceuticals. Portsmouth uses an internally developed form to document whether patients have private health insurance. For 40 of the 60 inpatients selected, Portsmouth had insurance information in the patient billing files. Wilford Hall had a completed, current DOD Form 2569 for 41 of the 69 patients selected. Wilford Hall has for some time dedicated personnel on a part-time basis to assist patients in completing the DOD Form 2569 at one of its clinics. Without completed insurance information forms, recording and maintaining accurate, complete, up-to-date, and verifiable insurance information in facilities’ billing systems is not possible. 
We found instances where the patient record in the automated medical information system contained out-of-date insurance coverage information or none at all, making system reports incomplete and inaccurate. Facility officials mostly attributed these problems to staffing constraints and shortages. Consequently, there was little assurance that all reimbursable care was being identified for billing. In a recent report, the Air Force Audit Agency identified the same condition: insurance information for inpatients was not being obtained and entered into the automated medical information system. For over 70 percent of the non-active-duty inpatient population at the 14 MTFs it reviewed, no insurance data were recorded in the system, resulting in lost collections. Air Force auditors sampled the inpatients shown in the system as not having insurance data and determined that those who actually had unrecorded third party coverage had received care valued at $113,330. Projected to the entire population over a 6-year period, Air Force auditors estimated that $14.4 million could have been billed to third party insurers at the 14 Air Force MTFs. Our tests of billings at the three facilities revealed that even when patient insurance information was available, the staff often did not send a bill. As shown in table 1, about one-third of our nonrepresentative selection of 240 instances of treatment that should have been billed to a third party insurer were not billed. Billings were generally better for inpatient admissions, while the billing rates for outpatient visits and pharmacy benefits were much lower. More specifically, our testing of 48 inpatient admissions identified only 3 instances when insurers were not billed. In addition to the 38 outpatient visits not billed, our selection also disclosed patients with third party insurance who used the facilities frequently but whose insurance had never been billed for any care provided during fiscal year 2001. 
While all facilities had pharmacy billing problems, the situation was most serious at Wilford Hall, which reported only billing for about $158,000 in pharmacy charges during fiscal year 2001. After we brought this to the attention of Wilford Hall’s management, it hired a contractor to supplement its billing staff. As a result, by June 30, 2002, Wilford Hall had billed almost $800,000 in pharmacy charges during the first 9 months of fiscal year 2002, of which $650,000 was billed during the third quarter of the year. Lost forms, clinical data coding or input problems, lack of staff to handle high workloads, missed billings due to clerical oversight, and a complicated multistep billing process were explanations provided for not billing for reimbursable care. The Air Force Audit Agency also recently reported that military facilities were not effectively recovering the cost of pharmaceuticals provided to patients with private health insurance. Thirteen facilities were not adequately identifying patients with third party insurance, and even when sufficient data were available, billing was not always done. Air Force auditors projected that increased management emphasis in this area would generate increased billings of about $114 million for the 13 Air Force MTFs over a 6-year period. Wilford Hall was one of the facilities included in the Air Force Audit Agency review. When billing for third party insurance occurred, it was often delayed. DOD standard criteria call for facilities to bill for admissions within 10 business days following completion of the medical record and within 7 business days for outpatient visits. In evaluating the timeliness of billing, we used a more liberal standard of 30 days after treatment for billing admissions and 90 days for outpatients and pharmaceuticals dispensed. Even then, the military facilities still did not bill within those extended time frames in about half the cases, as shown in table 2. 
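The billing-timeliness standards described above can be expressed as a simple check. The sketch below is illustrative only: the function and field names are our own, not part of any DOD billing system, and it applies the more liberal calendar-day standard we used in our testing (30 days after treatment for admissions, 90 days for outpatient visits and pharmaceuticals), rather than DOD's stricter business-day criteria.

```python
from datetime import date

# GAO's liberal calendar-day billing windows used in this review.
# (DOD's own criteria are stricter: 10 business days after completion of
# the medical record for admissions, 7 business days for outpatient visits.)
DEADLINES = {"inpatient": 30, "outpatient": 90, "pharmacy": 90}

def is_billed_late(care_type: str, service_date: date, bill_date: date) -> bool:
    """Return True if the bill went out after the allowed window."""
    return (bill_date - service_date).days > DEADLINES[care_type]
```

For example, an admission treated on January 2, 2001, and billed on March 1, 2001, is 58 days out, exceeding the 30-day window, while the same interval would be within the 90-day outpatient window.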
Promptly invoicing insurers for care provided is a sound business practice and should result in improved cash flow for the government. Personnel attributed delayed billings to staffing shortages, high workloads, and coding delays. Also, officials at all three MTFs cited the current cumbersome billing process, which requires a high degree of manual intervention, as a cause for not billing promptly. Compared to appropriated funds, third party collections represented a relatively small revenue source for the MTFs but could be larger. In fiscal year 2001, Eisenhower collected $4.6 million for current and past years’ billings, about 5 percent of its facility costs, and Portsmouth and Wilford Hall collected about $5.1 million and $4.2 million, respectively, or about 2 percent of their respective facility costs. Collections were derived primarily from admissions and, to a lesser extent, from outpatient care, which includes recoveries for prescription drugs, emergency medical care, and clinical visits. Management at the three facilities did not have the information needed to evaluate the cost of drugs turned in under the pharmaceutical return goods program. Specifically, pharmacy personnel did not perform inventories of non-narcotic expired drugs being returned to the manufacturers for reuse or destruction, which would help management verify the level and types of drugs being turned in and the accuracy of any credits received. The lack of a review of expired drugs hampers the pharmacy personnel’s ability to identify reasons for any unusual trends associated with the drugs turned in and any adjustments needed to current inventory levels. Pharmacy personnel at the Portsmouth and Wilford Hall facilities did not inventory the non-narcotic drugs turned in for pickup by their respective pharmaceutical return goods contractor. 
This contractor collects recalled, expired, or deteriorated drugs for a fee and returns them to their respective manufacturers for possible future credits. The contractor also provides each facility with a detailed report of the items returned and credits received. However, the two military facilities cannot verify the accuracy of credits received without having performed their own inventories of the returned items, since they do not keep perpetual inventories of non-narcotic drugs and did not have records of what they turned in to the contractor. As a result, the hospitals were relying solely on the contractor to identify the actual type and amount of drugs returned to the drugs’ manufacturers. Pharmacy officials at Wilford Hall told us that it was not cost-effective to track non-narcotic expired drugs but did not provide any analysis or documentation to support this assertion. However, we contacted a pharmacy operations official at a large commercial health care company who stated that it was the company’s practice to maintain an inventory of returned drugs by assigning a tracking number to each returned item so the credit received can be reconciled to its related tracking number. Conversely, Eisenhower pharmacy personnel recently started inventorying the turned-in non-narcotic drugs in response to a January 2002 Army Audit Agency report on its pharmaceutical management practices. In that report, Army auditors found that pharmacy personnel had not established a method for tracking the amount of drugs returned to the manufacturers to make sure related credits were received. Further, the hospitals did not use the detailed contractor reports to perform a “returned drug” analysis. Therefore, pharmacy personnel are unable to efficiently monitor drug usage or to determine whether unusual trends are occurring and whether the inventory levels in the pharmacies are appropriate. 
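A “returned drug” analysis of the contractor's detailed reports could be as simple as totaling, per drug, the quantity returned and the net loss (acquisition cost minus any credit received). The sketch below is a hypothetical illustration; the report layout, field names, and figures are assumptions, not the contractor's actual format.

```python
from collections import defaultdict

def returned_drug_summary(report_lines):
    """Aggregate a contractor return report into per-drug totals.

    Each line is (drug_name, quantity, unit_cost, credit_received).
    Returns {drug: (total_qty, net_loss)} so management can spot drugs
    turned in at consistently high levels and adjust stock accordingly.
    """
    totals = defaultdict(lambda: [0, 0.0])
    for drug, qty, unit_cost, credit in report_lines:
        totals[drug][0] += qty
        totals[drug][1] += qty * unit_cost - credit
    return {d: (q, round(loss, 2)) for d, (q, loss) in totals.items()}
```

Totals like these, reviewed periodically, would show which drugs are consistently turned in at high levels and therefore where pharmacy stock levels might be reduced.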
Drugs have defined shelf lives, and there is value in managing inventories to minimize the levels of expired drugs. A periodic evaluation of the expired or deteriorated drugs being turned in throughout the year may reveal certain drugs being turned in at consistently high levels, indicating a need to adjust inventory levels to better align them with usage. If management reviewed actual performance data and took corrective action to optimize inventory levels, the cost of pharmaceutical operations could be reduced. For example, in July 2001, Portsmouth returned 2,000 tablets of Zocor, a cholesterol-lowering drug, for destruction and received no credit. Since this drug costs the pharmacy about $0.50 per tablet, the government lost $1,000 on the purchase of this unused drug. Internal control standards require agencies to establish physical control to secure and safeguard vulnerable assets. Nevertheless, internal controls over personal property were ineffective at Wilford Hall and Portsmouth and only partially effective at Eisenhower because property records did not accurately reflect whether these assets actually existed. We also found completeness errors and a lack of support for the costs and dates of acquisition of these assets. More specifically, our tests of personal property found examples of items on the property records that could not be located and items that were incorrectly recorded or not recorded at all in the property records. In addition, many items in the personal property records had little or no documentation available to support their acquisition values or dates, and the resolution of items discovered missing during physical inventories was significantly delayed. We statistically sampled 100 property items at each facility, attempted to physically locate the items, and compared the facility-assigned bar code and manufacturer’s serial number on each item with those shown in the record. 
Based on the results of tests of the existence of personal property items at each location, we assessed the overall effectiveness of each facility’s property internal controls. To determine effectiveness, we established three categories of error rates: below 5 percent was considered effective, 5 to 10 percent partially effective, and above 10 percent ineffective. Applying these categories, we estimate that at least 11 percent and 23 percent of the property items could not be found or had serial numbers that did not match those recorded on the books at Wilford Hall and Portsmouth, respectively. Since these percentages are greater than 10 percent, we assessed the internal control activities at these two locations as ineffective. At Eisenhower, we estimate, with 95 percent confidence, that at most 9 percent of the property items could not be found or had serial numbers that did not match those recorded on the books. Since this percentage falls between 5 and 10 percent, we assessed the internal control activities at Eisenhower as partially effective. We also estimated the specific existence error rates at each location. Based on our review, we estimate that the percentage of items that facility officials would not be able to find, or would find with serial numbers different from those listed in the property records, would be 31 percent at Portsmouth, 4 percent at Eisenhower, and 17 percent at Wilford Hall. Almost all of the personal property items that could not be located were lower priced (under $5,000) or pilferable items that had been recorded as accountable assets. Examples of these items included a personal digital assistant (i.e., a Palm Pilot™); a cellular telephone; computer monitors; color printers; a handheld radio; and various pieces of medical equipment such as a stretcher, electric beds, and intravenous pumps. 
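The three-tier effectiveness rule described above reduces to a simple threshold classification. The sketch below is our own illustrative restatement of those thresholds; the function name is not GAO or DOD terminology.

```python
def assess_controls(error_rate_pct: float) -> str:
    """Map a sampled property error rate to the review's assessment category:
    below 5 percent effective, 5 to 10 percent partially effective,
    above 10 percent ineffective."""
    if error_rate_pct < 5:
        return "effective"
    if error_rate_pct <= 10:
        return "partially effective"
    return "ineffective"
```

Applied to the lower-bound estimates of at least 11 percent and 23 percent, Wilford Hall and Portsmouth fall in the ineffective category, while Eisenhower's upper bound of at most 9 percent yields partially effective.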
Officials stated that many of the pieces of medical equipment are portable and may move from one location to another with patients. However, for the office equipment items, no explanation was provided as to where they could be or what had happened to them. Property record errors were not limited to low dollar value items. For example, Wilford Hall officials told us that a $1 million magnetic resonance imaging scanner was returned to the contractor in September 2001. However, the scanner was still on Wilford Hall’s records at the time our sample items were selected in October 2001, and not removed from the MTF’s records until November 2001. In addition to the sample items that could not be located, serial number errors where the facility-assigned bar code matched but the serial number did not were prevalent in property of all dollar values. Appendix III summarizes the results of our personal property existence testing. Tests of property items traced from their physical locations to the property records showed similar types of errors. We found instances where the serial numbers in the property records did not match the serial numbers on the personal property, although the bar codes did match. In addition, other items such as a laptop computer, a Sony monitor, and a sterilizer were not recorded in the property records. Recording these items accurately in the property records is an important step to improving accountability and financial control over these assets and, along with periodic inventory, preventing theft or improper use of government property. In addition to the weaknesses found in the physical controls over personal property assets, the three facilities provided little or no independent documentation to adequately support the cost or acquisition dates of their personal property items. 
Eisenhower and Wilford Hall had no supporting documentation readily available for any of the items in the sample, while Portsmouth’s property management staff mostly provided internally generated purchase orders and requests in support of the estimated cost and acquisition dates of many personal property items. Based on our review, we estimate that Portsmouth would not be able to provide independent documentation for 93 percent of the items in the property records. Internal control standards for the federal government require that all transactions be clearly and completely documented, and that this documentation be readily available for examination. We previously reported that DOD guidance on proper documentation and retention was inadequate. The documentation problems we found suggest that these issues still exist. Taking a periodic physical inventory of personal property and resolving discrepancies in a timely manner are key internal control activities for property accountability. Although all three facilities take periodic physical inventories, Portsmouth and Wilford Hall had long delays in researching personal property items not located during their physical inventories and in finalizing inventory results, weakening personal property accountability. At Portsmouth and Wilford Hall, missing inventory items were not promptly researched as required by the DOD Financial Management Regulation. This regulation requires that an inquiry be initiated immediately after discovery of the loss, damage, or destruction of government property and that a “Financial Liability Investigation of Property Loss” form be completed. At Wilford Hall, research was still ongoing in May 2002 for items missing during the May 2001 annual inventory. Further, neither of these locations had completed their 2001 physical inventories as of May 2002, indicating a lack of management emphasis on the importance of personal property accountability.
These delays make it more difficult to research and investigate the cause of the loss of the personal property items, and lessen the effectiveness of the physical inventory process as a key internal control activity. Purchase card program internal control weaknesses make medical facilities vulnerable to fraudulent and abusive purchases and place the government at financial risk for the purchases. As a result, the risk increases that purchases will be made that are (1) potentially fraudulent, (2) improper, or (3) abusive or questionable. These purchase card weaknesses are similar to those identified in our previous work at two Navy sites in San Diego, California, and at five Army sites (one being Eisenhower); both reviews found a weak control environment and ineffective internal controls that allowed potentially fraudulent, improper, and abusive purchases. The work at Eisenhower was the result of statistical sampling and data mining, while only data mining was used to review purchase card transactions at Portsmouth and Wilford Hall. Because we did not select statistical samples at these two locations, we cannot draw conclusions about the effectiveness of key internal controls there. However, our tests indicated the same types of control breakdowns seen in our other work, indicating that these facilities could have similar problems. A potentially fraudulent purchase by a cardholder is one that is unauthorized and intended for personal use. Potentially fraudulent purchases can also result from compromised accounts, in which a purchase card or account number is stolen and used by someone other than the cardholder to make a potentially fraudulent purchase. At Eisenhower, an Army investigation found that a military cardholder defrauded the government of $30,000 with purchases of a computer, purses, rings, and clothing for personal use and as a result was sentenced to 18 months in prison.
The cardholder took advantage of a situation wherein the cardholder’s approving official was on temporary duty for several months. The cardholder believed that the alternate approving official would certify the statement for payment without reviewing the transactions or their documentation. These fraudulent transactions were not discovered until the resource manager who monitored the unit’s budget noticed a large increase in spending by the cardholder. The cardholder had destroyed all documentation for the 3-month period during which these transactions took place. These fraudulent transactions might not have occurred if the cardholder had known that the approving official would review the transactions. At a minimum, prompt approving official review would have detected the fraudulent transactions. Although our data mining tests do not allow us to determine the extent of improper purchases at the three locations, we did find instances of two types of improper purchases: split purchases and purchases from nonmandatory sources. Split purchases occur when a cardholder divides a single purchase into more than one transaction to avoid the requirement to obtain competitive bids for purchases over the $2,500 micropurchase threshold or to avoid other established credit limits, a practice prohibited by the Federal Acquisition Regulation. Of the 17 sets of transactions reviewed at Wilford Hall that appeared to be split purchases, officials could not provide invoices or other third party documentation for 15 of the sets, so we could not determine whether they were actual split purchases. However, a cardholder and another official acknowledged that two of the selected transactions were split purchases. For example, one transaction set contained 19 orders that were placed to the same vendor on the same day. These 19 orders totaled over $7,200.
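A data mining screen for this pattern is straightforward to express in code. The following is a minimal illustrative sketch, not the actual screen used in this review; the transaction field names are hypothetical:

```python
from collections import defaultdict

MICROPURCHASE_THRESHOLD = 2500.00  # competitive-bid threshold cited in the report


def flag_potential_splits(transactions):
    """Group purchases by (cardholder, vendor, date) and flag groups of
    two or more same-day transactions whose combined total exceeds the
    micropurchase threshold. Field names are illustrative."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    return [
        {"key": key, "orders": len(amts), "total": round(sum(amts), 2)}
        for key, amts in groups.items()
        if len(amts) > 1 and sum(amts) > MICROPURCHASE_THRESHOLD
    ]


# Hypothetical data mirroring the 19 same-day orders totaling over $7,200:
sample = [{"cardholder": "A", "vendor": "V1", "date": "2001-03-05",
           "amount": 380.00} for _ in range(19)]
print(flag_potential_splits(sample))
```

Grouping by cardholder, vendor, and date mirrors the example above: many same-day orders to one vendor whose combined total exceeds the threshold warrant review, though only supporting documentation or interviews can confirm an actual split purchase.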
Officials agreed that this set of transactions was a split purchase because the buyer knew all the requirements, probably knew the total was above the threshold, and still placed the orders at one time. Another type of improper purchase occurs when cardholders do not buy from mandatory sources of supply. Various laws and regulations require the purchase of certain products from designated sources such as Javits-Wagner-O’Day Act (JWOD) vendors. The program created by this act is a mandatory source of supply for all federal entities. The JWOD program generates jobs and training for Americans who are blind or have severe disabilities by requiring federal agencies to purchase supplies and services furnished by nonprofit agencies, such as the National Industries for the Blind and the National Institute for the Severely Handicapped. At Portsmouth and Wilford Hall, items such as day planner refills, other miscellaneous office supplies, and plastic utensils were bought from a commercial source when they, or substantially similar products, could have been bought from JWOD vendors. Further, Portsmouth and Wilford Hall did not have documentation to show that the cardholders had checked item availability from these vendors before purchasing elsewhere. Each location had examples of either abusive or questionable purchase card transactions. Abusive transactions are those that were authorized, but for which the items purchased were at an excessive cost or for a questionable government need or both. Abuse can also occur when the conduct of a government organization, program, activity, or function falls short of societal expectations of prudent behavior. One example of an abusive transaction was the purchase of a $650 Sony digital camera at Wilford Hall that was justified as needed to “take photos for Christmas party and other events put on for squadron morale boosters,” while the digital camera bought by the pass office to update its badge security system cost only $350.
The purchase of the more expensive model for the reasons given was excessive, and a more modest camera could have been bought. Questionable transactions are those that appear to be improper or abusive but for which there is insufficient documentation to conclude either. Many of the transactions we selected in the data mining lacked supporting documentation, which makes a firm determination of their legitimacy impossible without a thorough investigation. Also, we have found that a lack of documentation can be an indicator of fraud, as in the $30,000 Eisenhower fraud case. Questionable purchases often do not easily fit within generic governmentwide guidelines on purchases that are acceptable for the purchase card program. Because such purchases tend to raise questions about their reasonableness and subject the activity to criticism, they require a higher level of prepurchase review and documentation than other purchases. An example of a questionable transaction involved the purchase of food by a psychiatric clinic at Portsmouth. Hospital officials stated that the planning of meals, the purchase of food at local groceries, and its subsequent preparation is a commonly prescribed therapy for certain patients, and the hospital pays for the food. While this may be true, there was no advance approval of this transaction, and military facility officials provided no other documentation authorizing this activity as legitimate. Because there are limitations on the purchase of food with a government purchase card, it seems reasonable to expect that each of these transactions be closely reviewed, approved, well documented, and justified before the purchase, not after. In addition to fraudulent, improper, and abusive or questionable purchases, the medical facilities lacked documentation of (1) advance approval, (2) independent receiving, and (3) invoices or other means to independently verify both the quantity and price of purchases for the items we reviewed.
Many of the government purchase card transactions we reviewed at these facilities did not have documentation of advance approval. At Eisenhower, we estimated that 60 percent of the items purchased with the government purchase card lacked advance approval. Portsmouth lacked advance approval documentation for 40 of the 50 nonrepresentatively selected transactions we reviewed, but officials claimed that all items purchased and recorded in their Defense Medical Logistics Standard Support (DMLSS) system had been through the approval process. However, once an item is approved and recorded in this system, subsequent reorders of the same item do not need any other approval. In other words, after the initial order, there is no separation of duties between the approving and ordering official. At Wilford Hall, which lacked advance approval documentation for 14 of the 50 nonrepresentatively selected transactions reviewed, several of the transactions were purchases of briefcases for war reserves appearing on project allowance lists. Officials said that as long as the items were on an allowance list, they were authorized to buy them without any other paperwork. Our selected items were on these approved project allowance lists, and no other advance approval documents with supervisor review and signature were available. Neither the automated DMLSS system nor the war reserve approval process prevents cardholders from buying items, such as these briefcases, for possible personal use. Leaving a cardholder solely responsible for a procurement action without some type of documented approval puts the cardholder at risk and makes the government inappropriately vulnerable. Segregating duties so that someone other than the cardholder is involved in the purchase improves the likelihood that both the cardholder and the government are protected from fraud, waste, and abuse.
Advance approval is an appropriate internal control activity and can be achieved without requiring the formal contracting procedures that could impede timely purchases and increase costs. For example, blanket approval for routine purchases within set dollar limits involves minimal cost, but provides reasonable control. For nonroutine purchases involving significant expenditures, advance approval, even through informal processes, appears to be an important internal control activity. The wide range of items lacking documentation of independent receiving could be the result of the type of documentation maintained at the facilities. Independent receiving by someone other than the cardholder is a basic internal control activity that provides additional assurance that purchased items are not acquired for personal use and that the purchased items come into the possession of the government. We estimated that 71 percent of the transactions at Eisenhower lacked documentation of independent receiving. Of the 50 nonrepresentatively selected transactions reviewed at each of the other two locations, 12 from Wilford Hall and 2 from Portsmouth lacked documentation of independent receipt. Portsmouth’s medical logistics system, which was different from those in place at Eisenhower and Wilford Hall, allows the person receiving the item to document the receipt directly into the system. This process makes the receipt documentation more readily available than paper files since it tracks the name and date of receipt. For 48 of the 50 items we reviewed, system records showed a different person ordering and receiving the goods. However, we did not test the system’s access controls over the segregation of the ordering and receiving functions. Having receipt documentation recorded directly in the system is efficient and acceptable, but only if the system controls are adequate. 
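Where receipt is recorded directly in a system, as at Portsmouth, the ordering/receiving segregation can be checked mechanically. A sketch, assuming hypothetical record fields rather than the actual schema of DMLSS or Portsmouth’s logistics system:

```python
def flag_missing_segregation(records):
    """Flag records where the same person both ordered and received an
    item, or where no independent receipt was recorded at all.
    Field names are illustrative, not those of any DOD system."""
    flagged = []
    for r in records:
        receiver = r.get("received_by")
        if receiver is None or receiver == r["ordered_by"]:
            flagged.append(r["item_id"])
    return flagged


records = [
    {"item_id": 1, "ordered_by": "jones", "received_by": "smith"},
    {"item_id": 2, "ordered_by": "jones", "received_by": "jones"},
    {"item_id": 3, "ordered_by": "lee", "received_by": None},
]
print(flag_missing_segregation(records))  # flags item 2 (self-receipt) and item 3 (no receipt)
```

A screen of this kind surfaces self-receipt and missing receipt, but it is only meaningful if, as noted above, the system’s access controls prevent one user from recording receipt under another’s name.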
A large number of the transactions reviewed did not have independent documentation such as an invoice available to verify both quantity and price information. We estimated that 26 percent of the transactions at Eisenhower lacked an invoice or other independent documentation. Of the 50 nonrepresentatively selected items reviewed at the other two locations, 20 and 18 lacked invoices or other independent documentation at Wilford Hall and Portsmouth, respectively. Internal control standards require that transactions be clearly documented and that support be readily available for examination. A valid invoice to show what was purchased and the price paid is a basic transaction document, and a missing invoice is an indicator of potential fraud, as was demonstrated in the $30,000 fraud case at Eisenhower. Without this independent documentation, supervisors and management cannot be certain that the items purchased are appropriate and that government funds were properly used. For example, some transactions had no documentation supporting the description, quantity, or price for items or services bought from vendors such as a jewelry store, an automobile audio accessory store, a dry cleaner, a camera store, and a carpet retailer. While officials told us that these transactions were for valid government reasons, they could not provide any documentation supporting the purchases. Without a vendor invoice, a thorough review is necessary to determine whether the transaction was proper or potentially fraudulent, improper, or abusive. Also, independent receiving cannot confirm that all purchased items were received if no invoice or other documentation supporting the quantity is available. Collectively, the weaknesses found and their effects as demonstrated by our work indicate the existence of financial management problems at the three MTFs. 
Because selected internal controls at the facilities have not been effectively implemented, management at these facilities does not have reasonable assurance that only eligible patients are receiving care, the government has been properly reimbursed for care from third party insurers, personal property and expired drugs can be accounted for, and purchase cards are used properly. The same issues and recommendations identified in our other work related to purchase card usage are also applicable to the MTFs. As a result of these control weaknesses, millions of dollars that could be used for patient care may be unnecessarily spent for ineligible patients, unused pharmaceuticals, or unneeded purchases. Because having sound financial and management practices affects the ability of program directors and managers to make better decisions and achieve results, we recommend that the Under Secretary of Defense for Personnel and Readiness and the military services’ Surgeons General, in conjunction with the senior management at the three MTFs, as appropriate:

- develop a strategy to make short-term and long-term improvements in data quality in the automated eligibility, cost, and clinical health care systems;

- develop and utilize analytical tools for facilitating the identification of erroneous records in the eligibility, cost, and clinical health care systems, such as comparisons between SSA records and facility automated medical management records;

- reiterate through correspondence with MTF personnel the importance of (1) completing or updating the DOD Form 2569, as required, to document whether each health care beneficiary has third party insurance; (2) entering patient insurance coverage information into the automated medical information system so that more complete and accurate reports can be generated to better identify reimbursable care for billing; (3) billing third party insurance carriers promptly for admissions, outpatient visits, and pharmacy care, including items identified in our testing as well as other care not billed; and (4) collecting third party reimbursements due to the government to the fullest extent allowed as required by DOD policy;

- require MTFs to maintain an itemized list of the names and quantities of drugs to be returned to the pharmaceutical return goods contractor for credit or disposal, and require MTFs to routinely monitor and evaluate, based on the management reports provided by the contractor and the pharmaceutical prime vendor, the credits received from the returns of drugs and the net losses of those drugs, to use as an indicator in determining whether on-hand inventory levels are appropriate;

- require property office management to maintain, and have readily available, independent documentation supporting the cost and date of acquisition for all accountable personal property;

- require property office management to promptly report the loss of any personal property items detected during their periodic physical inventories, and to adjust the property records accordingly; and

- review and modify existing processes and requirements to improve documentation of purchase card transaction approvals, independent receipt of the items, and invoices to better verify costs and quantities.

DOD provided written comments on a draft of this report. DOD concurred with our recommendations and identified corrective actions planned and underway related to eligibility for health care and collections from third party insurers. In addition, both the Deputy Secretary of Defense and the Executive Director of the TRICARE Management Activity have recently issued guidance on the use of government purchase cards. DOD’s comments are reprinted in appendix IV. DOD also provided additional comments, which we have incorporated as appropriate or responded to at the end of appendix IV. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from the date of this letter.
At that time, we will send copies of this report to the Chairmen of the Subcommittee on National Security, Veterans Affairs and International Relations and the Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform, and other congressional committees. We are also sending copies to the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Surgeon General of the Air Force; the Surgeon General of the Army; the Surgeon General of the Navy; the Secretary of the Air Force; the Secretary of the Army; the Secretary of the Navy; and the Commanders of Eisenhower, Portsmouth, and Wilford Hall. Copies will be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Linda Garrison at (404) 679-1902 or by e-mail at [email protected] if you or your staffs have any questions about this report. An additional contact and staff acknowledgments are listed in appendix V. We used a case study approach to review key internal control activities in five areas (eligibility, third party billings and collections, pharmacy expired drugs, personal property management, and government purchase card usage) at three MTFs. Our work was performed at three large, diverse medical facilities: Eisenhower Army Medical Center, Augusta, Georgia (Eisenhower); Naval Medical Center Portsmouth, Portsmouth, Virginia (Portsmouth); and Wilford Hall Air Force Medical Center, San Antonio, Texas (Wilford Hall). We also performed work at the TRICARE Management Activity in Falls Church, Virginia. This was not a financial audit; as a result, we do not render an opinion on the internal controls or any financial data or financial statements. Also, the results of our review cannot be projected beyond the three case study MTFs.
Since we were not testing the internal controls as a part of a financial audit, we did not perform tests of the general or application electronic data processing controls. We also did not assess the overall control environment or perform a comprehensive risk assessment nor did we independently verify DOD’s financial information used in this report. To determine whether the key internal control activities were effectively implemented, we reviewed applicable laws and regulations; our Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999); and our Internal Control Standards: Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001). We obtained an overview of the process and gained an understanding of the policies, procedures, techniques, and mechanisms used to help ensure that management’s directives were carried out. We interviewed and observed management and personnel at the three MTFs and the TRICARE Management Activity. We also reviewed relevant audit reports from defense audit agencies and the DOD IG. Further, we performed targeted analyses of fiscal year 2001 transactions and control activities in the five areas. To determine whether control activities used to identify those eligible for care were effective, we observed whether staff members in various clinics and sites throughout the MTFs were asking patients for military identification cards and querying the clinical system for eligibility status, and compared a file of all patients receiving prescriptions in fiscal year 2001 at one facility to an SSA file of all persons who had died in order to identify patients who either had erroneous social security numbers in the clinical system or who might be ineligible for care. The other two facilities were unable to readily provide comparable information. 
To determine the effectiveness of the third party billing and collection internal control activities, we (1) tested a nonrepresentative selection of patients from 1 day each quarter during fiscal year 2001 to determine whether the facilities were systematically obtaining and updating patient insurance information, (2) tested a nonrepresentative selection of incidents of patient care that should have been billed, (3) reviewed the timeliness of a selection of third party insurance bills, and (4) analyzed the third party insurance collections. To determine whether control activities over expired and obsolete drugs were effective, we (1) observed the pharmaceutical returned goods contractor pickup of expired drugs, (2) discussed with pharmacy and contractor personnel procedures and requirements for inventorying the expired drugs collected, and (3) obtained contractor-provided inventory lists of expired drugs turned in. To determine the effectiveness of the control activities over personal property management, we performed tests of the existence, completeness, and accuracy of the cost and acquisition date recorded in the personal property records. To test existence, within each medical center we stratified the population of personal property items by the dollar value recorded as the purchase price for the item. We selected a stratified random probability sample of 100 personal property items recorded on the property records at each of the three facilities. With these statistically valid random probability samples, each item in the property records had a nonzero probability of being included, and that probability could be computed for any item. Each sample item was subsequently weighted in the analysis to account statistically for all the property records in the population at that location, including those that were not selected.
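The weighting step described above can be illustrated as follows: each sampled item in a stratum carries weight N/n (stratum population size over stratum sample size), and the overall error-rate estimate is the weighted error count divided by the population size. The stratum sizes and error counts here are hypothetical, not the report’s data:

```python
def weighted_error_rate(strata):
    """Estimate a population error rate from a stratified random sample.
    Each stratum supplies its population size N, its sample size n, and
    the number of sampled items found in error; sampled items are
    weighted by N/n so the sample stands in for the whole population."""
    weighted_errors = sum(s["N"] / s["n"] * s["errors"] for s in strata)
    population = sum(s["N"] for s in strata)
    return weighted_errors / population


strata = [
    {"N": 900, "n": 60, "errors": 9},   # low-value items (hypothetical)
    {"N": 90,  "n": 30, "errors": 3},   # mid-value items (hypothetical)
    {"N": 10,  "n": 10, "errors": 1},   # highest-value items, all sampled
]
print(weighted_error_rate(strata))
```

Note how the highest-value stratum is sampled completely (n equals N), a common design choice so that expensive items such as the $1 million scanner cannot escape review.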
For each property item in the sample, we tested the physical existence of the item and compared the facility-assigned bar code and serial number in the property record to that attached to the property item. An error was recorded if MTF personnel (1) could not locate the item or (2) located the item, but the serial number on the item did not match that in the property record. We also examined the documentation supporting the date and cost of acquisition for each property item in the sample. Because we followed a probability procedure based on random selections of property items, our sample for each facility is only one of a large number of samples that we might have drawn. Since each sample could have produced different estimates, we express our confidence in the precision of our particular samples’ results (that is, the sampling error) as 95 percent two-sided confidence intervals. These are intervals that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true (unknown) values in the study population. We also generated one-sided 95 percent confidence intervals around the overall results at each MTF and used them to assess whether the controls at each MTF over personal property were effective, ineffective, or partially effective. If the upper limit of a one-sided 95 percent confidence interval was 5 percent or less, we considered the controls effective. If the lower limit of a one-sided 95 percent confidence interval was 10 percent or more, we considered the controls ineffective. Otherwise, we considered the controls partially effective. Although we projected the results of our samples to the population of items recorded in the property records at each of the medical centers, the results cannot be projected to the population of all property records at all of the MTFs. 
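The decision rule described above can be sketched directly. For simplicity, this illustration computes the one-sided 95 percent bounds with a normal approximation on an unstratified sample; the report’s estimates used the stratified design described earlier, so actual bounds would differ:

```python
import math

Z_95_ONE_SIDED = 1.645  # one-sided 95 percent normal critical value


def classify_controls(errors, n):
    """Apply the report's decision rule: effective if the one-sided 95%
    upper bound on the error rate is 5 percent or less, ineffective if
    the one-sided 95% lower bound is 10 percent or more, otherwise
    partially effective. Uses a simple normal approximation."""
    p = errors / n
    half_width = Z_95_ONE_SIDED * math.sqrt(p * (1 - p) / n)
    upper, lower = p + half_width, p - half_width
    if upper <= 0.05:
        return "effective"
    if lower >= 0.10:
        return "ineffective"
    return "partially effective"


for errors in (1, 6, 25):
    print(errors, "errors in 100:", classify_controls(errors, 100))
```

With 100 sampled items, 1 error yields an upper bound below 5 percent (effective), 25 errors yield a lower bound above 10 percent (ineffective), and intermediate counts fall into the partially effective band.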
In addition to our review of the existence of items recorded in the property records and the accuracy of the facility-assigned bar codes and serial numbers of the items, we also tested the completeness of the property records by selecting an item located next to each item in our sample that facility personnel were able to find. We then traced the bar code and serial number of the selected item back to the property records. In order to test the accuracy of the cost and acquisition date recorded in the personal property records for the sample items, we obtained and reviewed any supporting documentation available from property management personnel. To test internal control activities in the use of the government purchase card, we used two different approaches. To test the implementation of specific control activities at Eisenhower, 150 transactions were selected in a stratified random probability sample drawn from the population of transactions paid from October 1, 2000, through July 31, 2001. The methodology for the statistical sample is presented in the June 2002 GAO report, Purchase Cards: Control Weaknesses Leave Army Vulnerable to Fraud, Waste, and Abuse (GAO-02-732). The statistical sample allowed for projection of an estimate of the percentage of transactions for which each control activity tested was not performed. We also evaluated the control environment and performed data mining at Eisenhower. For Portsmouth and Wilford Hall, we obtained files of all purchase card transactions made during fiscal year 2001. From these files, we selected a nonrepresentative sample of 50 transactions for each medical facility to test the implementation of specific control activities and to determine whether there were indications of potentially fraudulent, improper, and abusive or questionable transactions. Our data mining included identifying transactions with certain vendors that had a more likely chance of selling items that would be unauthorized or that would be personal items.
Because of the large number of transactions that met these criteria, we did not look at all potential abuses of the purchase card. We requested that each facility provide all documentation supporting the purchases and each of the control activities. If no documentation was provided, or if the documentation provided indicated there were further issues, we obtained additional information through interviews with cardholders and other hospital or purchase card officials. While we identified some potentially fraudulent, improper, and abusive or questionable transactions, our work was not designed to identify, and we cannot determine, the extent of potentially fraudulent, improper, or abusive transactions. The data mining techniques used at Wilford Hall and Portsmouth did not allow for a projection of an estimate of the effectiveness of key internal control activities. Although we projected the results of the purchase card sample to the population of transactions at Eisenhower, the results cannot be projected to the population of all purchase card transactions at all of the MTFs. We briefed DOD officials at the three MTFs and at the TRICARE Management Activity on the details of our review, including our findings and conclusions. We requested comments through the DOD Office of the Inspector General, which distributed the report to the appropriate officials. We received written comments from the Office of the Assistant Secretary of Defense for Health Affairs, which also included copies of comments from the Surgeons General of the Air Force, Army, and Navy. DOD’s response, including additional comments and a technical comment, is reprinted in appendix IV. However, we did not reprint the comments from the three Surgeons General that formed the basis of the DOD response. We performed our work from August 2001 through June 2002 in accordance with U.S. generally accepted government auditing standards.
Table 4 displays overall estimated existence error rates and associated two-sided 95 percent confidence intervals for personal property at each of the three facilities, as well as error rates for personal property with a recorded purchase price of $1,000,000 or more. The following are GAO’s comments on the Department of Defense’s letter dated September 27, 2002. 1. Report number was changed to reflect issuance in fiscal year 2003. 2. The MTF did not maintain a list of non-narcotic drugs awaiting pickup by the contractor in either its former system or the one to which it was transitioning. 3. We have not been provided documentation indicating that the MRI was returned for credit. The point of the finding is that the property records were inaccurate at the time of our review. Staff members making key contributions to this report were Shawkat Ahmed, Mario Artesiano, Rathi Bose, Francine DelVecchio, Alfonso Garcia, Janine Prybyla, and Sidney Schwartz.
Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
The $24 billion Military Health System provides health care to over 8 million eligible beneficiaries. Although Congress has provided sizeable increases in funding for health care over the past few years, the Department of Defense (DOD) has needed supplemental appropriations in 6 of the 8 fiscal years from 1994 through 2001 because its costs were higher than expected. This growing budgetary pressure increases the risk that the organization will not achieve its mission. DOD's military treatment facilities (MTF) represent over half of DOD's health care expenditures. The three MTFs reviewed have not effectively implemented internal control activities in the areas of eligibility, billings and collections, expired drugs, personal property management, and government purchase card usage. The three MTFs also did not identify all patients with third party insurance coverage. In addition, they frequently did not bill those insurers even when they knew that such coverage existed, thereby losing opportunities to collect millions of dollars of reimbursements for services. Ineffective physical and financial controls over personal property assets and indications of control breakdowns in the use of government purchase cards existed at the three facilities.
The F/A-18E/F is currently undergoing development flight testing as part of its engineering and manufacturing development (EMD) phase of the acquisition cycle. The development flight test program is under the responsibility of the Integrated Test Team, which consists of Navy and contractor personnel. The team also receives support from the Navy's Operational Test and Evaluation Force. The F/A-18E/F development flight test program began in February 1996 at the Naval Air Warfare Center, Patuxent River Naval Air Station, Lexington Park, Maryland. The Integrated Test Team is using the seven test aircraft provided by Boeing (formerly McDonnell Douglas) under the EMD contract. The seven aircraft consist of five single-seat E models and two two-seat F models. Boeing has also built three ground test article aircraft to use in conducting tests at its St. Louis, Missouri, facility, and General Electric Corporation, Lynn, Massachusetts, has delivered 21 engines for flight testing. The Navy plans to procure 62 low-rate initial production aircraft in 3 separate procurement lots. In March 1997, the Navy received approval to procure 12 aircraft under the first low-rate initial production lot. The decision to approve the procurement of the next 20 aircraft under the second low-rate initial production lot was scheduled for the end of 1997, and the decision to approve the procurement of the final 30 aircraft under the third low-rate initial production lot is scheduled for late 1998 or early 1999. The primary purpose of the development test program is to identify system deficiencies so they can be corrected and a production-representative aircraft will be ready to begin Operational Test and Evaluation in May 1999. As the flight test program progressed, delays were encountered due to events that normally occur during testing, such as inclement weather conditions and required equipment maintenance. Testing delays were also caused by unanticipated events.
For example, in the summer of 1996, a 3-month machinist strike at the airframe contractor's plant delayed the delivery of the last three EMD aircraft and, in turn, delayed the testing that was to be done on these aircraft. Also, in November 1996, an in-flight engine failure occurred at the Patuxent River test range, which stopped flight testing for 2 months on all but one EMD aircraft—an F model that was being prepared for initial carrier qualification flights. F/A-18E/F program management developed a revised flight test plan to help cope with the delays in the original flight test program. Development of the revised plan began with an Integrated Test Team meeting in September 1996. According to the minutes of that meeting, the team reviewed flight test data and revised the original flight test plan by identifying areas in which testing could be reduced but essential program requirements and goals could still be met. At the time of our review, however, the revised flight test program was about 4 weeks behind schedule. Program documents predict that, although flight testing is behind schedule, decisions to reduce test points will enable the Navy to regain lost time. The documents state that the Navy anticipates completing development testing in November 1998 and beginning operational testing in May 1999, as originally planned. In the meantime, program officials plan to conduct monthly reviews to identify additional areas that can be deleted from the flight test program. The Integrated Test Team and F/A-18E/F program management officials stated that, while the elimination of some data collection requirements might add some risk to the E/F program, the risk is at an acceptable level. The Navy's F/A-18E/F Integrated Test Team established a system for identifying deficiencies during the development program. That system, which is described in appendix II, identified over 400 deficiencies as of December 1997.
The number of deficiencies changes constantly as some are resolved and others are identified. The deficiencies include problems with E/F flying qualities, structural concerns that could have a negative impact on the aircraft’s service life, engine deficiencies that could impact aircraft performance and engine life, and weapon separation problems that cause bomb-to-bomb collisions that require additional testing. The Navy also established a Program Risk Advisory Board. The Board identifies deficiencies from flight or ground test data and assesses the risk that the deficiencies represent to the program. Boeing also identifies deficiencies during flight or ground tests that it believes represent a risk to the program and develops mitigation plans for resolving these risks. As of September 1997, the Board and Boeing had identified 33 and 38 program risks, respectively. A listing of risk items and their assigned level of risk by the Board and Boeing is in appendix III. Although many of the deficiencies have not been resolved, Navy program management continues to project that the F/A-18E/F will be ready for operational testing as scheduled in May 1999 and that the aircraft will meet all operational performance requirements. On the other hand, the Navy’s Program Risk Advisory Board stated in July 1997 that the Navy’s Operational Test and Evaluation Force may find that the E/F is not operationally effective or suitable. According to program officials who are members of the Board, the Board’s assessment reflects the realization that the F/A-18E/F may not be as capable in a number of operational performance areas as the most recently procured C model aircraft, which are equipped with an enhanced performance engine. This issue was addressed in a classified December 1997 Operational Test and Evaluation Force report. That report was requested by the F/A-18E/F program office. 
The report is referred to as a Quick Look Report because it represents the Operational Test and Evaluation Force’s preliminary conclusions based on a limited analysis of data collected during its operational assessment completed in November 1997. The Quick Look Report identified 16 major deficiencies with the E/F, such as air-to-ground sensor performance, air-to-ground weapons, air-to-air sensor performance, and survivability. However, the report concluded that the F/A-18E/F is potentially operationally effective and potentially operationally suitable. The report also confirmed the Program Risk Advisory Board’s concerns regarding certain classified operational performance characteristics of the E/F compared with the operational capabilities of the F/A-18C. In addition, the report indicated that the Operational Test and Evaluation Force’s final report, scheduled to be issued in March 1998, will be based on more detailed analysis of available data and may contain modified conclusions. The following section discusses selected risk items that were identified by program officials and documents as significant concerns, including items discussed in our previous report on the F/A-18E/F program. These items are wing problems, new technology advances, engine challenges, weapons separation problems, and horizontal and vertical tail problems. In March 1996, during flight testing at the Patuxent River Naval Air Station, the F/A-18E/F experienced wing drop. The Navy and Boeing describe the phenomenon as an unacceptable, uncommanded abrupt lateral roll that randomly occurs at the altitude and speed at which air-to-air combat maneuvers are expected to occur. A joint Navy/Boeing team concluded that wing drop was caused by a loss of lift on one of the outer wing panels during maneuvering. According to Navy and Boeing officials, wing drop is the most challenging technical risk to the F/A-18E/F program. 
The deficiency has been classified by Boeing and the Program Risk Advisory Board as a medium technical, schedule, and cost risk to the low-rate initial production phase of the E/F program. Program officials consider wing drop to be a high-risk deficiency. The F/A-18E/F Integrated Test Team concluded that if wing drop is not corrected, it will prevent or severely restrict the performance of the F/A-18E/F during air-to-air combat maneuvering. The F/A-18E/F Program Risk Advisory Board concluded that this deficiency would cause the aircraft to be unacceptable for operational test and evaluation and could result in a schedule slip. Boeing and the Navy have continued their attempts to define the cause of wing drop and identify potential solutions. For example, 25 potential wing modifications have been tested in a wind tunnel. Flight hardware to test two leading-edge wing modifications has been designed and fabricated, and flight testing of the modifications has begun. One of the leading-edge wing modifications provided no improvement. The other provided improvement for turns above 20,000 feet, but improvements are still needed for air-to-air tracking tasks and turns at 15,000 feet and below. In September 1997, a Blue Ribbon panel concluded that an intermediate solution to wing drop would be to fix both the leading and trailing edges of the wing. The Blue Ribbon panel further proposed that a total wing redesign should be considered as the long-term solution to wing drop. In November 1997, the Assistant Secretary of the Navy for Research, Development, and Acquisition advised the Secretary of the Navy that the low-cost, quick fixes have improved aircraft performance but have not completely resolved the wing drop issue.
The Assistant Secretary also stated that the best and worst case scenarios for resolving the problem ranged from a combination of software changes with simple wing modifications, which should not impact production and acquisition plans, to a more complex and lengthy wing redesign, which would impact production and acquisition of the aircraft. In January 1998, program officials told us that the F/A-18E/F will not require a major wing redesign. This assessment is based on their assumption that although wing modifications that are currently under investigation might not entirely eliminate the possible occurrence of wing drop, the modifications would reduce wing drop effects to an acceptable level. Until the Navy identifies and completes flight testing of these wing modifications, their impact on such things as the F/A-18E/F's speed, maneuverability, range, weight, and planned reduced radar cross section (intended to increase the aircraft's survivability) will not be known. Program officials estimated that they will be able to quantify these performance impacts and decide on the best solution to the wing drop problem by March 1998. This plan coincides with the next major funding decision for the F/A-18E/F program, which will be a decision by the Assistant Secretary of the Navy for Research, Development, and Acquisition on whether to approve full funding of the next 20 aircraft under the second of three low-rate initial production decisions. New technology features (the details of which are classified) have been incorporated into the F/A-18E/F to improve its survivability by reducing the aircraft's susceptibility to being detected by enemy radar. The Integrated Test Team has documented new technology anomalies that could negatively affect the new technology features to be incorporated into the aircraft. In September 1997, Boeing and the Navy's Program Risk Advisory Board listed new technology concerns as a high risk to the F/A-18E/F program.
The new technology anomalies include such things as seal failures, damage to special coatings, problems with door latches, wing delaminations, and damage to the aircraft's windscreen. Efforts to correct these problems are ongoing. For example, Boeing has been training its maintenance crews on the proper cleaning and application methods for seals to reduce the failures that have occurred. Longer term production fixes call for redesigning such things as doors and hinges. Further, the test aircraft have received structural repairs to address large delaminations that have occurred on the underside of the aircraft from blown tires. However, these repairs used protruding fasteners that would be unacceptable in operational aircraft because they would negatively impact aircraft signature. Efforts are underway to develop better repair procedures for aircraft to be produced under the second and third low-rate initial production phases of the program. Boeing and the Navy have stated that there is currently no definitive answer as to the impact these changes will have on the reduced radar cross section of the E/F. They believe that the F/A-18E/F will have unacceptable operational test and evaluation results if the fixes do not work. However, if the fixes do work, they need to be included on the aircraft being produced under the first lot of low-rate initial production, because these aircraft will be used for Operational Test and Evaluation. If these fixes are not included, it is likely that operational evaluation will be unacceptable. The Program Risk Advisory Board has identified engine-related issues, including engine warm-up time required before carrier launch, partial engine flameouts during some test flights, visible engine smoke, and engine failures during flight and ground tests. In addition, high-pressure engine turbine blades that had been redesigned to reduce heat to achieve the required engine service life caused an in-flight engine failure.
Consequently, the Navy decided to revert to the original turbine blade design. The Navy generally views the engine anomalies as a medium risk to the program. The engine contractor, on the other hand, is redesigning certain portions of the engine and views the engine as a low-risk component of the program. The engine contractor stated that engine anomalies and component redesign have delayed the EMD schedule by 6 to 8 months and increased cost by 4 percent. However, the contractor believes that it will meet the low-rate initial production schedule by extending the work schedule as required. The Navy, however, has expressed concern over engine problems. For example, the Integrated Test Team stated that (1) stalls that occur prior to engine warm-up will preclude the performance of the deck launch intercept mission, which is defined as 5 minutes from engine start to launch; (2) visible engine smoke would increase the overall visibility of the aircraft, which may result in earlier visual acquisition of the aircraft by adversary pilots; and (3) engine flameouts and stalls could result in the destruction of the engine. The Program Risk Advisory Board stated that these engine deficiencies may make the F/A-18E/F unacceptable for operational evaluation or may jeopardize successful operational evaluation. The F/A-18E/F is designed to have more payload capacity than current F/A-18C/Ds as a result of adding two new wing stations to carry external stores. Early wind tunnel tests conducted in July and August 1993 showed that some stores would hit the side of the aircraft or other stores when released. The Navy and Boeing identified the cause of weapon separation problems as the adverse air flow created by the E/F airframe. Boeing spent about 1 year developing and testing several improvement concepts before selecting a redesigned pylon as the intended fix to the stores separation problem. 
Weapon separation testing with the redesigned pylon began in February 1997 and is expected to continue through November 1998. As of September 1997, the weapon separation problem was classified by Boeing and the Navy Program Risk Advisory Board as a medium technical risk to the EMD phase of the E/F program. In its risk assessment, Boeing stated that if stores separation problems continue to occur during testing, additional changes would be required. In recent flight tests during November and December 1997, bomb-to-bomb collisions occurred when releasing certain weapons. In addition to the weapon separation problems, recent tests have revealed that noise and vibration may cause structural damage to stores being carried under the wing. Currently, this problem is resulting in speed limitations on the aircraft when carrying certain weapons. The F/A-18E/F experienced delaminations, or peeling, in its horizontal tail stabilator. This deficiency was first identified during pre-production ground testing of the EMD aircraft design at the contractor plant in July 1995. The testing showed small areas where the metal substructure and the composite skin did not bond. The contractor used fasteners to ensure that any delaminations of the horizontal stabilator that occurred would not cause any in-flight failures. The contractor also initiated an inspection program every 25 flight hours for the problem area. All seven EMD test aircraft have been equipped with the redesigned horizontal stabilator. According to Boeing, no significant delaminations were occurring; therefore, the inspection interval is being extended from 25 to 50 flight hours. A redesign of the horizontal stabilator for the low-rate initial production aircraft was completed in October 1996 and is currently undergoing testing. In November 1997, delamination occurred during testing of the redesigned stabilator. This resulted in a decision to stop production pending completion of a review of the delamination problem.
In commenting on a draft of this report, the Department of Defense (DOD) stated that additional testing and analysis since November 1997 led to the conclusion that the original EMD stabilator design with fasteners is acceptable. The EMD aircraft are in the process of testing this design and, according to DOD, the stabilators of the low-rate initial production aircraft that will have this design will be tested prior to delivery. DOD also stated that a slightly redesigned stabilator, to be used in aircraft that will be produced subsequent to the first lot of low-rate production aircraft, is undergoing testing that is scheduled to be completed this summer. The F/A-18E/F vertical tail has not been certified because it experienced deficiencies during testing early in the test cycle. This deficiency has been classified by both Boeing and the Program Risk Advisory Board as a medium technical risk to the low-rate initial production phase of the F/A-18E/F program. According to Boeing, all vertical tail design changes will be incorporated in the aircraft to be procured during low-rate initial production. However, the design changes resulted in a vertical tail weight increase of 20 pounds. An additional vertical tail redesign plan is in process. The purpose of the second redesign is to incorporate weight savings of 29 pounds and improve the tail's producibility. The redesign is intended to provide a fully certified vertical tail at the start of the third low-rate initial production lot. Testing of the redesigned vertical tail is scheduled to be completed in late 1999. The Navy has consistently maintained that the F/A-18E/F will be developed and produced within the cost estimates established for the program. However, certain key assumptions on which the F/A-18E/F cost estimates were based have been overcome by events.
These assumptions relate to such things as no unanticipated issues during the development program; the number of aircraft to be bought, in total and on an annual basis; the ratio of the E and F models to the total number of aircraft to be bought; and inflation factors to be used in projecting future years' costs. Adjusting these assumptions to reflect recent events will likely result in higher F/A-18E/F development and unit production costs than the Navy currently estimates. The development cost for the F/A-18E/F program has been capped by the Congress at $4.88 billion (1990 base year dollars). It will be a challenge for the Navy to stay within this cap because, according to Navy documents, that amount is adequate to fund the program only if no problems occur during testing. However, the program has experienced deficiencies; the development flight test program still has about 1 more year, and additional deficiencies may be identified during that time; and EMD funding reserves have nearly all been used. The Navy's Program Executive Officer for tactical aircraft has raised concerns about the ability of the F/A-18E/F development effort to fund the correction of these deficiencies because the program's EMD management reserves have diminished significantly. For example, Boeing's EMD airframe management reserve has decreased from $256 million when the program began to $56.7 million in October 1997. This reserve was used to correct deficiencies as they developed. Of the $56.7 million, $50.9 million has been targeted for known deficiencies that have not yet been corrected, leaving a balance of $5.8 million. In addition, the $28 million EMD engine management reserve at General Electric has been depleted. According to an October 1997 F/A-18E/F program management status report, the lack of engine management reserve is a real concern considering that engine problems need to be corrected.
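The airframe reserve figures cited above imply a nearly exhausted uncommitted balance, which a quick arithmetic check confirms (amounts in millions of dollars, taken from the October 1997 figures stated above):

```python
# Check of the EMD airframe management reserve figures cited above
# (all amounts in millions of dollars, per the October 1997 status).
initial_reserve = 256.0    # reserve when the program began
remaining_reserve = 56.7   # reserve as of October 1997
targeted = 50.9            # targeted for known, uncorrected deficiencies

consumed = initial_reserve - remaining_reserve
uncommitted = remaining_reserve - targeted

print(f"Reserve consumed correcting deficiencies: ${consumed:.1f} million")
print(f"Uncommitted balance remaining:            ${uncommitted:.1f} million")
```

Roughly 78 percent of the original airframe reserve had been consumed, leaving only $5.8 million uncommitted for any new deficiencies.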
According to the report, General Electric has not yet quantified the full cost impact, but future overruns are expected. The development flight test program will not be completed for another year. Program management has stated that the development flight test program is normally the most risky portion of the development effort. Therefore, if changes to correct known deficiencies fail or if additional deficiencies develop, the cost of correcting them will likely cause the $4.88 billion development cost estimate to be exceeded. The Navy also faces a challenge in procuring the F/A-18E/F within the unit cost originally estimated. Its unit procurement cost estimates have been based on what have become unrealistically high quantities of E/F aircraft to be bought, on not factoring in the cost effect of the Navy's decision to buy more of the higher cost F models than the original cost estimates assumed, and on an unrealistically low inflation factor for purchases in later years of the program. Originally, Navy projections of F/A-18E/F unit procurement costs were based on procuring 1,000 aircraft at a peak annual production rate of 72 aircraft. Neither of these assumptions is likely to be realized. The assumption that 1,000 E/F aircraft will be procured is not consistent with the outcome of the Quadrennial Defense Review and current Defense Planning Guidance. In May 1997, the Quadrennial Defense Review recommended that, due to funding constraints, the total procurement of F/A-18E/Fs should be reduced to 548 aircraft. The October 1997 Defense Acquisition Executive Summary Report revised the total F/A-18E/F procurement to 548 aircraft. In terms of the Navy's assumed annual production rate of 72 aircraft, in March 1997 the Under Secretary of Defense for Acquisition indicated the annual E/F production rates would be lower.
He directed that he be given the opportunity to review any plan to acquire production tooling that would support producing more than 48 aircraft per year. The May 1997 Quadrennial Defense Review report also recommended an annual production rate of 48 aircraft. According to information provided to you in July 1997 by the Director of Strategic and Tactical Systems, Office of the Secretary of Defense, the lower total buy will decrease the total procurement cost but increase the E/F’s unit procurement cost from $57 million to $64 million (fiscal year 1997 dollars). When the F/A-18E/F program was approved in 1992, the procurement plan called for the majority (820, or 82 percent) of the F/A-18E/F buy to be single-seat E models. Only 180, or 18 percent, of the 1,000 aircraft buy would be two-seat F models to be used for training purposes. However, the Navy has since decided that the majority of the total buy will now be two-seat F models that will require the crew members in the second seat to perform operational as well as training functions. According to program documents, the Navy is using a buy of 548 aircraft, as recommended in the Quadrennial Defense Review, for planning purposes. This buy will consist of 288 (about 53 percent) F model aircraft and 260 (about 47 percent) E model aircraft. This revised acquisition strategy has significant cost implications because, according to program officials, the two-seat F model will cost about $1.5 million more per aircraft than the single-seat E model. However, this cost differential is expected to increase. According to program documents, the back seat of the F will have to be upgraded to accomplish the operational missions that will now be assigned to that model. The cost of this upgrade, which is expected to be accomplished by 2005, has not been estimated. 
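The unit-cost effect of the smaller buy can be illustrated with a simple fixed-plus-variable cost model. The fixed and variable figures below are hypothetical, chosen only so the model reproduces the reported movement from about $57 million (at 1,000 aircraft) to about $64 million (at 548 aircraft); they are not Navy estimates:

```python
# Hypothetical fixed/variable split consistent with the reported unit-cost
# movement; illustrative only, not program data (amounts in millions).
FIXED_COSTS = 8487.0    # nonrecurring costs spread across the whole buy
VARIABLE_COST = 48.5    # recurring cost per aircraft

def unit_cost(quantity):
    """Average unit procurement cost when fixed costs are spread over the buy."""
    return FIXED_COSTS / quantity + VARIABLE_COST

for qty in (1000, 548):
    total = qty * unit_cost(qty)
    print(f"{qty:4d} aircraft: unit cost ${unit_cost(qty):.0f}M, total ${total / 1000:.1f}B")
```

The model captures the point made above: a smaller total buy lowers total procurement cost but raises the average cost of each aircraft, because the nonrecurring costs are spread over fewer units.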
Navy unit procurement cost estimates for the 15-year F/A-18E/F acquisition program assume an annual inflation rate that is provided by the Office of the Secretary of Defense. The unit procurement cost estimates in the Navy's F/A-18E/F Selected Acquisition Reports from program approval in 1992 through December 1995 were based on a 3-percent annual inflation factor, which measures the general inflation of the U.S. economy rather than the inflation rate for the aerospace industry. The December 1996 Selected Acquisition Report stated a lower projection of E/F unit procurement cost based on a 2.2-percent annual inflation factor. According to program documents, the inflation rates provided by the Office of the Secretary of Defense for budget estimating are lower than escalation indexes for the aerospace industry developed from historical escalation data published by the Bureau of Labor Statistics using the Data Resources Incorporated econometric forecasting model. According to E/F program management, the escalation factors generated by the model will be used as a baseline to negotiate E/F procurement cost. Table 1 compares the DOD annual inflation rates with aerospace industry annual inflation rates. Using the higher aerospace industry inflation rates would substantially increase the F/A-18E/F unit procurement cost estimate. The use of understated inflation rates to estimate unit cost is not unique to the F/A-18E/F program. We have issued reports that discuss the impact of understated inflation rates. The ongoing test program has identified numerous deficiencies with the F/A-18E/F aircraft. The Navy's system for identifying the program risk associated with these deficiencies indicates that several of them are significant. As of March 1998, the Navy had not decided how to resolve some of the deficiencies or predicted the costs involved in resolving them.
A Navy board established to identify risks to the F/A-18E/F program has stated that, until several of the deficiencies have been resolved, the Operational Test and Evaluation portion of the F/A-18E/F program, scheduled to begin in May 1999, might slip or the F/A-18E/F might have an unsuccessful Operational Test and Evaluation. We recognize that the F/A-18E/F development test program has nearly 1 year remaining before it is scheduled to be completed. Therefore, the Navy still has time to try to resolve the deficiencies being identified during the test program. However, additional deficiencies may be identified before the test program is completed. The issue is how much time and money will be required to satisfactorily resolve these deficiencies. This will not be known until the E/F has completed its Operational Test and Evaluation. The deficiencies discussed in this report were identified prior to DOD's March 1997 decision to approve the E/F program to enter low-rate initial production. DOD's approval to advance the program into production indicates its optimism and willingness to accept the risk that these deficiencies, and any additional deficiencies that might arise, will be resolved with little or no cost, schedule, or performance impact on the program. Program documents indicate, however, that correcting some of these deficiencies, such as the wing drop problem, could have significant cost, schedule, and performance impacts on the F/A-18E/F program. We believe that DOD and the Navy need to adopt a more cautious approach as they make funding decisions for the E/F program and prepare for Operational Test and Evaluation of the aircraft.
Therefore, we recommend that the Secretary of Defense direct the Secretary of the Navy to not approve contracting for any additional F/A-18E/F aircraft beyond the 12 aircraft contracted for during the first low-rate production phase of the program until the Navy demonstrates through flight testing that identified aircraft deficiencies have been corrected. This will still provide the Navy with the necessary aircraft to conduct operational testing of the F/A-18E/F. We also recommend that the Navy not begin Operational Test and Evaluation of the F/A-18E/F until corrections of deficiencies are incorporated in the aircraft that will be used for the evaluation. In commenting on a draft of this report, DOD partially concurred with both of our recommendations. Regarding our recommendation that no additional aircraft be contracted for until flight testing has demonstrated that aircraft deficiencies have been corrected, DOD stated that its testing to date has not identified any specific deficiencies that are predicted to prevent achieving an operationally effective level of performance. DOD also stated that it would ensure that the solution to the wing drop problem has been demonstrated before proceeding with full funding of the second low-rate production lot of the aircraft. DOD further stated that the Secretary of Defense has said that these funds would not be released until he is satisfied that the wing drop problem has been corrected. We believe the same level of commitment is needed relative to the other deficiencies that the F/A-18E/F Integrated Test Team has identified, such as the engine and weapon separation problems. 
Regarding our recommendation that Operational Test and Evaluation of the F/A-18E/F not begin until corrections of deficiencies are incorporated in the aircraft to be used for operational evaluation, DOD stated that it agreed that operational evaluation should begin in May 1999 with production representative aircraft that have incorporated needed corrections. The underlying basis of our recommendation is that the Navy needs to demonstrate through flight testing that all the required fixes have been made and incorporated in the test aircraft before beginning Operational Test and Evaluation, even if the schedule needs to slip beyond May 1999. This approach would provide a sound basis for evaluating and quantifying the capabilities of the aircraft that will be provided to the fleet. This evaluation is particularly important because the F/A-18E/F will be the Navy's primary fighter aircraft until the Joint Strike Fighter becomes available. A realistic comparison of the operational capabilities of the E/F with those of the newest F/A-18C/Ds currently in the fleet would provide the basis for a decision on how many E/F aircraft the Navy should ultimately procure as replacements for the C/D aircraft. In addition to its comments on our recommendations, DOD provided specific comments on other portions of our draft report. DOD's comments and our response appear in appendix IV. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to interested congressional committees; the Secretaries of Defense and the Navy; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix V.
To evaluate the status of the test program, we gathered and evaluated all F/A-18E/F flight test deficiency reports prepared as of December 4, 1997, by the F/A-18E/F Integrated Test Team. We interviewed the team’s management, the F/A-18E/F contractors (Boeing Corporation, St. Louis, Missouri, and General Electric Corporation, Lynn, Massachusetts), E/F program management, and the Navy’s Operational Test and Evaluation Force’s test personnel about the implications of documented program deficiencies on program cost, schedule, and performance. To determine which deficiency areas the Navy and the Program Risk Advisory Board determined to be risks to the F/A-18E/F program, we obtained Program Risk Advisory Board risk assessments and interviewed Board officials. We interviewed Navy program management and contractor officials about the implications of these risks on program cost, schedule, and performance. We discussed with the contractors their identified F/A-18E/F engineering and manufacturing development (EMD) and low-rate initial production program risks and the implications on program cost, schedule, and performance. We obtained detailed information on the potential cost, schedule, and performance impact of medium- to high-risk areas. We interviewed Defense Contract Management Command officials at Boeing and General Electric Corporations about their role in on-site monitoring and evaluation of the contractors’ F/A-18E/F development efforts, E/F deficiencies, and development risks facing the contractors. We also obtained documents in which the Command formally reported its findings to Navy headquarters. We interviewed Operational Test and Evaluation Force officials about their role in evaluating the E/F and plans for conducting future operational testing. 
To address F/A-18E/F development and procurement cost issues, we interviewed program and contractor officials responsible for financial matters and received briefings and answers to our questions concerning program cost. We conducted our review from May 1997 to January 1998 in accordance with generally accepted government auditing standards. The Integrated Test Team categorizes deficiencies it identifies during flight testing in watch item, white sheet, or deficiency reports. Watch item reports document deficiencies that require design or software changes that need management attention. White sheet reports document deficiencies for which no fix has been identified, a fix has failed re-evaluation, or a fix impacts significant test events. Deficiency reports are submitted when an identified fix fails a second retest or time is needed to develop a plan of action. Initially, deficiencies are documented in watch item reports. If not corrected, they are sequentially escalated to a white sheet report and finally to a deficiency report. Therefore, the number of deficiencies in each of these categories changes continually as new deficiencies are identified, resolved, and moved among the categories. As of October 1997, the Integrated Test Team had categorized 370 deficiencies in watch item reports, 88 deficiencies in white sheet reports, and 30 deficiencies in deficiency reports. Deficiencies within each of these categories are also classified by their severity. The most severe of these classifications is a deficiency with a high probability of causing aircraft control loss, equipment destruction, or injury to flight test personnel. Table III.1 shows the risks identified by Boeing in the F/A-18E/F EMD and low-rate initial production (LRIP) program and by the Navy’s F/A-18E/F Program Risk Advisory Board (PRAB) at the September 1997 Program Management Review. 
The table categorizes each item as an EMD risk (Boeing), an LRIP risk (Boeing), or a program risk (PRAB); blank cells indicate that Boeing or PRAB did not identify the item as a risk as of September 1997. The risk items listed are:

Contractor quality assurance inspection transition
Environmental control system aft center fuselage overheating
Bleed cell 4 heat exchanger leak detection system
Drift-free pressure transmitter set sensors
Antenna producibility (Boeing)
Antenna producibility (Northrop)
Proposed specification change notice impact (Northrop)
Engine full production qualification schedule
Engine exhaust smoke at LRIP
Engine bay fire extinguisher system
Follow-on test and evaluation program definition
Ground station automated maintenance environment
Multipurpose color display/up-front control display
New technology producibility and performance
Operational test requirements versus expected performance
Operational test requirements versus specification performance
Engine mounts (spares)

The following are GAO’s comments on DOD’s letter dated February 9, 1998. 1. The first operational assessment, during which operational testers flew the E/F aircraft, was conducted by the Operational Test and Evaluation Force in November 1997. The preliminary report on that assessment, referred to as a Quick Look Report, identified 16 major deficiencies that must be corrected prior to the commencement of Operational Test and Evaluation. Further, the statements in our report concerning the possibility that the E/F might not achieve an operationally effective level of performance until identified deficiencies are corrected were taken directly from documents and reports prepared by the F/A-18E/F Integrated Test Team. 2. DOD’s comments stated that our final report should compare the consequences of not providing full funding for the second lot of LRIP aircraft because this would result in a production break and involve considerable costs. 
The Navy has not yet taken delivery of any of the 12 aircraft being built under the first LRIP contract. The first aircraft is scheduled to be delivered in 1999, or about 20 months from the time of the initial low-rate production decision, followed by the production of 1 aircraft per month until all 12 aircraft are completed. This schedule gives the Navy time to reassess its F/A-18E/F production plans. This reassessment should consider the cost and schedule tradeoffs of stretching out the production of the first 12 aircraft compared with proceeding with the current production schedule and accepting the potential for costly modifications and retrofits that may be required to correct current and future deficiencies. 3. We have revised the wording of our recommendation to clarify that we were referring to delaying Operational Test and Evaluation until corrections of deficiencies are incorporated in the aircraft that will be used for the evaluation. 4. DOD’s comments addressed the original test plan. Our report addressed the revised test plan. The point we make in our report is that the revised development test plan is focused on maintaining a development test schedule that will not cause delays in beginning the next phase of testing—Operational Test and Evaluation. Maintaining the test schedule will be a challenge because program documents state that E/F management anticipates that the remainder of the flight test program will experience an increase in testing requirements similar to what DOD’s comments stated has already occurred. This issue was addressed in an August 1997 flight test program review. The result of that review was that further increases in test requirements will have to be offset with corresponding reductions in the baseline test program. 5. We agree that finding discrepancies from predicted performance is the purpose of flight testing. 
However, inherent in the flight test program should be quantifying the effect that the correction of deficiencies will have on the E/F’s ability to meet its Key Performance Parameters. That is the underlying basis for our recommendation that no additional aircraft be produced until flight testing has validated the Navy’s predictions that the deficiencies being identified by the Integrated Test Team are resolved. 6. Our report addresses the need to determine the operational performance of the E/F after corrections of deficiencies have been incorporated in the aircraft. For example, the Blue Ribbon Panel that studied the wing drop problem stated that proposed fixes are expected to increase drag on the airplane, which could degrade the aircraft’s range. This finding is significant because range is one of the E/F’s Key Performance Parameters and one of the key improvements over the existing F/A-18C/D that the Navy cited in justifying the procurement of the E/F. Program management range estimates in January 1998 show that the F/A-18E has a slight range margin compared with F/A-18E/F threshold requirements (400 nautical miles versus 390 nautical miles with 2 external fuel tanks and 450 nautical miles versus 430 nautical miles with 3 external fuel tanks, respectively). The F/A-18F, which is heavier and has less internal fuel capacity than the E model, will have less range than the E model. The final operational performance of the E/F’s range and other Key Performance Parameters will not be known until all deficiencies have been corrected and their impact on the aircraft has been quantified. 7. We recognize that the March 1997 Operational Requirements Document contains the Key Performance Parameters that will be measured when evaluating the operational capabilities of the E/F. However, that document stipulates that the aerodynamic performance of the E/F is required to be as good as that of Lot XII F/A-18C/Ds. These C/D aircraft were built in the late 1980s and early 1990s. 
They are not as operationally effective as the more recently procured C/Ds that have been equipped with enhanced performance engines. 8. We reviewed the Operational Test and Evaluation Force’s Quick Look Report on the November 1997 operational assessment and could not verify DOD’s statement that the assessment found that the slight reduction in acceleration and maneuvering energy of the E/F had no significant tactical impact. Therefore, we discussed DOD’s statement with Operational Test and Evaluation Force officials who conducted the operational assessment. According to those officials, the Quick Look Report did not contain the cited conclusion. The officials cautioned, however, that they did not disagree with DOD’s comment because the operational impact of the E/F’s slight reduction in acceleration and maneuvering energy will depend on the specific mission profile (e.g., altitude and speed) and aircraft configuration (e.g., weapons being carried) that is being flown. In some cases, the C/D will outperform the E/F and vice versa. The officials also cautioned that the Quick Look Report was based on preliminary analysis of limited data and that the Force’s evaluation of E/F operational capabilities might be modified after additional analyses are conducted. 9. We discussed the Cost Analysis Improvement Group’s March 1997 cost estimate with group members who prepared the estimate. These officials told us that the estimate was based on a total E/F buy of 1,000 aircraft and an annual peak production of 72 aircraft. The estimate was not based on the currently planned procurement of 548 aircraft and an annual peak production of 48 aircraft. Additionally, the officials told us that they did not factor in the increased development and procurement costs of upgrading the back seat of the F model to enable it to perform its assigned missions because the cost of the upgrade has not been determined. 
Furthermore, the March 1997 estimate, like the E/F program management estimate, used DOD-directed annual inflation rates and not the higher aerospace industry inflation rates that we discussed in our report. All of these factors understate the E/F cost estimates. 10. DOD’s December 1996 Selected Acquisition Report (the most currently available) shows that operation and support costs for a 12-aircraft E/F squadron will be about $3.2 million greater per year than a similar-sized F/A-18C/D squadron. This estimate represents an increase of over $1 billion when extrapolated over the E/F fleet and a 20- to 30-year service life. Therefore, we disagree with DOD’s comment that lower E/F operation and support costs will lower the E/F cost estimate. 11. In addition to the statements from the wing drop Blue Ribbon Panel that DOD included in its comments, the Panel stated that more flight test points are required in order to optimize the combination of fixes and to confirm the fixes at all points in the flight envelope. The Panel stated that this flight test approach was necessary because the underlying flow mechanisms of wing drop are not well understood due to the lack of adequate wind tunnel test techniques and practical computational procedures. In addition, the Panel stated that, although it is optimistic that an acceptable combination of fixes can be found, some of the more promising fixes will increase drag to some extent, may impact the observability characteristics, and may alter the design loads on the wing and flap components. The Panel further stated that these impacts must be quantified, and appropriate tradeoffs must be made to determine the optimum configuration and to assess the performance impacts. The Panel’s statements are consistent with the recommendations in our report. 12. 
DOD’s comment that the E/F program is committed to implementing all required fixes on the aircraft prior to Operational Test and Evaluation is based on DOD’s confidence that predictive tools will help resolve any radar cross section issues that might arise as a result of incorporating solutions to deficiencies. Our position is that solutions will not be known until they are assessed during flight testing rather than through simulation and modeling. Our position was substantiated by the Fiscal Year 1997 Annual Report of the Director, Operational Test and Evaluation, dated February 1998. The report stated that a challenge to the operational test program will be to design a strategy that will be able to determine if the F/A-18E/F will be more survivable than the F/A-18C/D, which is a key requirement of the E/F program. According to the report, existing models have many limitations in the ability to make this determination, and efforts to improve these predictive tools will not likely be mature in time to support the E/F program. 13. The engine fixes discussed in DOD’s comments have not yet been demonstrated and validated during flight testing, and DOD’s statement that the visible engine exhaust issue has been resolved for some time is not supported by program documents. In December 1997, the PRAB listed engine smoke as a medium-risk item that, if not corrected, will make the aircraft unacceptable for or jeopardize successful Operational Test and Evaluation. In addition, a December 1997 F/A-18E/F Propulsion and Power Program status report raised a number of recent engine concerns. The report stated that the major concern is keeping the engine development on schedule. Engine schedule slips to date could affect delivery of engines for the LRIP aircraft. Also, the engine is experiencing potentially problematic weight growth. 
The engine has reached its specification weight, and redesign changes to address a blade containment failure will cause the engine to exceed its specification weight. The program office has initiated a weight reduction study to identify ways to reduce engine weight by more than 56 pounds. In addition, the status report raised concerns about the engine’s inability to accept the growth necessary to accommodate the electronically scanned array radar that is a pre-planned product improvement for the E/F. According to the status report, a conscious decision was made to not design the engine for additional growth capability to avoid a major redesign of the back end of the aircraft to relocate the vertical tail. Taken in combination, these factors portray a less optimistic engine situation than indicated in DOD’s comments. 14. We discussed DOD’s statements with officials in the F/A-18E/F program office. The officials told us that the modification of the bomb release interval has not yet been flight tested. Also, weapon separation test data show that only about 21 percent of the testing has been done. It will not be known whether the weapon separation problem has been corrected until the testing has been completed. 15. We have revised our report to incorporate this information. 16. DOD’s comments discussed component testing. However, the vertical tail cannot be certified until the completion of tests of the tail attached to the aircraft. These tests are not scheduled to be completed until late 1999. 17. In a January 1998 program status report, program funding was listed as one of the major challenges facing the E/F program. The report stated that the EMD program is still funded at the “nothing goes wrong” level. Whether the EMD program will be completed within the congressional cost cap is not currently known. Major contributors to this report were Steven F. Kuhta, Jerry W. Clark, William E. Petrick, Jr., Lawrence A. Dandridge, and Lillian I. Slodkowski. 
Pursuant to a congressional request, GAO reviewed the F/A-18E/F development program, focusing on the: (1) status of the E/F development flight test program; (2) deficiencies that have been identified to date and corrective actions planned; and (3) current cost estimate for the program. GAO noted that: (1) the Navy has revised the F/A-18E/F flight test program by decreasing the data collection requirements that were originally planned; (2) program documents state that, although flight testing is behind schedule, program decisions to reduce test points will enable the Navy to regain lost time and complete development testing in November 1998, as originally planned; (3) F/A-18E/F program documents identified numerous deficiencies relative to the aircraft's operational performance; (4) the most challenging technical issue is wing drop; (5) until these issues are resolved through software or hardware changes that have been adequately tested, the cost, schedule, and operational performance impact of resolving these deficiencies cannot be determined; (6) the Navy remains confident that it can correct these deficiencies; (7) in addition, a Navy board that assesses risk areas in the E/F program stated in July 1997, that operational testing may determine that the aircraft is not operationally effective or suitable; (8) a December 1997 preliminary operational assessment report, which is classified and based on limited data and analysis, identified 16 major deficiencies with the E/F aircraft but concluded that the F/A-18E/F is potentially operationally effective and suitable; (9) the Navy has consistently stated that the F/A-18E/F will be developed and produced within the cost estimates established for the program; (10) certain key assumptions on which the cost estimate was made have been overtaken by events; (11) program documents state that the current development effort is funded based on the assumption that problems would not occur during testing; (12) unanticipated 
aircraft deficiencies have occurred, and most of the program's management reserve has been depleted; (13) since the flight test program has about 1 year remaining, it is probable that additional deficiencies will develop; (14) correcting current and potential future deficiencies could result in the development effort exceeding the congressional cost cap; (15) the Navy's F/A-18E/F unit procurement cost estimates are understated; (16) these cost estimates were based on what has become unrealistically high quantities of E/F aircraft that will be bought; and (17) more realistic assumptions indicate that, although the total procurement cost will decrease, the F/A-18E/F unit cost will be more than the Navy currently estimates.
The federal government recognizes Indian tribes as distinct, independent political communities with inherent powers of self-government that include enacting substantive law over internal matters and enforcing that law in their own forums. The United States has a trust responsibility to federally recognized Indian tribes and maintains a government-to-government relationship with those tribes. The Bureau of Indian Affairs (BIA) within DOI provides law enforcement on Indian reservations unless tribes opt to assume responsibility for law enforcement or the state in which the reservation is located has criminal jurisdiction. Federal crimes such as illegally crossing the border or drug smuggling across the border fall under the authority of federal law enforcement whether they occur on Indian reservations or not. However, tribal law enforcement generally has the authority to arrest offenders on Indian reservations and detain them until they can be turned over to the proper authorities, even if the tribe itself lacks criminal jurisdiction. Further, tribal law enforcement officers can be cross-deputized to enforce federal laws. For example, ICE designated a tribal law enforcement officer with customs authority and this officer provides intelligence to ICE and assists with ICE investigations. DHS and its components have established a number of different offices to assist with facilitating tribal coordination on all homeland security issues, including border security. As shown in table 1, these components and offices have a variety of roles in supporting border security efforts on Indian reservations. Fusion centers, while not DHS components or offices, also support border security on Indian reservations by providing information to tribes. DHS and its components also have strategies, as shown in table 2, that help facilitate coordination between DHS and tribes to address border security on Indian reservations. 
The Border Patrol is coordinating and sharing information with tribes in a number of ways to address border security issues. The Border Patrol and six tribes reported using one or more of the following coordination methods: Operation Stonegarden—a DHS grant program intended to enhance coordination among local, tribal, territorial, state, and federal law enforcement agencies in securing United States borders—task forces such as BESTs and IBETs, fusion centers, tribal and public land liaisons, and joint operations and shared facilities to coordinate on border security. Border Patrol and tribal officials report that they share border security–related information through the BEST and IBET forums, and tribal officials reported receiving border security–related information from fusion centers. In addition, according to Border Patrol and tribal officials, the Border Patrol uses tribal or public lands liaisons to coordinate with tribes on border security. Another way Border Patrol and tribal officials said they coordinate on these issues is by tribal law enforcement using Operation Stonegarden funds to support daily coordination with the Border Patrol, including participating in joint patrols with the Border Patrol. In addition to these methods, the Border Patrol and tribes reported using joint operations, such as patrolling together in the same vehicles and using shared facilities, to coordinate on border security. Table 3 contains more detailed information regarding these coordination methods. In addition to these mechanisms, tribal and Border Patrol officials reported using other coordination methods, such as agent-to-tribal police officer interaction, meetings, and e-mails to coordinate on border security. 
For instance, although officials from two of the tribes in our review said their tribes do not use any of the coordination methods described in table 3, both tribes reported using meetings to coordinate as the need arises with the Border Patrol on border security issues. Officials from six of the eight tribes and 4 of the 10 Border Patrol stations we contacted reported that these methods of contact are the most beneficial for coordinating on border security. Officials from one tribe in our review reported that the timely sharing of information via e-mail is the tribe’s most important coordination mechanism with federal agencies. Officials from another tribe explained that leadership from the responsible Border Patrol Sector regularly calls the tribal chairman to discuss border security issues, as well as holding frequent meetings. Officials from four of the eight tribes and 4 of the 10 stations we interviewed also reported that they communicate daily with each other. Officials from some of the tribes and Border Patrol sectors and stations we contacted reported positive aspects of coordinating to address border security issues. Specifically, officials from five of the eight tribes we contacted reported having a good or effective relationship with the Border Patrol. In particular, officials from two of the tribes we reviewed, as well as the corresponding Border Patrol sectors and stations, reported that there are positive aspects of DHS’s overall coordination with the tribes to address border security threats. For example, officials from one of the tribes explained that tribal law enforcement officers have a good working relationship with the Border Patrol and that the Border Patrol is the best federal agency they have worked with in terms of coordinating with the tribe. 
Tribal officials cautioned that while the tribe has a good relationship with the Border Patrol, the majority of tribal community members do not want any Border Patrol presence on the reservation and that the tribal community is very mistrusting of nontribal entities, including law enforcement agencies. Border Patrol sector officials—who staff the sector responsible for border security on one of the two reservations—stated that in addition to productive monthly Border Patrol–tribal leadership meetings and daily interaction between Border Patrol agents and tribal law enforcement, the Border Patrol was the first federal law enforcement agency invited to speak at tribal schools and community meetings. Officials from one of the tribes in our review also reported that DHS and the Border Patrol at both the national and local levels are more sensitive to tribal concerns now than in the past and that the Border Patrol is willing to work with tribal law enforcement in sharing intelligence and keeping the lines of communication open. For instance, tribal officials explained that they have quarterly meetings with the Border Patrol sector during which the Border Patrol shares existing and future border security strategies with the tribe, including the decision of whether to deploy surveillance towers on the reservation. As a result of this interaction, the tribe, according to tribal officials, feels involved in the decision to potentially install towers on the reservation to help monitor and better secure the border. In the past, these types of decisions would have occurred without consulting the tribe, according to tribal officials. 
Border Patrol sector officials—who staff the sector responsible for border security on the Indian reservation—stated that the sector has never enjoyed a better level of communication or mutual understanding with the tribe and much of this can be attributed to the level of coordination with the tribe, particularly the regular meetings held between Border Patrol agents and tribal officials. Although Border Patrol and tribal officials reported positive aspects of coordination, officials from seven of the eight tribes we contacted reported coordination challenges related to border security. According to tribal officials, the Border Patrol does not consistently communicate to the tribes information that would be useful in tribal law enforcement efforts to assist in securing the border. Specifically, officials from five of the eight tribes we reviewed reported coordination challenges related to not receiving notification and information from federal agencies, including the Border Patrol, regarding federal law enforcement activity on their respective reservations. The following examples illustrate these coordination challenges. Officials from one of the tribes in our review reported that they are not given advance notification of Border Patrol law enforcement actions, such as independently patrolling the reservation or the deployment of undercover surveillance teams, occurring on their reservation. These officials reported that they would be in a better position to support federal agencies with border security efforts if they received information regarding planned federal law enforcement actions in a more timely manner. Border Patrol officials from the sector stated that the Border Patrol notifies tribal law enforcement of its own operations, as well as joint operations, which often involve tribal law enforcement, on the reservation. 
However, the Border Patrol does not provide detailed information on its patrol schedule and dates and times of operations, among other enforcement activities, to non-law-enforcement entities. A tribal official from another Indian reservation stated that there are numerous law enforcement agencies with different enforcement objectives working on the reservation and that there have been a few instances in which a tribal law enforcement unit and another federal agency were tracking the same suspects unaware of each other’s presence. These situations, according to tribal officials, were problematic because the agencies were concerned that the overall operation would fail because of the lack of notification by each agency of its respective operations. Although a Border Patrol official with border security responsibilities on this Indian reservation was not aware of the Border Patrol being involved in such incidents, according to tribal officials, when tribal officials and the Border Patrol work together, they can complement each other and act as force multipliers by utilizing their respective resources. We have previously reported on the importance of deconfliction and coordinating to prevent law enforcement entities from unknowingly interrupting each other or duplicating each other’s efforts. Moreover, CBP reports that in some areas along the border, surveillance and response capabilities are limited, so the success of its border security initiatives depends on leveraging intelligence and partnerships with federal, state, local, and tribal governments. Officials from a third tribe in our review reported that although the tribe provides information to federal agencies, these agencies do not consistently provide information, particularly information related to tribal members, to the tribe. For instance, in 2009, Border Patrol and a county sheriff’s deputy responded to an incident involving two individuals who tried to illegally cross the border on tribal lands. 
Although the tribe was conducting operations in the area and could have responded to this incident, tribal officials stated that they did not receive information about the illegal crossing from the Border Patrol. Border Patrol officials from the sector with responsibility for this Indian reservation were not able to confirm Border Patrol involvement in this incident. Further, according to Border Patrol officials, in some cases, coordination challenges with tribes have affected the Border Patrol’s ability to patrol and monitor the border so as to prevent and detect illegal immigration and smuggling. Border Patrol officials from three of the seven Border Patrol sectors and 5 of the 10 stations we contacted reported coordination challenges related to understanding and collaborating with tribes within tribal government rules. Specifically, officials from two sectors that include Indian reservations and corresponding stations reported coordination challenges related to tribal government rules that hindered law enforcement in working together to secure the border. Border Patrol officials from one of the sectors with border security responsibilities on an Indian reservation in our review stated that the reservation faces border threats and is vulnerable, in part, because the Border Patrol cannot patrol as frequently as it would like to on the reservation. The Border Patrol is limited, because of tribal decisions, in the type of border security enforcement, particularly the implementation of visible countermeasures, such as mobile surveillance systems or integrated fixed towers, it can implement on the reservation, according to these Border Patrol officials. 
Further, these Border Patrol officials stated that some tribal members are opposed to the Border Patrol’s presence on the reservation, which, because of the potential for volatile protests by these tribal members, impedes the Border Patrol’s ability to patrol certain areas of the reservation, including a road in a major smuggling area. As a result of these issues, Border Patrol officials reported that the Border Patrol cannot apply all of its capabilities, particularly technology, to address border security threats and vulnerabilities on the reservation. Tribal officials from this Indian reservation stated that although the Border Patrol is not permitted to implement border security technologies on the reservation because of tribal community preferences, the Border Patrol is able to implement technologies and checkpoints just off of the reservation. Border Patrol headquarters officials stated that the implementation of these countermeasures off of instead of on the reservation adjacent to the border hampers the Border Patrol’s ability to secure the border. Border Patrol officials from a sector with an Indian reservation reported that the tribe has negotiated with the Border Patrol via its tribal resolution process and other means to limit the tactical infrastructure the Border Patrol sector uses to support the border security mission on the reservation. For example, the Border Patrol is limited in the deployment of tactical checkpoints and must negotiate regarding the deployment location of vehicle-mounted radar systems. According to the Border Patrol, in one case, a vehicle-mounted radar system had to be moved to a tactically less advantageous position because of tribal concerns over its location on a sacred mountain. 
According to the Border Patrol, the tribal resolution process for gaining approval from the tribe to implement border security countermeasures is difficult to navigate, which significantly affects the Border Patrol’s ability to quickly respond to threats, and reduces the Border Patrol’s presence on the border. The tribal resolution process, according to Border Patrol officials, includes several steps for soliciting feedback and approval for all proposed Border Patrol actions from all of the tribe’s districts and communities. Border Patrol sector and station officials expressed concerns about individual community members, including those possibly involved in cross-border crime, being able to prevent passage of the resolution. These officials also stated that the tribe has changed the approval process without communicating these changes to the Border Patrol, which makes it difficult for the Border Patrol to adapt to the changes for both new projects and projects already under consideration by the tribe. Tribal officials stated that the Border Patrol established temporary camps on its own initiative without gaining approval from the tribal real estate office, so the tribal officials had the Border Patrol remove the camps. However, these officials acknowledged that the resolution process is lengthy and can be tedious for Border Patrol officials, particularly since the Border Patrol has deadlines it must meet to receive funding for projects. They also recognized that some of the tribal districts were not familiar with the steps required by the resolution process. As a result, tribal officials have established a tribal committee to ensure the districts and the Border Patrol better understand the approval process. 
Given these coordination challenges, written agreements between the Border Patrol and tribes could provide a mechanism to help resolve coordination issues, such as the tribes’ lack of notification and information from federal agencies regarding law enforcement activity on their reservations, when they emerge. We have previously reported on practices that can enhance and sustain effective collaboration, such as establishing common standards, policies, or procedures to use in collaborative efforts and the development of written agreements to document collaboration. We reported that as agencies bring diverse cultures to the collaborative effort, it is important to address these differences to enable a cohesive working relationship and to create the mutual trust required to enhance and sustain the collaborative effort. Regarding the use of written agreements to document collaborative efforts, we have reported on the utility and benefits of written government-to-government agreements between U.S. government agencies and foreign governments or other sovereign entities to improve cooperation. These agreements, in part, provide a legal framework for improving partnerships, facilitate information exchange, define tasks to be accomplished by each entity, and establish written assurances of each entity’s commitments. A government-to-government agreement could help DHS and tribal governments to come together as partners to establish complementary goals and strategies for achieving shared results in securing the border on tribal lands. Border Patrol headquarters officials reported that they have considered the potential utility and benefits of written government-to-government agreements with individual tribes to address border security challenges. In addition, DHS has entered into memorandums of agreement (MOA) with individual tribes on other security-related issues, which have benefited DHS and the tribes. 
For example, DHS entered into MOAs with individual tribes regarding the implementation of the Enhanced Tribal Card, which is a DHS program that allows all federally recognized tribes to work with CBP to produce a card denoting citizenship and identity that can be accepted for entry at the POEs. A DHS official from CBP’s Land Border Integration Project Management Office responsible for negotiating these MOAs with the tribes reported that the MOAs were designed to protect tribal sovereignty, as well as describe the steps the tribes must take to produce a card. These MOAs, according to this official, are binding and protect both the tribes and CBP from expending resources on developing the card without assurances that the card will meet the requirements of the program. Both Border Patrol and tribal officials reported that a written government-to-government agreement could benefit their border security coordination. Border Patrol sector officials responsible for border security on one of the reservations stated that the establishment of such an agreement explicitly describing the steps required to obtain approval for Border Patrol actions, including the tribe’s resolution process and mechanisms for notifying the Border Patrol when changes are made to the process or approval requirements, could help resolve challenges for the Border Patrol in coordinating with the tribe. Tribal officials also reported that a government-to-government agreement could assist with resolving remaining coordination challenges by supporting overall coordination and ensuring that coordination processes are followed. Officials from another tribe in our review stated that they would also be receptive to an agreement that shows respect for the tribe and its practices, is developed with tribal participation, and involves senior DHS officials with negotiating capabilities. 
Border Patrol sector officials with responsibility for another of the reservations in our review stated that they considered pursuing an agreement with the tribe, but decided instead to actively engage tribal council officials, law enforcement officers, and community members in resolving issues, a course of action that has, according to Border Patrol officials, been effective in gaining the support of tribal leadership and law enforcement. The Border Patrol sector officials noted, though, that if agreements with tribal officials were pursued, senior DHS officials would need to be involved in the negotiation of any government-to-government agreements because tribal leadership officials do not view the Border Patrol, as a law enforcement agency, as the appropriate federal government representative for negotiating these types of agreements. Officials from both of these tribes emphasized the importance of tribal sovereignty and the need for the federal government to interact with tribes on a government-to-government basis. An assessment of the utility of written government-to-government agreements between DHS and individual tribal governments to address and mitigate specific coordination challenges, particularly for tribes facing border security threats, could help DHS build on its tribal partnerships. Further, agreements that are tailored to help resolve specific challenges, such as not receiving notification and information from federal agencies regarding federal law enforcement activity on the tribes’ respective reservations, could bring greater transparency to tribal government rules for the Border Patrol. Utilizing written agreements to help ensure the partners are working together to secure the borders could better position the Border Patrol and tribes to address their coordination challenges. 
DHS Office of Intergovernmental Affairs (IGA) and Tribal Desk officials reported that they have taken various actions to coordinate with tribes on a range of homeland security–related issues, including border security. For instance, DHS components, including the Border Patrol, have tribal liaisons who manage their components’ tribal outreach efforts. The Tribal Desk, which is responsible for coordinating tribal consultation and outreach with the component liaisons, holds monthly teleconferences with these liaisons to discuss tribal issues and programs, according to IGA and Tribal Desk officials. DHS also has a Tribal Consultation Policy that outlines the guiding principles under which DHS engages with the tribal governments. DHS, according to DHS IGA and Tribal Desk officials, disseminated this policy to all federally recognized tribes and presented the policy at national tribal conferences. DHS’s Tribal Desk, according to DHS IGA and Tribal Desk officials, is working with the tribes daily to address tribal issues and improve its tribal partnerships. However, DHS IGA and Tribal Desk officials reported that the Tribal Desk does not have oversight of the components’ tribal outreach efforts, including border security coordination, because its role is that of a coordination mechanism. The Tribal Desk is aware of the components’ outreach to the tribes, but it does not have the authority to track the effectiveness of such outreach to determine if the outreach is occurring and if any changes to outreach efforts are needed. According to DHS IGA and Tribal Desk officials, each component, including CBP, is responsible for conducting its own tribal outreach and is required to report only to the leadership of its respective component, not to the Tribal Desk, on its coordination efforts. As a result, there is no department-wide oversight mechanism for ensuring the effectiveness of components’ border security coordination with the tribes. 
According to Standards for Internal Control in the Federal Government, controls should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations and assesses the quality of performance over time. Such monitoring should be performed continually; ingrained in the agency’s operations; and clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. We have also previously reported that federal agencies can enhance and sustain collaborative efforts by, in part, developing oversight mechanisms—or mechanisms to monitor and evaluate their results—to identify areas for improvement. Oversight mechanisms can assist with reinforcing agency accountability for its collaborative efforts. DHS, in accordance with a 2009 Presidential Memorandum on tribal consultation, developed an Action Plan and corresponding Progress Report in 2010 that described various action items designed to establish regular and meaningful collaboration with tribal officials and to monitor tribal partnerships at the department level to protect the safety and security of all people on tribal lands and throughout the nation. The 2009 memorandum requires all federal agencies to submit to OMB a detailed action plan of the steps the agency will take to ensure meaningful and timely input by tribal officials in the development of regulatory policies that have tribal implications. As DHS was formulating the Action Plan, tribes recommended, among other things, that DHS develop accountability and tracking mechanisms to ensure that the agency is responding to issues that are raised through tribal consultation. 
The Action Plan and its 2010 Progress Report call for the implementation of various action items designed to monitor and oversee DHS’s tribal coordination efforts at the department level, including appointing a Senior Advisor for tribal affairs to provide policy advice and leadership on tribal issues and determining the feasibility and usefulness of establishing an internal leadership advisory council on tribal affairs. According to the Action Plan, this intra-agency council, staffed by DHS IGA and composed of officials from the department and components, would provide ongoing advice to the Secretary of Homeland Security on issues and policies that affect tribes, including border security, as well as bring together DHS leadership from across the department’s divisions and components to ensure consistency on policies affecting tribes. According to DHS officials, while DHS took steps to hire a Senior Advisor, the position was ultimately not sustainable because of staff turnover and a lack of funding for the position. DHS officials further noted that the position of Director of Tribal Affairs within the Intergovernmental Affairs office was established to help fulfill this role. Additionally, DHS officials reported that they did not establish an advisory council because of personnel limitations, among other issues. The implementation of such action items, or another oversight and monitoring mechanism, could better position DHS to assess the effectiveness of partnerships with tribes at the department level. We have identified coordination challenges related to border security since the establishment of the Action Plan by DHS. For example, officials from seven of the eight tribes we contacted reported coordination challenges related to border security, such as the Border Patrol’s lack of consistent communication of border security–related information with the tribes. 
As DHS was developing its Action Plan, it received feedback from tribes regarding the need to establish accountability and tracking mechanisms to ensure that DHS is responding to issues raised by tribes. For example, in summarizing feedback received from tribes, DHS noted in the Action Plan that tribal leaders expressed frustration regarding the expenditure of significant time and resources engaging with a federal agency only to see very little response or consideration of tribal recommendations. However, DHS does not have a mechanism to monitor and provide accountability for coordination efforts, as suggested by the tribes and the Action Plan. Such a mechanism could position DHS to, for example, identify departmental and component coordination successes as well as areas needing improvement, including coordination challenges that have remained since DHS obtained feedback from tribes in developing the plan. An oversight mechanism, such as one or more of those identified in DHS’s Action Plan, could help identify and address these coordination challenges as well as determine which coordination efforts work well. Further, such a mechanism could help DHS enhance its awareness of and accountability for components’ border security coordination efforts with the tribes and better look across the department to determine the progress being made and the improvements needed to more effectively coordinate border security with the tribes. The nature and complexity of Indian reservations on or near the border, along with the vulnerabilities and threats they face, highlight the importance of DHS and tribes working together to enhance border security. The Border Patrol, in particular, is coordinating and sharing information with tribes in a variety of ways to address border security on Indian reservations. However, these coordination efforts could be strengthened. 
Government-to-government agreements with tribes to address specific challenges, such as federal agency notification to tribes of law enforcement actions occurring on the reservation, that have emerged between the Border Patrol and individual tribes could help better position the Border Patrol and the tribes to resolve their coordination challenges and better work together to secure the border. Further, DHS does not have a mechanism to monitor and provide oversight for its tribal coordination efforts—including border security—that would allow the agency to hold components accountable for effective coordination and, as a result, is not well positioned to identify areas of coordination needing improvement. We have reported on the importance of monitoring and oversight for sustaining and enhancing collaboration, and DHS’s Action Plan contains action items designed, in part, to assist with its monitoring and oversight of its tribal partnerships. A monitoring and oversight mechanism could yield additional information and insights on the effectiveness of DHS’s coordination with tribes, as well as help reinforce accountability when coordinating to address border security issues. To enhance DHS-tribal coordination on border security on Indian reservations, including DHS’s monitoring and oversight of these coordination efforts, we recommend that the Secretary of Homeland Security take the following two actions: (1) examine, or direct CBP to examine, as appropriate, the potential benefits of government-to-government written agreements with tribes facing border security threats, and (2) develop and implement a mechanism to monitor DHS’s department-wide border security coordination efforts with tribes. We provided a draft of this report to DHS, DOJ, and DOI for comment. We received written comments from DHS on the draft report, which are summarized below and reproduced in full in appendix I. DHS concurred with both recommendations. 
DOJ and DOI did not provide written comments to include in this report. DOJ provided technical comments via an e-mail received on December 7, 2012, which we incorporated as appropriate. DOI provided oral technical comments on December 7, 2012, which we incorporated as appropriate. Regarding the first recommendation, that DHS examine or direct CBP to examine, as appropriate, the potential benefits of government-to- government written agreements with tribes facing border security threats, DHS concurred. DHS stated that more formalized government-to- government agreements between CBP and tribal nations should be developed for substantive issues. DHS further noted that written agreements, subject to legal review prior to signature, will memorialize both the issues and solutions. DHS stated that the DHS Intergovernmental Affairs office will work with CBP in the coming year to determine how the recommendation can be implemented. We will continue to monitor DHS’s efforts. Regarding the second recommendation, that DHS develop and implement a mechanism to monitor DHS’s department-wide border security coordination with tribes, DHS concurred. DHS agreed that developing an agency-wide program could further enhance the interests of the tribes and the department for border security and many other programs. DHS stated that, in consultation with tribes, it will convene an internal group to discuss the feasibility of establishing a permanent program or an intra-agency oversight committee to address border security and other issues related to interaction and program delivery with tribes. This action, if implemented effectively, should address the intent of the recommendation. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. 
Rebecca Gambler, (202) 512-8777 or [email protected] In addition to the contact named above, Dawn Locke (Assistant Director), David Alexander, Frances Cook, Kevin Copping, Corey Guilmette, Eric Hauswirth, Linda Miller, John Mingus, Robin Nye, Jessica Orr, and Jerry Sandau made key contributions to this report.
Individuals seeking to enter the United States illegally may attempt to avoid screening procedures at ports of entry by crossing the border in areas between these ports, including Indian reservations, many of which have been vulnerable to illicit cross-border threat activity, such as drug smuggling, according to DHS. GAO was asked to review DHS's efforts to coordinate border security activities on Indian reservations. This report examines DHS's efforts to coordinate with tribal governments to address border security threats and vulnerabilities on Indian reservations. GAO interviewed DHS officials at headquarters and conducted interviews with eight tribes, selected based on factors such as proximity to the border, and the corresponding DHS field offices that have a role in border security for these Indian reservations. While GAO cannot generalize its results from these interviews to all Indian reservations and field offices along the border, they provide examples of border security coordination issues. This is a public version of a sensitive report that GAO issued in December 2012. Information that DHS, the Department of Justice (DOJ), and the Department of the Interior (DOI) deemed sensitive has been redacted. The Department of Homeland Security (DHS) is coordinating in a variety of ways with tribes, such as through joint operations, shared facilities, and Operation Stonegarden--a DHS grant program intended to enhance coordination among local, tribal, territorial, state, and federal law enforcement agencies in securing United States borders. However, the Border Patrol and tribes face coordination challenges. Officials from five tribes reported information-sharing challenges with the Border Patrol, such as not receiving notification of federal activity on their lands. Border Patrol officials reported challenges navigating tribal rules and decisions. 
Border Patrol and DHS have existing agreements with some, but not all, tribes to address specific border security issues, such as for the establishment of a law enforcement center on tribal lands. These agreements could serve as models for developing additional agreements between the Border Patrol and other tribes on their specific border security coordination challenges. Written government-to-government agreements could assist Border Patrol and tribal officials with enhancing their coordination, consistent with practices for sustaining effective coordination. DHS established an office to coordinate the components' tribal outreach efforts, which has taken actions such as monthly teleconferences with DHS tribal liaisons to discuss tribal issues and programs, but does not have a mechanism for monitoring and overseeing outreach efforts, consistent with internal control standards. Such monitoring should be performed continually; ingrained in the agency's operations; and clearly documented in directives, policies, or manuals to help ensure operations are carried out as intended. Implementing an oversight mechanism could help enhance DHS's department-wide awareness of and accountability for border security coordination efforts with the tribes while identifying those areas that work well and any needing improvement. GAO recommends that DHS examine the benefits of government-to-government agreements with tribes and develop and implement a mechanism to monitor border security coordination efforts with tribes. DHS concurred with our recommendations.
As of June 30, 2001, Amex was the third-largest U.S. market in terms of the number of companies whose common stock it listed. With the common stock of 704 companies listed, Amex trailed only Nasdaq, which had 4,378 listings, and NYSE, which had 2,814 listings. Overall, about 98 percent of the common stocks listed on U.S. markets were listed on Amex, Nasdaq, or NYSE. The remaining markets had significantly fewer listings. For example, the fourth-largest market in terms of the number of companies listed was the Boston Stock Exchange, with 84 listings, 46 of which were also listed on Nasdaq. In 1998, the National Association of Securities Dealers (NASD), which also owns and operates Nasdaq, purchased Amex. Although Amex retained its independence as an exchange, in July 1999 its equity listing program was moved from New York City to Gaithersburg, Maryland, and integrated with the Nasdaq listing program. In June 2000, NASD completed the first phase of its plan to restructure Nasdaq as a stand-alone stock-based organization. According to Amex officials, as a result of this restructuring, the Amex equity listing department began moving back to New York in November 2000, and the move was completed about 6 months later. Under federal law and consistent with its responsibilities as an SRO, each U.S. market establishes and implements the rules that govern equity listings in its market with the intent of maintaining the quality of the markets and public confidence in them. In general, a company applies to have its stock listed for trading in a specific market, subject to that market’s rules. This process includes submitting an application for review, together with supporting information such as financial statements, a prospectus, a proxy statement, and relevant share distribution information. 
As part of making an initial listing decision, the market’s equity listing department reviews these submissions for compliance with its listing requirements and conducts background checks of company officers and other insiders. The equity listing department will also monitor companies for compliance with the market’s continued listing requirements and, in accordance with the market’s rules, will take action when these requirements are not met. SEC’s oversight of a market’s equity listing requirements includes reviewing the SRO’s proposed rules to ensure that they are consistent with the requirements of the Securities Exchange Act of 1934. These rules, which make up the market’s initial and continued equity listing guidelines or standards, must be approved by SEC and can be changed only with SEC’s approval. SEC also reviews the SRO’s listing decisions, either on appeal or by its own initiative, and SEC’s OCIE periodically inspects the SRO’s listing program to ensure compliance with the market’s listing requirements. In all U.S. markets, quantitative and qualitative listing requirements for equities have generally addressed the same or similar factors. Two aspects of the quantitative listing requirements are noteworthy. First, the minimum thresholds for meeting them varied according to the characteristics of the companies the markets sought to attract. Second, initial listing requirements were generally higher than continued listing requirements. Qualitative listing requirements addressed corporate governance and other factors. The most significant difference between the equity listing requirements of Amex and those of other U.S. stock markets was that Amex was one of only two markets that retained the discretion to initially list companies that did not meet all of its quantitative requirements. 
Amex’s quantitative initial listing guidelines for equities have generally addressed factors that are the same as or similar to those addressed by the initial listing standards of the other U.S. stock markets, including factors such as minimum share price, stockholders’ equity, income, market value of publicly held shares, and number of shareholders. However, the minimum thresholds for meeting the requirements of each market have varied to reflect the differences in the characteristics—such as size—of the companies that each market targeted for listing. For example, Amex has marketed itself as a niche market designed to give growth companies access to capital and to the markets. A company could qualify for initial listing on Amex under one of two alternatives. Under both alternatives, a company was required to have a minimum share price of $3 and minimum stockholders’ equity of $4 million (see table 1). In addition, under one alternative, a company could qualify for listing with no pretax income, a minimum market value of publicly held shares of $15 million, and a 2-year operating history. Under the other alternative, a company was required to have minimum pretax income of $750,000, either in the latest fiscal year or in 2 of the most recent 3 fiscal years, and a minimum market value of publicly held shares of $3 million. The Nasdaq SmallCap Market focused on smaller companies that were generally similar in size to those listed on Amex, and its listing standards and minimum thresholds were similar to Amex’s. To be eligible for listing on the Nasdaq SmallCap Market, a company was required to have, among other things, a minimum share price of $4, a minimum market value of publicly held shares of $5 million, a 1-year operating history, and either a minimum net income of $750,000 in the latest fiscal year or in 2 of the most recent 3 fiscal years, or $5 million of stockholders’ equity. 
Alternatively, if the company did not meet the operating history, income, or equity requirements, the minimum market value of all shares was required to be $50 million. In contrast to Amex and the Nasdaq SmallCap Market, the Nasdaq National Market and NYSE targeted larger companies, and their listing standards had higher minimum thresholds. For example, the Nasdaq National Market required in part that listing companies have a minimum of $1 million in pretax income in the latest fiscal year or in 2 of the 3 most recent fiscal years, along with a minimum market value of publicly held shares of $8 million, depending on the listing alternative. In comparison, NYSE required a company to have, among other things, a minimum total pretax income of $6.5 million for the most recent 3 years and a minimum market value of publicly held shares of $60 million or $100 million, depending on the listing alternative. The quantitative continued listing requirements (the minimum thresholds that listed companies must maintain to continue to be listed) were generally lower than those for the initial listing requirements (see table 1). For example, although Amex’s initial listing guidelines required, under one alternative, that a company have at least $4 million of stockholders’ equity and $750,000 in pretax income, a company could remain in compliance with the continued listing guidelines even if it had losses in 3 of the last 4 years (beginning with its listing date), provided that it maintained $4 million in stockholders’ equity. Such differences between initial and continued listing requirements were typical of all the U.S. markets. The qualitative listing requirements for equities in all U.S. markets addressed corporate governance requirements as well as various other factors. Corporate governance requirements are generally concerned with the independence of corporate management and boards of directors, as well as with the involvement of shareholders in corporate affairs. 
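The Amex quantitative initial listing guidelines described above amount to a simple two-alternative eligibility test, which can be sketched as follows. This is an illustrative simplification only, not an official Amex or GAO tool: the function name and parameters are our own, and the pretax income test is reduced to a single figure rather than the latest-year or 2-of-the-most-recent-3-years formulation in the guidelines.

```python
# Illustrative sketch of Amex's two quantitative initial listing
# alternatives as reported here. Hypothetical helper, not an official
# tool; dollar thresholds are taken from the report, and the pretax
# income test is simplified to a single figure.

def meets_amex_initial_guidelines(share_price, stockholders_equity,
                                  pretax_income,
                                  market_value_public_shares,
                                  operating_history_years):
    """Return True if either Amex initial listing alternative is met."""
    # Requirements common to both alternatives: minimum $3 share price
    # and $4 million of stockholders' equity.
    if share_price < 3 or stockholders_equity < 4_000_000:
        return False
    # Alternative 1: no income requirement, but a $15 million market
    # value of publicly held shares and a 2-year operating history.
    alt1 = (market_value_public_shares >= 15_000_000
            and operating_history_years >= 2)
    # Alternative 2: $750,000 in pretax income (simplified here) and a
    # $3 million market value of publicly held shares.
    alt2 = (pretax_income >= 750_000
            and market_value_public_shares >= 3_000_000)
    return alt1 or alt2

# A company with no pretax income but a large public float and a
# 2-year history qualifies under alternative 1:
print(meets_amex_initial_guidelines(3.00, 4_000_000, 0,
                                    15_000_000, 2))  # True
```

As the report notes, Amex retained discretion to list companies that did not satisfy these thresholds, so a check like this reflects the guidelines themselves, not actual listing outcomes.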
These requirements address such factors as conflicts of interest by corporate insiders, the composition of the audit committee, shareholder approval of certain corporate actions, annual meetings of shareholders, the solicitation of proxies, and the distribution of annual reports. U.S. markets may also consider various other qualitative factors when considering a company for listing. These factors are inherently subjective and are not subject to comparison among markets. For example, Amex’s guidelines stated that even though a company may meet all of the exchange’s quantitative requirements, it may not be eligible for listing if it produces a single product or line of products, engages in a single service, or sells products or services to a limited number of companies. In addition, in making a listing decision, Amex would consider such qualitative factors as the nature of a company’s business, the market for its products, the reputation of its management, and the history or recorded pattern of its growth, as well as the company’s financial integrity, demonstrated earning power, and future outlook. Although all U.S. markets had rules giving them the discretion to apply additional or more stringent requirements in making an initial or continued listing decision, only Amex and Nasdaq retained the discretion to initially list companies that did not meet their quantitative requirements. The Amex listing guidelines stated that the exchange’s quantitative guidelines are considered in evaluating listing eligibility but that other factors are also considered. As a result, Amex might approve a listing application even if the company did not meet all the exchange’s quantitative guidelines. 
Amex believed that it was important for the exchange to retain discretion to approve securities for initial listing that did not fully satisfy each of its quantitative requirements because it would be impossible to include every relevant factor in the guidelines, especially in an evolving marketplace. As of September 7, 2001, Amex had not agreed to implement OCIE’s recommendations related to the exchange’s use of its discretion in making listing decisions. Amex was unwilling to relinquish its discretionary authority or to modify its stock symbols to address OCIE’s concerns. OCIE officials told us that if these recommendations were not addressed, OCIE would include them among the open significant recommendations that are to be reported annually to the SEC Commissioners. OCIE reported in April 2001 that the Amex listing department was generally thorough in its financial and regulatory reviews of companies seeking to be listed on the exchange. However, OCIE also reported that Amex was using its discretionary authority more often than was appropriate to approve initial listings that did not meet the exchange’s quantitative guidelines, and that it did so without providing sufficient disclosure to the investing public. OCIE reported that the percentage of companies Amex listed that did not meet the exchange’s initial quantitative guidelines increased from approximately 9 percent for the 20 months between January 1, 1998, and August 31, 1999, to approximately 22 percent for the subsequent 14.5 months ending on November 13, 2000. OCIE noted that although Amex’s listing guidelines are discretionary, investors rightfully presume that the companies listed on Amex generally meet its quantitative and qualitative guidelines. In response to concerns that the investing public was not receiving sufficient information about the eligibility of companies to trade on Amex, OCIE recommended that Amex amend its rules to provide mandatory initial quantitative listing requirements. 
Until the mandatory listing requirements are in place, OCIE recommended that Amex provide some form of public disclosure to identify companies that do not meet its initial listing guidelines. For example, Amex could attach a modifier to the trading symbols of these companies. The report indicated that another alternative would be to issue a press release each time Amex lists a company that does not meet its quantitative guidelines. However, OCIE officials said that a press release was not the preferred form of public disclosure because it was a one-time occurrence, while a symbol modifier would accompany a listing until the company complied with Amex listing requirements. OCIE also expressed concerns about Amex’s use of its discretionary authority in making continued listing decisions. The concerns it raised in its April 2001 inspection report were similar to those raised in a 1997 report. In both reports, OCIE concluded that Amex did not identify noncompliant companies in a timely manner and that it deferred delisting actions for too long and without good cause. In addition to citing lapses in Amex’s timely identification of companies that did not meet its continued listing guidelines, OCIE reported in 2001 that for 5 of 34 companies reviewed, or 15 percent, Amex either granted excessive delisting deferrals or did not begin delisting proceedings in a timely manner. Also, we learned from Amex that 71 companies—about 10 percent of the exchange’s 704 listings—did not meet all aspects of its continued listing guidelines as of July 31, 2001. Of these, 12 companies had been out of compliance with its guidelines for more than 2 years, and 20 companies had been out of compliance for between 1 and 2 years (see table 2). In addition, under a November 2000 Amex rule change, listed companies were required to issue a press release to inform current and potential investors when Amex notified the companies of a pending delisting decision. 
According to Amex, the exchange had sent notices of potential delisting to 18 companies between the time of the rule change and August 30, 2001. Amex informed us that these companies had not been in full compliance with the continued listing guidelines for an average of about 6.5 months before receiving the notice. In response to the concerns OCIE expressed in 1997 about Amex deferring delisting action without good cause, the exchange agreed to review on a quarterly basis the status of companies that did not meet its continued listing standards and to document its rationale for allowing noncompliant companies to remain listed. OCIE believed that by more closely scrutinizing the actions that companies were taking to comply with the exchange's continued listing guidelines, Amex would be more likely to delist companies that were noncompliant for excessive periods. However, OCIE found in its most recent inspection that although Amex had performed the agreed-upon quarterly reviews, the exchange was still not taking timely action to delist noncompliant companies. OCIE recommended in its 2001 inspection report, as it had in its 1997 report, that Amex (1) identify in a more timely manner the companies that did not comply with its continued listing guidelines; (2) grant delisting deferrals to noncompliant companies only if the companies could show that a reasonable basis existed for assuming they would return to compliance with the listing guidelines; (3) document reviews of each company's progress in coming into compliance with the listing guidelines; and (4) place firm time limits on the length of delisting deferrals. The report also recommended that Amex append a modifier to the company's listing symbol or devise an alternative means of disclosure to denote that a company was not in compliance with Amex's continued listing guidelines. As of September 7, 2001, OCIE and Amex were in ongoing discussions about the actions Amex would take to address OCIE's recommendations.
However, in responding to OCIE’s 2001 inspection report and in subsequent discussions with OCIE officials, Amex indicated that it did not want to relinquish its discretionary authority or to modify its stock symbols. Amex stressed the importance of being able to evaluate a company’s suitability for listing on a case-by-case basis. The exchange further responded that its published listing policies put potential investors on notice that Amex would evaluate an applicant based on a myriad of factors and might approve companies for listing that did not meet all of its quantitative guidelines. In addition, Amex cited the November 2000 rule change under which companies are required to issue a press release to inform investors of a pending delisting decision. Amex officials also told us that investors could obtain sufficient information about a company’s operating condition from other public sources, obviating the need for a stock symbol modifier or other public notice. OCIE officials said that they believed additional disclosure to the investing public would be necessary until Amex turned its equity guidelines into firm standards. The officials remained concerned that individual investors were unaware that Amex’s listing guidelines provided broad discretion in making listing decisions. They emphasized that they were concerned about Amex’s discretion to list companies that did not meet its quantitative guidelines, stressing that they did not want to remove Amex’s discretion to apply additional or more stringent requirements in making listing decisions. Further, although the OCIE report acknowledged that alternative disclosure mechanisms existed, OCIE officials said that attaching a modifier to a stock’s listing symbol to indicate that a stock did not meet either the initial or continued listing standards would provide the broadest and therefore most preferred type of disclosure. 
For example, a company’s press release making public a delisting decision would not be a preferred form of disclosure because, depending on the circumstances, a company could remain out of compliance with Amex’s continued listing requirements for months or years without being subject to a delisting decision. To address this concern, NYSE requires a company to issue a press release when the exchange notifies the company that it does not meet the continued listing requirements. Nonetheless, a press release is a one-time notice and, as such, may limit potential investors’ awareness of a company’s listing status. Amex also expressed concern that OCIE was imposing strict requirements on its market that would not be applicable to other markets. Amex specifically noted that neither the Nasdaq National Market nor NYSE appended a symbol to listed securities that did not meet their continued listing requirements. Amex officials told us that requiring Amex to do so could mislead investors into believing that other markets do not follow listing practices similar to those of Amex. Amex also said that a modifier would place an unwarranted negative label on the company and send an inappropriate message to the market. As noted above, companies listed on Amex have more closely resembled those listed on the Nasdaq SmallCap Market than those listed on the Nasdaq National Market. According to a Nasdaq official, the Nasdaq SmallCap Market has used a modified listing symbol for all companies that fall below its continued listing requirements since the market began operating in 1982, and 10 stocks had modified symbols as of August 15, 2001. Nonetheless, OCIE officials said that they are in the process of inspecting the listing programs at Nasdaq and NYSE and would, if they determined that companies were listed that did not meet the markets’ equity listing standards, recommend that stock symbol modifiers be used to identify such companies. 
Finally, Amex said that a November 2000 rule change, as well as significant staffing changes that include a new department head, were having the effect of reducing the number of stocks approved for listing that did not meet the exchange’s quantitative guidelines. According to Amex, from November 1, 2000, through August 27, 2001, 6 of the 39 new listings—approximately 15 percent—were granted exemptions to the exchange’s quantitative listing guidelines. Five companies were approved for listing based on an appeal to the Committee on Securities, and one company was approved by the listing department staff because it had “substantially” met all of the exchange’s initial listing guidelines. According to Amex, the determination of substantial compliance was based on the fact that the applicant had met all the exchange’s guidelines, except that the company’s price at the time of approval was $2.9375, instead of the $3.00 minimum required by the guidelines. As discussed earlier, OCIE had found that 22 percent of new listings for a prior period had been granted exemptions. Amex officials said that they expected the downward trend to continue in the number of stocks approved for listing that did not meet the exchange’s quantitative guidelines. OCIE officials told us that they had considered the changes to the Amex listing program in making their recommendations. In a 1998 report, we recommended that the SEC Chairman require OCIE to report periodically on the status of all open, significant recommendations to the SEC Commissioners. Our rationale was that involving the Commissioners in following up on recommendations would provide them with information on the status of corrective actions that OCIE had deemed significant. Also, because the Commissioners have the authority to require the SROs to implement the staff’s recommendations, reporting to them would provide the SROs with an additional incentive to implement these recommendations. 
After preparing its first annual report in August 1998, including both significant recommendations on which action had been agreed to but not completed and recommendations that had been rejected, OCIE determined that future reports would include only the status of significant recommendations that an SRO had expressly declined to adopt or had failed to adequately address. Reflecting the seriousness of their concerns about the open recommendations related to Amex’s use of its discretionary authority in making initial and continued listing decisions, OCIE officials told us that in the absence of an Amex agreement to adequately address these recommendations, OCIE would include them among the open significant recommendations to be reported annually to the SEC Commissioners. Amex officials told us that the exchange was fulfilling its SRO responsibilities related to its equity listing operations in part by individually monitoring the status of companies that did not meet its continued listing guidelines and, beginning in January 2001, by summarizing related information in monthly reports to management. These monthly reports provided information on the output of the department’s activities, including the names and total number of companies that did not meet the continued listing guidelines, the reasons that individual companies did not meet the guidelines, the date of the latest conference with each company to discuss its listing status, the total number of such conferences held, and the total number of decisions made on the basis of these conferences. The Amex listing department did not, however, prepare management reports that aggregated and analyzed overall statistics to measure program results over time. As a result, Amex could not demonstrate the effectiveness of its exceptions-granting policies or its initial and continued listing guidelines. 
For example, Amex did not routinely aggregate or analyze statistics on the percentage of applicants listed that were granted exceptions to initial or continued listing guidelines, or on the length of time that companies were not in compliance with the continued listing guidelines and their progress in coming back into compliance with them. Collecting and analyzing such data over time, especially in conjunction with the outcomes for these companies—whether they achieved compliance or were delisted—could provide Amex and OCIE with an indicator of the effectiveness of Amex’s process for granting exceptions. Analysis of this information could also help Amex and OCIE determine whether a significant difference exists between the outcomes for companies that meet the listing guidelines and those that do not. Also, although Amex told OCIE that it continually “monitors” to determine whether its guidelines need to be revised, Amex did not develop and aggregate statistics on the number of companies delisted or on the reasons for delistings, such as noncompliance with listing requirements or a move to another market. As indicated above, Amex provided us with some of this information in response to a specific request but also told us that the listing department did not routinely aggregate such information for management purposes. Collected and analyzed over time, this information could provide Amex and OCIE with an indicator of the effectiveness of Amex’s initial and continued listing guidelines and, therefore, could be useful in identifying appropriate revisions to them. Other markets have developed this kind of management report. In response to concerns about the effectiveness of Nasdaq’s listing department, we recommended in 1998 that SEC require NASD to develop management reports based on overall program statistics. 
The resulting quarterly reports to senior Nasdaq management and OCIE include data on the number and disposition of listing applications, number and reasons for noncompliance with continued listing standards, disposition of companies that do not comply with the continued listing standards, requests for and results of hearings, status of companies granted temporary exceptions to the continued listing standards as a result of hearings, and number of and reasons for delistings. As a result of a 1998 OCIE recommendation, NYSE submits reports containing similar information to the NYSE Board of Directors and, upon request, to OCIE. According to an OCIE official, the resulting quarterly reports are useful for monitoring the listing activities of these markets. Amex’s use of its discretion to initially list and continue to list companies that do not meet the exchange’s quantitative guidelines for equities could mislead investors, who are likely to assume that the companies listed on Amex meet the exchange’s listing guidelines. Because investors are entitled to clear information for use in making investment decisions, they should be informed when listed companies do not meet these guidelines. Amex has reiterated its concern about the potentially negative impact of being the only market to publicly identify listings that do not meet its guidelines. The Nasdaq SmallCap Market already uses stock symbol modifiers for companies that do not meet its continued listing standards. Also, OCIE officials told us they would recommend that other markets disclose noncompliance with their continued listing standards. (OCIE did not identify noncompliance with initial listing standards as an issue.) Ultimately, Amex could avoid concerns about the negative impact of public disclosure by adopting firm quantitative guidelines. 
In the meantime, including the recommendations that Amex rejected in the OCIE annual reports to the SEC Commissioners—who have the authority to require their implementation—would provide an additional incentive for Amex to act. Notwithstanding Amex’s expectation that changes to its listing program would result in diminished use of its discretion, the ongoing concerns about weaknesses in program operations and the potentially negative impact of exchange practices on public confidence warrant continued monitoring of Amex’s listing program. Both Amex and OCIE could use routine management reports that reflect the performance of the exchange listing program to improve oversight of the program. Amex officials did not use aggregated and analyzed information on the results of the listing process to help judge its overall effectiveness, including that of its exceptions-granting policies or its initial and continued listing guidelines. Such information would include, among other things, the number and percentages of companies listed that have exceptions to the initial and continued listing guidelines, the number and percentages of companies in each group that are delisted, the reasons for the delistings, and the turnover rate for listings. Aggregating and analyzing such information could help Amex and OCIE to identify and address weaknesses in Amex’s listing program operations. As part of SEC’s ongoing efforts to ensure that Amex addresses weaknesses in the management of its equity listing program, we recommend that the Chairman, SEC, (1) direct Amex to implement mandatory quantitative equity listing requirements or provide ongoing public disclosure of noncompliant companies, and (2) require Amex to report quarterly to its Board of Governors on the operating results of its equity listing program and make these reports available to OCIE for review.
Such reports should contain sufficient information to demonstrate the overall effectiveness of the Amex equity listing program, including, at a minimum, that of its exceptions-granting policies and its initial and continued listing guidelines. We obtained written comments on a draft of this report from Amex and SEC officials. The written comments are presented in appendixes I and II, respectively. Amex committed to taking action to address our recommendation for improving public disclosure of its listing requirements by replacing its discretionary guidelines with mandatory initial and continued listing standards (see appendix I, exhibits A and B). Also in response to our recommendation, Amex committed to enhancing its management reports as they relate to its initial listing program. SEC officials commented that they were pleased that Amex would be making changes to its listing program that would address the findings and recommendations outlined in our report, and they said they would continue working with Amex to ensure that the proposed changes are implemented effectively. Amex noted in its comment letter that its proposals are broad and that the various details would be finalized as part of the rule approval process, which involves SEC. In earlier discussions with Amex about its draft proposals, we expressed the view that Amex’s rules would provide for greater investor protection if they included specific time frames for notifying the public about material events related to a company’s listing status. For example, such time frames would provide for expeditiously notifying the public after Amex advises a company that delisting proceedings are to be initiated. We also observed that Amex had not established other critical time frames for procedures such as advising a company that it does not meet the exchange’s continued listing requirements. 
Amex indicated in its comment letter that it intends to include applicable time frames as it works out the details of its proposals. SEC officials told us that they would work with Amex to ensure that appropriate time frames are established. In agreeing to enhance its management reports to address our recommendation, Amex acknowledged the potential value of these reports in light of proposed changes to its initial listing requirements. Under these proposed changes, companies could qualify for initial listing under Amex’s “regular” listing standards or, subject to mitigating circumstances, under its less stringent “alternative” standards. Amex committed to enhancing its management reports with information on companies that have been approved under the proposed alternative standards to provide for executive management review of the continued status of such companies, as compared with those approved for listing pursuant to its regular listing standards. Amex believes that its enhanced management reports should be useful in providing feedback on the application of the alternative standards to the Amex Board of Governors, Amex Committee on Securities, and SEC. SEC officials told us that they would use the enhanced reports to monitor implementation of the alternative standards. Although we support the changes proposed by Amex, we believe that the management reports would be of even greater use to Amex and SEC in their oversight if they included data on the effectiveness of Amex’s practices for continued listings in addition to data on the exchange’s exceptions-granting practices for initial listings. Our report discussed the kinds of aggregated and analyzed data that would be important to include in Amex’s management reports and that Nasdaq and NYSE include in their reports. Amex would benefit by working with SEC to ensure that the exchange’s reports contain similar information. 
To describe the key differences between the Amex initial and continued equity listing guidelines and the equity listing standards of other U.S. stock markets, we compared the quantitative and qualitative guidelines and standards of the seven U.S. markets that are registered to trade stock and that have listing requirements. These markets include six national securities exchanges—Amex, the Boston Stock Exchange, the Chicago Stock Exchange, NYSE, the Pacific Exchange, and the Philadelphia Stock Exchange—and one national securities association, the Nasdaq Stock Market. The seventh national securities exchange, the Cincinnati Stock Exchange, trades only stocks that are listed on other exchanges and does not have listing standards. We also interviewed officials from SEC’s OCIE and from Amex, Nasdaq, and NYSE to gain a further understanding of the initial and continued listing requirements of each market. This report places greater emphasis on the results of our comparison of Amex guidelines with the standards of Nasdaq and NYSE, because about 98 percent of U.S. common stocks were subject to the listing requirements of one of these three markets at the time of our review. In reviewing OCIE recommendations to Amex for improving its equity listing program, we discussed the contents of the April 2001 inspection report and Amex’s written response to it with officials of OCIE and Amex’s Listings Qualifications Department and Office of General Counsel, focusing on the areas of disagreement between OCIE and Amex. Additionally, we examined OCIE’s 1997 inspection report on Amex’s listing activities, Amex’s response, and associated correspondence to determine the nature of weaknesses identified in the OCIE inspection and how they were resolved. We also reviewed related GAO reports. To examine how Amex monitors the effectiveness of its equity listing department operations, we interviewed Amex and OCIE officials. 
We also reviewed related GAO reports and examined the Nasdaq and NYSE quarterly management reports that are provided to OCIE. We conducted our work in Chicago, IL; New York, NY; and Washington, D.C., from November 2000 through October 2001, in accordance with generally accepted government auditing standards. As agreed with you, unless you publicly release its contents earlier, we plan no further distribution of this letter until 30 days from its issuance date. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and the House Committee on Financial Services; the Chairman of the House Energy and Commerce Committee; and other interested congressional committees and organizations. We will also send copies to the Chairman of SEC and to the Chairman and Chief Executive Officer of Amex. Copies will also be made available to others upon request. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678, [email protected], or contact Cecile Trop, Assistant Director, at (312) 220-7705, [email protected]. Key contributors include Neal Gottlieb, Roger Kolar, Anita Zagraniczny, and Emily Chalmers.
The Securities and Exchange Commission (SEC) has indicated that one-third of Amex's new listings did not meet the exchange's equity listing standards. Amex's listing guidelines address factors that are the same or similar to those addressed by other U.S. stock markets. Quantitative requirements addressed share price, stockholders' equity, income, and market value of publicly held shares. However, the minimum thresholds for meeting these requirements varied to reflect the differences in the companies that each market targeted for listing. The most significant difference between Amex's guidelines and the listing standards of other U.S. stock markets was that Amex was one of only two markets that retained discretion to initially list companies that did not meet all of its quantitative requirements. Amex had not implemented the Office of Compliance Inspections and Examinations' (OCIE) recommendations on the exchange's discretionary listing decisions. OCIE officials told GAO that in the absence of an Amex agreement to address the recommendations, they would include them among the open significant recommendations to be reported to the SEC Commissioners as a result of a 1998 GAO recommendation. The Commission can require Amex to implement OCIE's recommendations. Amex officials said that the exchange was fulfilling its self-regulatory organization responsibilities by individually monitoring the status of companies that did not meet its continued listing guidelines and by summarizing information in monthly reports to management.
Casualty assistance has evolved over the past several decades. In years past, survivors were notified of a servicemember’s death via telegram or a letter of condolence and were not provided assistance with applying for benefits. Today, casualty assistance has grown to encompass numerous benefits available to survivors as well as DOD and Coast Guard requirements with respect to the provision of casualty assistance for survivors. Section 562 of the National Defense Authorization Act for Fiscal Year 2006 required the Secretary of Defense to prescribe policy and procedures for the provision of casualty assistance that are, with some exceptions, uniform across the military departments. Additionally, we reported in 2006 that DOD did not have a comprehensive oversight framework or standards to monitor casualty assistance provided to survivors, among other things. We recommended that DOD develop an oversight framework that includes measurable DOD-wide objectives for casualty assistance programs and that the department incorporate standards, such as a comprehensive checklist of duties for casualty assistance officers, when revising its casualty matters instruction. In 2008 DOD issued its revised casualty matters instruction, which strengthened oversight of casualty assistance and required casualty assistance procedures to be uniform throughout DOD. A list of related GAO products is included at the end of the report. The present-day casualty assistance process entails numerous requirements, many of which must be addressed quickly following a servicemember’s death, as described in the casualty assistance guidance of DOD, its military services, and the Coast Guard. Soon after the death of a servicemember, the casualty assistance process begins by notifying the next of kin of the death. This is usually performed by a uniformed military servicemember who is accompanied by a chaplain or, if one is not available, by another member of the service.
The notification team is trained to professionally and compassionately deliver news that expresses the secretary of the service’s condolences and broadly describes the circumstances surrounding the servicemember’s death. Following notification, a casualty assistance officer begins assisting the person whom the deceased servicemember authorized to make funeral arrangements. The casualty assistance officer assists designated survivors in receiving the death gratuity payment that could help with any immediate financial needs the survivors may have. Casualty assistance officers also assist survivors with initiating the processes for obtaining federal benefits and entitlements, receiving the servicemember’s personal effects, and obtaining copies of any completed investigation reports associated with the servicemember’s death. The casualty assistance officer continues to assist the survivor until the survivor determines that he or she no longer needs the level of assistance provided by the casualty assistance officer or most benefits have been disbursed. Once the survivor no longer needs the assistance provided by casualty assistance officers, the survivor may choose to receive assistance from the long-term assistance programs, which are available to provide support throughout a survivor’s lifetime. Long-term assistance may include providing answers to survivors’ questions or help with issues concerning benefits. Eligibility for these benefits may be affected by life changes that occur during a survivor’s lifetime, such as remarriage or children turning 18 years of age. The provision of long-term case management for casualty assistance was prescribed in section 562 of the National Defense Authorization Act for Fiscal Year 2006. Since then, each of DOD’s military services and the Coast Guard, as detailed below, has provided long-term assistance to support survivors:

Army. The Army’s Survivor Outreach Services is its official program designed to provide long-term support to survivors of deceased soldiers. Survivor Outreach Services provides support for survivors through specially trained support coordinators and financial counselors.

Navy. The Navy Gold Star Program is its official program for providing long-term support to survivors of sailors who die while on active duty. The Navy Gold Star Program facilitates counseling and other support services, such as organizing survivor events. Additionally, the Navy Long Term Assistance Program is available to address questions or issues related to survivor benefits.

Air Force. The Air Force Families Forever program provides dedicated outreach and support to Air Force survivors. The program is organized to provide family care experts at Airman and Family Readiness Centers. It provides resources, support, and information to help survivors.

Marine Corps. The Marine Corps Long-Term Assistance Program is a permanent resource for survivors to ensure that they receive sustained assistance from the Marine Corps. The program provides outreach and assistance to Marine Corps survivors concerning any issues associated with the receipt of benefits and entitlements.

Coast Guard. According to officials, while the Coast Guard does not have a separate long-term assistance program, casualty assistance personnel are available to address survivors’ issues and concerns for as long as needed.

In addition to the support provided by the casualty assistance officers and the long-term assistance programs, the Gold Star Advocate Program is available at any point in the casualty assistance process to provide survivors with support and address issues that are raised by survivors regarding casualty assistance and the receipt of benefits. The Gold Star Advocate Program is co-located with the Casualty and Mortuary Affairs Office in the Military Community and Family Policy Office of OUSD(P&R).
Figure 1 provides a general overview of the casualty assistance process. DOD also established the Casualty Advisory Board, which is responsible for developing and recommending broad policy guidance related to casualty matters. According to OUSD(P&R) officials, the board discusses issues pertaining to casualty and survivor assistance. It also acts as the focal point with other federal agencies, veterans’ service organizations, and non-profit organizations to improve support and assistance for survivors. The board meets tri-annually and is composed of voting members representing each of DOD’s military services and the Coast Guard, as well as a member designated by the Chairman of the Joint Chiefs of Staff. Within DOD, Military OneSource represents another source of support available to survivors. Military OneSource is a DOD-funded program that provides comprehensive information on aspects of military life, such as coping with deployments and spousal employment and education, at no cost to active duty servicemembers and their families. For survivors of servicemembers who died while on active duty, officials told us that Military OneSource provides grief counseling, tax assistance (such as assistance with filing the deceased servicemember’s final tax return), and assistance with obtaining benefits. There are also organizations outside of DOD and the Coast Guard that provide casualty assistance. For example, the Department of Veterans Affairs (VA) administers two monetary benefits available to survivors: the Servicemembers’ Group Life Insurance program, which is purchased by the VA from an insurance company for military personnel; and Dependency Indemnity Compensation, which provides long-term monthly payments to eligible surviving spouses and children. The Social Security Administration also provides support to survivors through monthly payments. There are also several survivor advocacy groups that provide support to survivors. 
For example, Tragedy Assistance Program for Survivors, a non-profit organization, provides care to survivors through a national peer support network and connection to grief resources. The Gold Star Wives of America, Inc., is another survivor advocacy group that works to improve the benefits of surviving spouses. Finally, depending on the circumstances surrounding the servicemember’s death, the survivor may receive one of two types of survivor lapel buttons. Survivors of servicemembers whose deaths occurred in certain conflicts or military operations are entitled to wear the Gold Star Lapel Button, which is governed by statute and was established by Congress in 1947. Survivors of servicemembers who died while on active duty due to other circumstances receive the Next of Kin Lapel Button. DOD and the Coast Guard have taken steps to implement the Gold Star Advocate Program, in that they have designated Gold Star Advocates who have received, addressed, and reported a variety of issues raised by survivors. However, neither DOD nor the Coast Guard has developed policies to manage the program, including Gold Star Advocate Program roles, responsibilities, and procedures. Additionally, DOD and the Coast Guard have conducted outreach to survivors for the program, but they have not determined goals and metrics for outreach. In 2014, DOD and the Coast Guard took steps to implement the Gold Star Advocate Program to address the requirements of section 633 of the National Defense Authorization Act for Fiscal Year 2014. This provision required each service secretary to designate a servicemember or civilian employee to assist survivors by (1) addressing complaints regarding casualty assistance or receipt of benefits; (2) providing support regarding casualty assistance or receipt of benefits; and (3) making reports regarding the resolution of complaints, including recommendations regarding the settlement of claims with respect to benefits. 
Following passage of the act, in June 2014 DOD's military services and the Coast Guard identified the Gold Star Advocates within their respective departments. According to DOD officials, the Gold Star Advocates are subject matter experts in casualty matters. Additionally, according to OUSD(P&R) officials, in June 2015 DOD designated a department-level Gold Star Advocate who was retitled the DOD Gold Star Advocate Program Manager in January 2016. OUSD(P&R) officials stated that the Gold Star Advocate Program is intended to provide support to survivors by addressing issues raised by survivors. According to officials with OUSD(P&R), DOD's military services, and the Coast Guard, these issues are raised via several methods. For example, issues are received in emails and phone calls from survivors, and they are tracked by the Gold Star Advocates in spreadsheets, in email folders designated for tracking issues, or in databases. The Gold Star Advocates may also themselves bring issues to the program's attention. For example, if a service Gold Star Advocate determines that an issue stems from a gap in policy related to survivors, or that the issue may warrant a change in policy, the Gold Star Advocate can identify the issue for further review. Additionally, issues raised to the program are documented and tracked in a ledger maintained by the DOD Gold Star Advocate Program Manager. The ledger contains information on the issue, its origin, how it was raised to the program, and its ultimate resolution, among other things. Issues are also documented in briefing slides that are reported at tri-annual meetings of the Casualty Advisory Board, as the Casualty Advisory Board is able to recommend any policy changes that may be necessary. According to meeting minutes, all issues received since the last meeting, as well as any updates to the resolution of previously received issues, are reported on and discussed at these meetings.
Finally, according to DOD officials, the Gold Star Advocates from DOD’s military services and the Coast Guard hold monthly meetings with DOD’s Gold Star Advocate Program Manager during which they discuss issues raised to the program. DOD officials stated that the Gold Star Advocates primarily address survivors’ issues as they relate to casualty assistance policy. DOD officials stated that when survivors raise issues to the Gold Star Advocate Program it is usually not because they have received unsatisfactory casualty assistance, but rather because they disagree with the policies that govern the services they received. Several of the issues that the Gold Star Advocate Program addresses involve agencies or entities outside of DOD’s casualty assistance programs. In these cases, the program coordinates with the agency or entity that is best equipped to resolve the issue. According to OUSD(P&R) officials, the Gold Star Advocate Program ensures that the agency or entity is made aware of the issue, coordinates with the agency on a plan of action, and follows up to determine what action was taken and whether the issue was resolved. The Gold Star Advocate Program addresses a variety of issues raised by survivors. According to OUSD(P&R) officials, many survivor requests of the program are for replacements of the Gold Star Lapel or Next of Kin Lapel buttons provided to the primary next of kin of servicemembers who die in certain conflicts or military operations or on active duty due to other circumstances. Since 2014, the program has addressed 12 other discrete issues, including a travel and moving expense claim issue raised by a survivor who did not receive a moving expense settlement because the survivor had not submitted the requisite claim within the 2-year time limitation. The Gold Star Advocate Program reported the issue to the Defense Travel Management Office, which granted an exception to the time limitation and reimbursed the survivor for the moving expenses. 
As an example of an issue that is currently being addressed, the Gold Star Advocate Program was made aware that stepchildren of servicemembers who died while on active duty were having their DOD identification card privileges and medical benefits revoked when their biological parent remarried. The revocations were based on Defense Enrollment Eligibility Reporting System personnel’s interpretation of an instruction provision. The issue was reported to the Defense Human Resources Agency and the Defense Health Agency. The Defense Health Agency General Counsel found that the stepchildren should retain their benefits even if their biological parent remarries. Additionally, the Defense Human Resources Agency is working on a communications plan to disseminate this information. Appendix II provides a complete list of the issues addressed by the Gold Star Advocate Program through March 2016. According to DOD officials, few issues have risen to the level of the Gold Star Advocate Program because survivor issues are generally resolved by casualty assistance officers and the long-term assistance programs. For example, in 2015 the Marine Corps Long-Term Assistance Program addressed 29 cases of survivor issues with benefits and compensation and 30 cases of issues concerning personal effects, among others. Similarly, from October 2015 through the beginning of December 2015, the Navy Long Term Assistance Program addressed 24 survivor issues concerning benefits, among others. According to Air Force officials, most survivor issues are handled at family readiness centers at the installation level, and three issues were addressed by the Air Force Families Forever program. Army officials stated that one reason why issues are resolved without rising to the level of the Gold Star Advocate is because the Army has an active outreach program that allows them to address issues as they arise. 
DOD and the Coast Guard have taken steps to implement the Gold Star Advocate Program, but they have not established policies to manage the program. For example, DOD and the Coast Guard have designated Gold Star Advocates, but they have not clearly defined their roles and responsibilities in policy. Moreover, DOD and the Coast Guard have procedures for addressing and reporting on issues raised by survivors, but they have not established these procedures in policy. Federal internal control standards state the importance of internal control activities, such as policies and procedures, to help ensure that management’s directives are carried out and that the organization’s missions, goals, and objectives are met. Internal controls also help ensure compliance with laws and regulations. A good internal control environment requires that the organizational structure clearly defines key areas of authority and responsibility and establishes appropriate lines of reporting. Establishing policy for a program can aid in establishing internal control and defining the program’s roles, responsibilities, and procedures. For example, DOD’s casualty matters instruction defines the Casualty Advisory Board’s roles, responsibilities, and procedures for developing and recommending policy guidance related to casualty matters. Table 1 lists the key statutory requirements of section 633 of the National Defense Authorization Act for Fiscal Year 2014, the extent to which DOD and the Coast Guard have implemented those requirements, and whether DOD and the Coast Guard have issued policy on those requirements. Existing policies governing casualty assistance matters have not been updated to include policies for the Gold Star Advocate Program because the Gold Star Advocate Program was just implemented in 2014. According to DOD officials, the Gold Star Advocate Program is planned for inclusion in its revision to its casualty matters instruction. 
In an April 2015 update provided to Congress on the implementation of section 633 of the National Defense Authorization Act for Fiscal Year 2014, DOD stated that the revision was expected to be published in summer 2016; however, according to OUSD(P&R) officials, the revision is not expected to be completed until 2017, due to internal review and approval procedures and the necessity of issuing another casualty-related policy first. While OUSD(P&R) officials provided a timeline for when they anticipate issuing the revised casualty matters instruction, they did not provide documentation of a draft. According to a Coast Guard official, the Gold Star Advocate Program is also planned for inclusion in a revision of its casualty matters instruction; however, officials stated that the revision process will take several years. Therefore, the Coast Guard was also not able to provide documentation of a draft. While the Gold Star Advocate Program has not yet been included in the casualty matters instructions for DOD or the Coast Guard, interim policy covering the Gold Star Advocate Program could be promulgated in other ways. For example, according to OUSD(P&R) officials, interim policy for the program could be established in a charter. Similarly, DOD and the Coast Guard could issue memoranda establishing the roles, responsibilities, and procedures for the program. However, according to DOD and Coast Guard officials, although the program is ultimately planned for inclusion in the revisions to the casualty matters instructions, no policies covering the program have been established in the interim.
Until policies to govern the Gold Star Advocate Program are established that outline roles, responsibilities, and procedures, it may be difficult for DOD and the Coast Guard to ensure that the program’s mission, objectives, and statutory requirements under section 633 of the National Defense Authorization Act for Fiscal Year 2014 are carried out consistently and sustained. In addition to taking steps to implement the Gold Star Advocate Program, DOD and the Coast Guard conduct some outreach for and publicize the program to survivors using a variety of both direct and indirect methods and media, but they have not developed outreach goals or metrics. As an example of direct outreach, DOD and the Coast Guard provide survivors with DOD’s A Survivor’s Guide to Benefits, which contains information on funeral and burial arrangements, benefits, and survivor support services, among other things. The guide also contains a section on the Gold Star Advocate Program and provides contact information for the Gold Star Advocates of each of DOD’s military services and the Coast Guard, along with the contact information for the DOD Gold Star Advocate. The guide explains that the purpose of the Gold Star Advocate Program is to provide support to survivors through addressing issues raised by survivors. It informs survivors that they can contact the Gold Star Advocates if they have any concerns with the casualty assistance provided to them. Survivors are also made aware of the program through letters sent by DOD’s military services and the Coast Guard to survivors at different times following the servicemember’s death. For example, the Navy sends letters at the 60-day and 1-year points following the death of a servicemember, according to Navy officials. The Gold Star Advocate Program may also be publicized to survivors through their casualty assistance officers. 
However, while casualty assistance officers could serve as a primary means of providing information about the program to survivors, of the casualty assistance officer training materials we reviewed, only the Marine Corps' materials contained specific information about the Gold Star Advocate Program. According to OUSD(P&R) officials, outreach for the Gold Star Advocate Program is also conducted through some less direct methods. For example, outreach for the program is conducted at survivor forums sponsored by DOD and the VA. These forums are held quarterly and are attended by DOD's military services and the Coast Guard, federal agencies involved in casualty matters, and survivor advocacy groups, according to DOD officials. The Gold Star Advocate Program is also publicized on the Military OneSource website, which provides contact information for the Gold Star Advocates of each of DOD's military services and the Coast Guard, along with that of the DOD Gold Star Advocate, and explains the purpose of the program. OUSD(P&R) officials stated that they monitor the number of "hits" the Military OneSource website receives for its Gold Star Advocate Program webpage. Each of DOD's military services and the Coast Guard also has a separate website that provides long-term assistance information for survivors. These websites identify the long-term assistance services available as well as casualty assistance support contact information; however, only the websites of the Marine Corps and Army contain information on their respective Gold Star Advocates and the Gold Star Advocate Program. Finally, an article announcing the Gold Star Advocate Program was published in July 2014 on the Gold Star Wives of America, Inc., website, informing members of that advocacy group.
While OUSD(P&R), DOD’s military services, and the Coast Guard conduct outreach and publicize the Gold Star Advocate Program to survivors using a variety of methods and media, they have not determined goals or metrics for this outreach. Key practices for conducting consumer education include, among other things, defining goals and objectives and establishing metrics to measure success in achieving objectives. For example, a goal can be to increase awareness of the Gold Star Advocate Program. Once a goal is clearly defined, DOD and the Coast Guard can establish the necessary targets to measure the effectiveness of their outreach efforts. Establishing process and outcome metrics could also help DOD and the Coast Guard determine whether the current resources dedicated to outreach need to be adjusted. According to officials from each of DOD’s military services and OUSD(P&R), they do not have a goal or plan to reach back to survivors of servicemembers who died prior to the program’s implementation to increase awareness of the program among these survivors. OUSD(P&R) officials emphasized that reaching back to these survivors is not a statutory requirement. However, while the Gold Star Advocate Program is available to serve survivors of all servicemembers who died while on active duty, its direct outreach methods are primarily directed toward the survivors of servicemembers who have died since the program was implemented. For example, survivors of servicemembers who have died since the program was implemented receive information about it from casualty assistance officers, DOD’s A Survivor’s Guide to Benefits, and the letters DOD’s military services and the Coast Guard send to survivors at different times following the servicemember’s death. 
Several of DOD’s military services noted resource constraints in making all survivors aware of the Gold Star Advocate Program, especially with respect to contacting survivors of servicemembers who died prior to the program’s implementation. OUSD(P&R) officials stated that some of these survivors may have progressed in their stages of grief such that direct outreach may now be more painful for them than helpful. According to Navy officials, the Navy Gold Star Program, which is separate from the Gold Star Advocate Program, contacted those survivors of servicemembers who died prior to the development of the Navy Gold Star Program to let them know of the Navy program and found that while some were appreciative of the contact, others were upset by it. According to Coast Guard officials, the Coast Guard is also planning to reach out to survivors of Coast Guard servicemembers who have died since September 2001, to make them aware of the Coast Guard’s casualty assistance services that are available to them. Although DOD does not have plans to directly contact survivors of servicemembers who died prior to the program’s implementation, most of the issues the Gold Star Advocate Program has received have originated with survivors of servicemembers who died prior to the implementation of the program. For example, of the 12 issues raised to the program, 7 originated from survivors of servicemembers who died prior to the program’s implementation. An additional 4 issues were raised to the program by multiple survivors, so there is not a servicemember date of death associated with these issues. Only one of the issues raised to the program originated from a survivor of a servicemember who died since the program was implemented. 
Additionally, as the Gold Star Advocate Program addresses issues that are primarily related not to specific benefits but to policy concerns, it is possible that more policy concerns could originate from survivors of servicemembers who died prior to 2014. For the issues raised to the Gold Star Advocate Program, the ways in which the survivor was made aware of the program were limited. For example, of the 12 issues, the survivors who raised 2 of them were made aware of the program through the Military OneSource website, and the survivor who raised 1 through "read-ahead" slides prepared by DOD for a survivor's forum. Three others were made aware of the program through familiarity with DOD's Casualty and Mortuary Affairs Office, which is where the Gold Star Advocate Program is managed. For the remaining 6 issues, either the manner in which the survivor was made aware of the program is unknown, the issue was raised by one of the Gold Star Advocates, or the issue was raised by multiple survivors, so that a single manner through which the survivor was made aware of the program is not applicable. Although DOD does not have a goal or plans to reach back to survivors who predate the Gold Star Advocate Program due to sensitivity concerns, there are other methods through which outreach for the program could be improved. For example, in 2007 DOD reported on outreach actions taken toward non-governmental organizations by providing DOD and service casualty office points of contact and telephone numbers. Providing this contact information was intended to aid in addressing survivor issues, as DOD noted that often these organizations are the first to discover that survivors have unresolved issues. Specifically, DOD noted that due to organizational meetings, forums, and chat-rooms on the internet, these organizations are in a unique position to discover any unresolved issues that survivors may experience long after their relationship with their casualty assistance officer has ended.
During our review we also heard from a survivor that information is often disseminated among survivors through chat-rooms and social media sites. While DOD may decide not to reach back to survivors of those servicemembers who died prior to 2014 due to sensitivity concerns, it is important to ensure that outreach activities are aligned with an overall outreach goal with associated metrics in order for DOD and the Coast Guard to establish the necessary targets to measure the effectiveness of their outreach efforts. Moreover, establishing process and outcome metrics could also help DOD and the Coast Guard determine whether the current resources dedicated to outreach need to be adjusted. If OUSD(P&R), DOD’s military services, and the Coast Guard do not develop goals, such as to increase awareness of the Gold Star Advocate Program, and metrics to assess their outreach for the program, some survivors may remain unaware of the casualty assistance available to them, and consequently the program may not be able to provide support to all survivors who need it. DOD has planned, designed, and implemented training to cover the duties required of casualty assistance officers that is consistent with some attributes of an effective training program, including providing survivors with information on their benefits and entitlements and other forms of casualty assistance. However, DOD’s method of collecting survivor feedback on the quality of casualty assistance received from casualty assistance officers—information that could aid in evaluating the effect of casualty assistance officer training on program performance—has a low response rate. DOD has developed a training program for casualty assistance officers as required by section 633 of the National Defense Authorization Act for Fiscal Year 2014. 
This section required DOD to develop a standardized comprehensive training program on casualty assistance, to ensure that casualty assistance officers provide the spouses and other dependents of servicemembers who have died while on active duty with accurate information on the benefits to which they are entitled, as well as other casualty assistance available to them. Additionally, section 562 of the National Defense Authorization Act for Fiscal Year 2006 required policy that would include the qualifications, assignment, training, duties, supervision, and accountability for the performance of casualty assistance responsibilities. We previously developed a framework for assessing strategic training programs in the federal government that summarizes attributes of effective training programs. This framework consists of the following set of components: (1) planning/front-end analysis, (2) design/development, (3) implementation, and (4) evaluation. In planning its casualty assistance training program—the first component of the framework—DOD determined the duties needed for casualty assistance officers. DOD identifies casualty assistance officer duties in its casualty matters instruction. For example, the instruction requires all service casualty assistance officers to assist survivors until benefits have been applied for and received; to deliver DOD’s A Survivor’s Guide to Benefits; to assist survivors in obtaining new identification cards; and to provide information on available legal assistance, among other duties. Furthermore, the casualty matters instruction identifies training requirements for casualty assistance officers that include, among other topics, an overview of benefits and forms preparation, grief and trauma awareness, and public affairs information. 
In addition to DOD-wide guidance on the duties of casualty assistance officers, the services also planned their respective casualty assistance training programs by publishing service-level guidance that identifies each service’s casualty assistance program and the duties required for that service’s casualty assistance officers. OUSD(P&R) officials stated that, while the casualty matters instruction provides DOD’s military services with the DOD standard for casualty assistance, it does not prescribe how the services must meet that standard. Additionally, the National Defense Authorization Act for Fiscal Year 2014, section 633, provides for variations in casualty assistance officer training so as to incorporate the traditional practices or customs of a particular service. As such, each service assigns varying duties to its casualty assistance officers, which incorporate service-specific practices and customs. For example, Army, Navy, and Marine Corps casualty assistance officers are uniformed servicemembers who provide casualty assistance as a secondary duty, assisting survivors only when assigned to a casualty assistance case. However, once assigned to a case, these servicemembers’ primary duty becomes providing assistance to survivors until all benefits and entitlements have been applied for or the survivor determines he or she no longer requires assistance. Conversely, according to Air Force officials, Air Force casualty assistance officers are civilian employees whose primary job is casualty assistance, including aiding survivors in obtaining benefits and educating Air Force personnel on casualty assistance. 
Additionally, while the Navy and Marine Corps assign all of their casualty assistance officers responsibility for notification of a servicemember's death, aiding survivors with funeral arrangements, and assisting survivors with obtaining benefits and entitlements, the Army assigns notification duty to one group of casualty assistance officers and assigns assistance with funeral arrangements and benefits and entitlements to another. Army officials stated that the Army separates these duties so that survivors do not have to see again the servicemember who delivered the news of their servicemember's death. The Air Force separates these three duties further, with one group of personnel responsible for notifications, a second group responsible for assistance with funeral arrangements, and a third group responsible for assistance with benefits and other entitlements. Table 2 identifies the title each service assigns to its casualty assistance officers, whether the duty is performed by uniformed servicemembers or civilian employees, and the duties assigned to each service's casualty assistance officers. DOD and its military services address the second component of the training assessment framework—design/development—by designing casualty assistance training programs to cover those duties identified in both the DOD casualty matters instruction and the service-specific guidance. For example, the Marine Corps assigns casualty assistance officers to notify survivors of a servicemember's death, assist survivors with funeral arrangements, and facilitate survivors applying for benefits and entitlements. Therefore, the Marine Corps designed its casualty assistance officer training to cover these duties.
On the other hand, since the Air Force divides up notification, assistance with funeral arrangements, and assistance with benefits and entitlements among three groups of personnel, the Air Force separates its casualty assistance training and covers only the duties assigned to each group in the separate training courses. In addition to the training programs designed by the services, OUSD(P&R) has developed a simulation training program for use by all service casualty assistance officers. Officials stated that OUSD(P&R) designed the program, with input from DOD's military services and the Coast Guard, to provide a standardized course of instruction on casualty assistance. However, the program also incorporates the customs and traditions unique to each service, depending on which service the casualty assistance officer selects at the beginning of training. The program consists of three modules—notification, funeral arrangements, and assistance with benefits and entitlements. At the beginning of each module, the program provides the casualty assistance officer with background information to prepare that officer for the upcoming scenario. Casualty assistance officers are then presented with videos of actors role-playing as survivors who "interact" with the casualty assistance officer by demonstrating some of the potential behaviors or responses the officer may experience with a survivor. Following the actors' dialogue, casualty assistance officers are presented with multiple written responses from which to choose. The scenarios are designed to portray real-life situations that casualty assistance officers may encounter while conducting their duties, and the actors' dispositions change depending on the response an officer chooses—correct responses result in favorable reactions, while incorrect responses may result in actors becoming uncooperative or angry.
After each response, a virtual coach provides casualty assistance officers with immediate feedback explaining why the response the officer chose was correct or incorrect. OUSD(P&R) officials stated that the training will be available to the services in summer 2016. DOD addresses the third component of the training assessment framework—implementation—by delivering training to casualty assistance officers to instruct them on their duties. Due to the varying roles and duties assigned to each service’s casualty assistance officers, the delivery of each service’s training varies in format and duration, as described below: Army. According to Army officials, Army casualty assistance officer training spans 3 days of instructor-led classroom training covering notification duties; assistance with funeral arrangements, benefits, and entitlements; and grief, bereavement, and self-care. Although Army casualty assistance officers undergo training to assist survivors with notifications, funeral arrangements, and provision of benefits and entitlements, they will only perform notification or assistance with funeral arrangements, benefits, and entitlements when assigned to a casualty assistance case. Navy. Navy casualty assistance officer training consists of five modules covering an introduction to casualty assistance, notifications, funeral arrangements, assistance with benefits and entitlements, and case studies. Navy officials stated that while most casualty assistance training is provided over 2 days of instructor-led classroom training, some Navy installations cover all five modules in 1 day. Air Force. The Air Force splits notification, mortuary affairs, and benefits assistance duties among three groups of personnel, with each group attending separate training. 
According to Air Force officials, the group of casualty assistance officers that provides notifications to survivors undergoes 1-2 hours of training, while the group that provides assistance with funeral arrangements and the group that provides assistance with benefits and other entitlements each attend 5 days of training. Marine Corps. According to Marine Corps officials, Marine Corps casualty assistance officers undergo 1 day of instructor-led classroom training that covers eight modules, including notification, mortuary affairs, benefits and entitlements, and grief and bereavement, among other topics. Table 3 summarizes the duration of training for each service’s casualty assistance officers. DOD’s military services have also made training available in accordance with the annual requirement in section 633 of the National Defense Authorization Act for Fiscal Year 2014. For example, Army casualty assistance guidance states that casualty assistance officer certification expires 1 year after completion of training. Marine Corps guidance states that all noncommissioned and commissioned officers who could potentially be assigned as casualty assistance officers are required to receive annual training. According to Air Force officials, Air Force casualty assistance officers who provide assistance with benefits and entitlements continue, after initial training, to participate in monthly training sessions, which cover any changes to benefits that the casualty assistance officers may need to be aware of. Additionally, casualty assistance officers receive refresher training every 3 years. According to Air Force officials, these casualty assistance officers assist survivors of both active duty servicemembers and retirees as part of their full-time position, unlike other service casualty assistance officers, who only conduct casualty assistance duties when assigned to a casualty assistance case. 
According to Navy officials, Navy casualty assistance officers may access web-based refresher training; however, the training is being updated. During our review, Air Force officials provided a draft of their casualty assistance instruction, which will include a requirement for the initial, monthly, and triennial refresher training for their casualty assistance officers. Air Force officials stated that this revision is planned for implementation in summer 2016. Additionally, during our review, Navy officials provided a draft of their casualty assistance instruction, which will require annual training for their casualty assistance officers. According to Navy officials, both the revised instruction and the updates to the web-based training will be implemented by the end of fiscal year 2016. Table 4 summarizes the frequency with which each service makes training available to its casualty assistance officers. Although the services have implemented training to cover the duties assigned to casualty assistance officers and have made training available in accordance with the annual requirement in section 633 of the National Defense Authorization Act for Fiscal Year 2014, DOD does not include that statutory requirement in its casualty matters instruction. Additionally, when we met with OUSD(P&R) officials, they were not aware of that statutory requirement and, therefore, did not require the services to conduct training annually. Officials stated, however, that they would incorporate the requirement for at least annual training for casualty assistance officers in the upcoming revision to their casualty matters instruction. Therefore, we are not making a recommendation on this issue. DOD and its services address the fourth component of the training assessment framework—evaluation—by conducting some evaluation of their casualty assistance officer training. 
However, current evaluation methods may not provide the indicators needed to fully evaluate the training’s effect and the quality of the assistance provided by casualty assistance officers. To measure the real effect of training, agencies need to develop indicators that help determine how training contributes to the accomplishment of agency goals and objectives. One commonly accepted model to evaluate training consists of levels of assessment that measure (1) participant reaction to and satisfaction with the training program; (2) changes in employee skills, knowledge, or abilities; and (3) the effect of the training on program results, among others. As part of a balanced approach, assessing training should consider feedback from employees, such as casualty assistance officers, and from customers, such as survivors, as well as organizational results. DOD and its services conduct some evaluations of casualty assistance officer training programs in accordance with the first level of assessment by measuring participant reaction to training. For example, according to service officials, DOD’s military services evaluate participant reaction to training by surveying casualty assistance officers upon their completion of training. However, currently not all of DOD’s military services survey casualty assistance officers after they have served in this function to gauge casualty assistance officer satisfaction with how the training prepared them to conduct their duties. According to casualty assistance officers we met with from the Army and Marine Corps, they may complete after-action reports to provide feedback on the services’ casualty assistance processes. While this could serve as a method of providing feedback on casualty assistance officer training, these after-action reports do not specifically focus on training. 
Additionally, OUSD(P&R) officials stated that DOD plans to survey casualty assistance officers on DOD’s simulation training after they have provided assistance to survivors in order to better gauge how the simulation training prepares casualty assistance officers to conduct their duties. Such indicators on how prepared casualty assistance officers felt due to the training they received could prove useful in evaluating the effect of casualty assistance officer training on program performance. DOD and its services also conduct knowledge checks either during training or upon its completion, which can assess changes in skills, knowledge, or abilities in accordance with the second level of assessment. For example, the Army requires its casualty assistance officers to take an exam covering notification procedures and assisting survivors with benefits and entitlements. Similarly, at the completion of each module of DOD’s simulation training, casualty assistance officers receive a score based on their professionalism, compassion, and knowledge. Casualty assistance officers must receive a score of 80 percent or greater to earn a certificate of completion for the simulation training. DOD’s method of addressing the final level of assessment—the effect of the training on program results—may not provide DOD with the information needed to determine the effect of training on program results or the quality of casualty assistance provided to survivors by casualty assistance officers. In assessing the effect of training on performance, agencies should incorporate the perspectives of different stakeholders— such as those of customers, or in this case survivors—in assessing the effect of training. DOD currently surveys survivors on the quality of casualty assistance provided to them by casualty assistance officers via a web-based survey that could provide information on the effect of training on program results. 
This survey is administered in accordance with section 562 of the National Defense Authorization Act for Fiscal Year 2006, which required that DOD prescribe policy for data collection regarding the quality of casualty assistance provided to survivors of deceased servicemembers. For example, the survey queries survivors on whether the casualty assistance they received provided them with accurate information. Responses to the survey questions range from “strongly agree” to “strongly disagree.” Feedback from survivors on the quality of casualty assistance provided to them by their casualty assistance officer could provide information on the effect of casualty assistance officer training on program performance and the quality of casualty assistance provided to survivors. However, the web-based survey has historically had a low response rate. For example, the survey conducted between October 2014 and March 2015 resulted in a survivor response rate of 10 percent. With such a low response rate, DOD has acknowledged that results should be interpreted cautiously. According to Army and OUSD(P&R) officials, the Army previously utilized a telephone survey of survivors with an approximate 80 percent response rate. According to OUSD(P&R) officials, DOD is planning to institute a similar telephone survey of survivors across DOD in 2016, but it is still too early to determine whether that survey will result in a higher response rate. Without more complete feedback from survivors, DOD may be missing valuable indicators to help evaluate how casualty assistance officer training contributes to improved program performance and the quality of casualty assistance provided to survivors. 
The Gold Star Advocate Program has been addressing issues raised by survivors of deceased DOD and Coast Guard servicemembers since it was implemented in 2014, and DOD and the Coast Guard have conducted some outreach to survivors using several methods to inform them of the program’s availability to provide support. However, DOD and the Coast Guard have not yet developed policies establishing roles, responsibilities, and procedures for the program; nor have they determined outreach goals, along with metrics by which to measure progress in attaining those goals. Without such policies, it may be difficult for DOD and the Coast Guard to ensure that the program’s mission, objectives, and statutory requirements under section 633 of the National Defense Authorization Act for Fiscal Year 2014 continue to be carried out. Moreover, without goals and metrics for outreach, DOD and the Coast Guard may miss opportunities to reach some survivors who may be unaware of the casualty assistance available to them from the Gold Star Advocate Program. Regarding DOD’s casualty assistance officer training program, without improved indicators for evaluating the effect of casualty assistance officer training on program performance, DOD may not have the information needed to improve the quality of casualty assistance provided to survivors. To help ensure that the Gold Star Advocate Program achieves its mission and objectives and to enhance outreach for the program, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in collaboration with the service secretaries, to take the following two actions: Develop interim policies to govern the program, to include identification of roles, responsibilities, and procedures; and Determine outreach goals and metrics by which to measure progress in attaining those goals. 
To help ensure that the Gold Star Advocate Program achieves its mission and objectives and to enhance outreach for the program, we recommend that the Commandant of the Coast Guard take the following two actions: Develop interim policies to govern the program, to include identification of roles, responsibilities, and procedures; and Determine outreach goals and metrics by which to measure progress in attaining those goals. To improve the efficacy of the training provided, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop indicators to help determine how casualty assistance officer training contributes to the quality of the casualty assistance program. We provided a draft of this report to DOD and the Department of Homeland Security (DHS) for review and comment. Both DOD and DHS, responding with respect to the Coast Guard, concurred with our recommendations. Written comments from DOD and DHS are reprinted in their entirety in appendixes III and IV, respectively. DOD also provided technical comments, which we have incorporated in the report where appropriate. In its comments, DOD noted concerns with the title of our draft report. Specifically, DOD stated that the report identifies issues with developing metrics and measurements and codifying policy regarding the Gold Star Advocate Program, rather than issues with casualty assistance provided to survivors. The Gold Star Advocate Program and casualty assistance officer training both represent subsets of DOD’s casualty assistance program. Therefore, we changed the report title to specifically identify the subsets which the report addresses and to reflect the recommendations contained in the report. DOD concurred with our recommendation to develop interim policies to govern the Gold Star Advocate Program, to include the identification of roles, responsibilities, and procedures. 
DOD stated that the policies regarding the Gold Star Advocate Program are being incorporated into the revision of its casualty matters instruction and that a charter is being established for the program in the interim. DOD also concurred with our recommendation to determine outreach goals and metrics for the program. DOD stated that questions regarding the Gold Star Advocate Program will be included in its planned telephonic survey of survivors, with a goal of having the majority of survivors interviewed being aware of the program. DOD also stated that the department will determine other outreach goals and metrics within the next 6 months. Additionally, DOD concurred with our recommendation to develop indicators to help determine how casualty assistance officer training contributes to the quality of the casualty assistance program. DOD noted that it has developed two surveys to gauge how effective casualty assistance officers found the simulation training developed by OUSD(P&R) to be (as discussed in this report). We agree that this is a good first step with respect to determining how casualty assistance officer training contributes to the quality of the casualty assistance program, but as we stated in our report, it would also be beneficial to incorporate the perspectives of stakeholders—such as survivors—in assessing the effect of training on program performance. DHS also concurred with our recommendation to develop interim policies to govern the Gold Star Advocate Program. DHS stated that the Coast Guard will publish a policy memorandum announcing the formal establishment of the program and outlining the program’s policies, procedures, and responsibilities. Additionally, DHS concurred with our recommendation to determine outreach goals and metrics for the program. DHS noted several steps that it plans to take, including creating a Gold Star Advocate Program webpage and developing measures for outreach effectiveness. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, the Secretary of Homeland Security, and the Commandant of the Coast Guard. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The scope of our engagement included the casualty assistance programs of the Department of Defense (DOD), the Army, the Navy, the Air Force, the Marine Corps, and the Coast Guard. We obtained data on the number of servicemembers who died while on active duty and the number of surviving dependents from the Defense Manpower Data Center and the Coast Guard, from January 2002 through November 2015. For our first objective, to determine the extent to which DOD and the Coast Guard have implemented the Gold Star Advocate Program, we compared DOD and Coast Guard policies on casualty matters and the meeting minutes of DOD’s Casualty Advisory Board, the panel responsible for developing and recommending casualty-related policy and guidance, with the statutory requirements of the National Defense Authorization Act for Fiscal Year 2014, section 633, for the Gold Star Advocate Program. We also compared DOD and Coast Guard policies on casualty matters with federal control standards, which state the importance of internal control activities, such as policies and procedures, to help ensure that management’s directives are carried out and that the organization’s missions, goals, and objectives are met. 
We also interviewed officials involved in the Gold Star Advocate Program at the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD(P&R)), the Army, the Navy, the Air Force, the Marine Corps, and the Coast Guard to understand how the Gold Star Advocates were designated, and the methods used to address and report on issues raised by survivors to the Gold Star Advocate Program. To determine the extent to which DOD and the Coast Guard have conducted outreach to survivors for the Gold Star Advocate Program, we compared DOD’s outreach for the program with best practices for consumer education planning. We reviewed section 633 of the National Defense Authorization Act for Fiscal Year 2014 for statutory requirements for the Gold Star Advocate Program related to outreach. We also reviewed DOD and Coast Guard policy for casualty matters, and DOD’s Casualty Advisory Board meeting minutes, for policies or guidance related to outreach for the program. We interviewed officials from the Gold Star Advocate Program at OUSD(P&R), the Army, the Navy, the Air Force, the Marine Corps, and the Coast Guard to determine how they conduct outreach for the program, and we interviewed a non-generalizable range of survivor advocacy groups and casualty assistance officers to determine their level of understanding of the Gold Star Advocate Program. For the second objective, to determine the extent to which DOD has developed a training program for casualty assistance officers consistent with attributes of an effective training program, we compared DOD’s casualty matters instruction, service-level casualty assistance guidance, and DOD and service casualty assistance officer training materials against the training assessment framework identified in A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. 
We also compared the casualty assistance officer training frequency requirements, if any, in DOD’s casualty matters instruction and service-level casualty assistance guidance to the National Defense Authorization Act for Fiscal Year 2014, section 633, which requires casualty assistance officer training no less often than annually. We conducted a content analysis of DOD’s casualty matters instruction and the service-level casualty assistance guidance and training materials to determine the extent to which DOD has developed training that identifies the duties needed for casualty assistance officers and links training to these duties. To do so, two analysts independently reviewed and assessed the casualty assistance officer duties identified in the DOD and service-level guidance to determine whether the training materials addressed the duties. The analysts then compared their results to identify any disagreements and reached agreement on all items through discussion. We also reviewed section 562 of the National Defense Authorization Act for Fiscal Year 2006, which required DOD to prescribe policy on data collection regarding the quality of casualty assistance provided to survivors, and the National Defense Authorization Act for Fiscal Year 2014, section 633, which required DOD to develop a training program on casualty assistance. We reviewed the DOD Survivor Survey, intended to collect survivor feedback on the quality of casualty assistance provided by the military services, among other things, and analyzed the response rate to the survey over the period of October 2014 through March 2015. We found data from the survey to be of undetermined reliability, since the response rate is too low to serve as a reliable source of information for evaluating the quality of casualty assistance provided to survivors. With such a low response rate, OUSD(P&R) officials also acknowledged that DOD Survivor Survey responses should be interpreted cautiously. 
We interviewed officials from OUSD(P&R) and DOD’s military services to determine how they implement their respective casualty assistance officer training programs. Finally, we interviewed casualty assistance officers to understand their experience serving as casualty assistance officers and the training they received to prepare them to serve in this capacity. Table 5 contains a complete list of the agencies and offices we contacted during the course of our review. We conducted this performance audit from June 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 6 provides a complete list of the issues received and addressed by the Gold Star Advocate Program through March 2016. The contents of the table are based on the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD(P&R))’s documentation of issues addressed by the program. In addition to the contact named above, Kimberly C. Seay (Assistant Director), Gustavo Crosetto, Clifton G. Douglas Jr., Cynthia Grant, Amie Lesser, Amanda Manning, Elisha Matvay, Michael McKemey, Shahrzad Nikoo, Terry Richardson, and Cheryl Weissman made major contributions to this report. Defense Health: Actions Needed to Help Ensure Combat Casualty Care Research Achieves Goals. GAO-13-209. Washington, D.C.: February 13, 2013. Veterans’ Pension Benefits: Improvements Needed to Ensure Only Qualified Veterans and Survivors Receive Benefits. GAO-12-540. Washington, D.C.: May 15, 2012. Military and Veterans’ Benefits: Analysis of VA Compensation Levels for Survivors of Veterans and Servicemembers. GAO-10-62. Washington, D.C.: November 13, 2009. 
Military Personnel: DOD Needs an Oversight Framework and Standards to Improve Management of Its Casualty Assistance Programs. GAO-06-1010. Washington, D.C.: September 22, 2006. Financial Management: Implications of Significant Recent and Potential Changes for the Actuarial Soundness of the Department of Defense Survivor Benefit Plan Program. GAO-06-837R. Washington, D.C.: July 26, 2006. Military Personnel: DOD Needs Better Controls over Supplemental Life Insurance Solicitation Policies Involving Servicemembers. GAO-05-696. Washington, D.C.: June 29, 2005. Military Personnel: Survivor Benefits for Servicemembers and Federal, State, and City Government Employees. GAO-04-814. Washington, D.C.: July 15, 2004. Military Personnel: Active Duty Benefits Reflect Changing Demographics, but Opportunities Exist to Improve. GAO-02-935. Washington, D.C.: September 18, 2002.
From January 2002 through November 2015, 17,911 servicemembers died while on active duty, leaving approximately 24,000 surviving dependents. The military services' casualty assistance programs guide these survivors through the casualty assistance process following the death of a servicemember. Senate Report 114-49 included a provision that GAO review the Gold Star Advocate Program and the training provided for casualty assistance officers. This report assesses the extent to which (1) DOD and the Coast Guard have implemented the Gold Star Advocate Program and conducted outreach to survivors; and (2) DOD has developed a training program for casualty assistance officers consistent with attributes of an effective training program. GAO analyzed statutes, DOD and Coast Guard policies on casualty matters, and DOD's military services' casualty assistance guidance and training materials. GAO interviewed officials involved in the Gold Star Advocate Program at DOD, its military services, and the Coast Guard—which is part of the Department of Homeland Security (DHS). The Department of Defense (DOD) and the Coast Guard took steps to implement the Gold Star Advocate Program in 2014 by designating Gold Star Advocates who have received, addressed, and reported a variety of issues raised by survivors, and they conducted some outreach to survivors for the program, but they have not established policies to manage the program. The National Defense Authorization Act for Fiscal Year 2014 required the designation of personnel to provide support to survivors of servicemembers who died while on active duty. Known as Gold Star Advocates, these personnel are available at any point in the casualty assistance process. If a survivor is not satisfied with the casualty assistance he or she has received, the survivor may contact a Gold Star Advocate. 
According to DOD officials, few issues have risen to the level of the program's attention because survivor issues are generally resolved by casualty assistance officers—who serve as liaison between the survivor and the service branch following the death of a servicemember, and assist with funeral arrangements and the application and receipt of benefits and entitlements—and long-term assistance programs, which are available to provide support throughout a survivor's lifetime. However, while steps have been taken to implement the program, neither DOD nor the Coast Guard has established policies for the program, including roles, responsibilities, and procedures. Additionally, although DOD and the Coast Guard have conducted some outreach for the program, they have not developed goals and metrics for outreach, without which some survivors may remain unaware of the casualty assistance available to them. While the program is available to serve survivors of all servicemembers who died while on active duty, its outreach methods are primarily directed toward survivors of servicemembers who have died since the program was implemented in 2014. DOD and its military services have developed a casualty assistance officer training program that addresses the duties required of casualty assistance officers that is consistent with some attributes of an effective training program, but DOD and its military services may not have the indicators needed to evaluate the effect of that training on casualty assistance program performance. For example, DOD administers a web-based survey to survivors regarding the quality of casualty assistance they received, but the survey has roughly a 10 percent response rate. With such a low response rate, DOD acknowledged that results should be interpreted cautiously. 
Without improved indicators for evaluating the effect of casualty assistance officer training, DOD may not have the information needed to improve the quality of casualty assistance provided to survivors. GAO recommends that DOD and the Coast Guard develop interim policies for the Gold Star Advocate Program and determine goals and metrics for its outreach; and that DOD develop additional indicators for better evaluating its training. DOD and DHS on behalf of the Coast Guard concurred with the recommendations.
We last provided you an overview of federal information security in September 1996. At that time, serious security weaknesses had been identified at 10 of the largest 15 federal agencies, and we concluded that poor information security was a widespread federal problem. We recommended that the Office of Management and Budget (OMB) play a more active role in overseeing agency practices, in part through its role as chair of the then newly established Chief Information Officers (CIO) Council. Subsequently, in February 1997, as more audit evidence became available, we designated information security as a new governmentwide high-risk area in a series of reports to the Congress. During 1996 and 1997, federal information security also was addressed by the President’s Commission on Critical Infrastructure Protection, which had been established to investigate our nation’s vulnerability to both “cyber” and physical threats. In its October 1997 report, Critical Foundations: Protecting America’s Infrastructures, the Commission described the potentially devastating implications of poor information security from a national perspective. The report also recognized that the federal government must “lead by example,” and included recommendations for improving government systems security. This report eventually led to issuance of Presidential Decision Directive 63 in May 1998, which I will discuss in conjunction with other governmentwide security improvement efforts later in my testimony. As hearings by this Committee have emphasized, risks to the security of our government’s computer systems are significant, and they are growing. The dramatic increase in computer interconnectivity and the popularity of the Internet, while facilitating access to information, are factors that also make it easier for individuals and groups with malicious intentions to intrude into inadequately protected systems and use such access to obtain sensitive information, commit fraud, or disrupt operations. 
Further, the number of individuals with computer skills is increasing, and intrusion, or “hacking,” techniques are readily available. Attacks on and misuse of federal computer and telecommunication resources are of increasing concern because these resources are virtually indispensable for carrying out critical operations and protecting sensitive data and assets. For example, weaknesses at the Department of the Treasury place over a trillion dollars of annual federal receipts and payments at risk of fraud and large amounts of sensitive taxpayer data at risk of inappropriate disclosure; weaknesses at the Health Care Financing Administration place billions of dollars of claim payments at risk of fraud and sensitive medical information at risk of disclosure; and weaknesses at the Department of Defense affect operations such as mobilizing reservists, paying soldiers, and managing supplies. Moreover, Defense’s warfighting capability is dependent on computer-based telecommunications networks and information systems. These and other examples of risks to federal operations and assets are detailed in our report Information Security: Serious Weaknesses Place Critical Federal Operations and Assets at Risk (GAO/AIMD-98-92), which the Committee is releasing today. Although it is not possible to eliminate these risks, understanding them and implementing an appropriate level of effective controls can reduce the risks significantly. Conversely, an environment of widespread control weaknesses may invite attacks that would otherwise be discouraged. As the importance of computer security has increased, so have the rigor and frequency of federal audits in this area. During the last 2 years, we and the agency inspectors general (IG) have evaluated computer-based controls on a wide variety of financial and nonfinancial systems supporting critical federal programs and operations. Many of these audits are now done annually. 
This growing body of audit evidence is providing a more complete and detailed picture of federal information security than was previously available. The most recent set of audit results that we evaluated—those published since March 1996—describe significant information security weaknesses in each of the 24 federal agencies covered by our analysis. These weaknesses cover a variety of areas, which we have grouped into six categories of general control weaknesses. The most widely reported weakness was poor control over access to sensitive data and systems. This area of control was evaluated at 23 of the 24 agencies, and weaknesses were identified at each of the 23. Access control weaknesses make systems vulnerable to damage and misuse by allowing individuals and groups to inappropriately modify, destroy, or disclose sensitive data or computer programs for purposes such as personal gain or sabotage. Access controls limit or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure. Access controls include physical protections, such as gates and guards, as well as logical controls, which are controls built into software that (1) require users to authenticate themselves through the use of secret passwords or other identifiers and (2) limit the files and other resources that an authenticated user can access and the actions that he or she can execute. In today’s increasingly interconnected computing environment, poor access controls can expose an agency’s information and operations to potentially devastating attacks from remote locations all over the world by individuals with minimal computer and telecommunications resources and expertise. 
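The two logical controls described above—(1) authenticating a user and (2) limiting what an authenticated user can access and do—can be illustrated with a minimal sketch. This is a hypothetical example for illustration only (the user names, file names, and permission sets are invented, not drawn from any agency system), not a depiction of how any audited system is implemented:

```python
import hashlib

# Control (1): store only a hash of each user's secret password,
# never the password itself. (Hypothetical account for illustration.)
USERS = {
    "analyst1": hashlib.sha256(b"s3cret").hexdigest(),
}

# Control (2): per-user access list mapping each resource to the set
# of actions the user is permitted to execute on it.
PERMISSIONS = {
    "analyst1": {"payroll.dat": {"read"}},  # read-only; no modify or delete
}

def authenticate(user, password):
    """Verify the user's identity against the stored password hash."""
    stored = USERS.get(user)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    return stored is not None and supplied == stored

def authorize(user, resource, action):
    """Check whether an authenticated user may take an action on a resource."""
    return action in PERMISSIONS.get(user, {}).get(resource, set())

if authenticate("analyst1", "s3cret"):
    print(authorize("analyst1", "payroll.dat", "read"))    # permitted
    print(authorize("analyst1", "payroll.dat", "modify"))  # denied
```

The common weaknesses reported in the audits map directly onto this sketch: shared accounts defeat the per-user trail that `authenticate` provides, and overly broad entries in the access list defeat the limits that `authorize` enforces.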
Common types of access control weaknesses included overly broad access privileges inappropriately provided to very large groups of users; access that was not appropriately authorized and documented; multiple users sharing the same accounts and passwords, making it impossible to trace specific transactions or modifications to an individual; inadequate monitoring of user activity to deter and identify inappropriate actions, investigate suspicious activity, and penalize perpetrators; improperly implemented access controls, resulting in unintended access or gaps in access control coverage; and access that was not promptly terminated or adjusted when users left an agency or when their responsibilities no longer required them to have access to certain files. The second most widely reported type of weakness pertained to service continuity. Service continuity controls ensure that when unexpected events occur, critical operations continue without undue interruption and critical and sensitive data are protected. In addition to protecting against natural disasters and accidental disruptions, such controls also protect against the growing threat of “cyber-terrorism,” where individuals or groups with malicious intent may attack an agency’s systems in order to severely disrupt critical operations. For this reason, an agency should have (1) procedures in place to protect information resources and minimize the risk of unplanned interruptions and (2) a plan to recover critical operations should interruptions occur. To determine whether recovery plans will work as intended, they should be tested periodically in disaster simulation exercises. Losing the capability to process, retrieve, and protect information maintained electronically can significantly affect an agency’s ability to accomplish its mission.
If controls are inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete financial or management information. Service continuity controls were evaluated for 20 of the agencies included in our analysis, and weaknesses were reported for all of these agencies. Common weaknesses included the following: Plans were incomplete because operations and supporting resources had not been fully analyzed to determine which were the most critical and would need to be resumed as soon as possible should a disruption occur. Disaster recovery plans were not fully tested to identify their weaknesses. One agency’s plan was based on an assumption that key personnel could be contacted within 10 minutes of the emergency, an assumption that had not been tested. The third most common type of weakness involved inadequate entitywide security program planning and management. Each organization needs a set of management procedures and an organizational framework for identifying and assessing risks, deciding what policies and controls are needed, periodically evaluating the effectiveness of these policies and controls, and acting to address any identified weaknesses. These are the fundamental activities that allow an organization to manage its information security risks cost effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Weaknesses were reported for all 17 of the agencies for which this area of control was evaluated. Many of these agencies had not developed security plans for major systems based on risk, had not formally documented security policies, and had not implemented a program for testing and evaluating the effectiveness of the controls they relied on. The fourth most commonly reported type of weakness was inadequate segregation of duties. 
Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that one individual cannot independently control all key aspects of a process or computer-related operation and thereby conduct unauthorized actions or gain unauthorized access to assets or records without detection. For example, one computer programmer should not be allowed to independently write, test, and approve program changes. Segregation of duties is an important internal control concept that applies to both computerized and manual processes. However, it is especially important in computerized environments, since an individual with overly broad access privileges can initiate and execute inappropriate actions, such as software changes or fraudulent transactions, more quickly and with greater impact than is generally possible in a nonautomated environment. Although segregation of duties alone will not ensure that only authorized activities occur, inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, that improper program changes could be implemented, and that computer resources could be damaged or destroyed. Controls to ensure appropriate segregation of duties consist mainly of documenting, communicating, and enforcing policies on group and individual responsibilities. Enforcement can be accomplished by a combination of physical and logical access controls and by effective supervisory review. Segregation of duties was evaluated at 17 of the 24 agencies. Weaknesses were identified at 16 of these agencies. Common problems involved computer programmers and operators who were authorized to perform a wide variety of duties, thus enabling them to independently modify, circumvent, and disable system security features. 
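A segregation-of-duties policy of this kind can be partly enforced mechanically by comparing each individual's assigned duties against combinations that no one person should hold alone. The sketch below is illustrative; the duty names and conflict sets are hypothetical examples, not an actual agency policy:

```python
# Duty combinations that no single individual should hold alone (hypothetical names).
CONFLICTING_SETS = [
    {"write_code", "test_code", "approve_change"},
    {"obligate_funds", "record_voucher", "record_check"},
]

def segregation_violations(assignments):
    """Return, per user, any conflicting duty set the user fully holds.

    `assignments` maps a user ID to the set of duties assigned to that user.
    """
    violations = {}
    for user, duties in assignments.items():
        hits = [conflict for conflict in CONFLICTING_SETS if conflict <= duties]
        if hits:
            violations[user] = hits
    return violations
```

A periodic run of such a check against the access-control configuration is one way supervisory review can be supplemented, though it cannot replace it.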
For example, at one agency, all users of the financial management system could independently perform all of the steps needed to initiate and complete a payment—obligate funds, record vouchers for payment, and record checks for payment—making it relatively easy to make a fraudulent payment. The fifth most commonly reported type of weakness pertained to software development and change controls. Such controls prevent unauthorized software programs or modifications to programs from being implemented. Key aspects are ensuring that (1) software changes are properly authorized by the managers responsible for the agency program or operations that the application supports, (2) new and modified software programs are tested and approved prior to their implementation, and (3) approved software programs are maintained in carefully controlled libraries to protect them from unauthorized changes and ensure that different versions are not misidentified. Such controls can prevent both errors in software programming as well as malicious efforts to insert unauthorized computer program code. Without adequate controls, incompletely tested or unapproved software can result in erroneous data processing that depending on the application, could lead to losses or faulty outcomes. In addition, individuals could surreptitiously modify software programs to include processing steps or features that could later be exploited for personal gain or sabotage. Weaknesses in software program change controls were identified for 14 of the 18 agencies where such controls were evaluated. One of the most common types of weakness in this area was undisciplined testing procedures that did not ensure that implemented software operated as intended. In addition, procedures did not ensure that emergency changes were subsequently tested and formally approved for continued use and that implementation of locally-developed unauthorized software programs was prevented or detected. 
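The three key aspects of change control just described — management authorization, testing and approval prior to implementation, and controlled promotion to a program library — can be expressed as a simple gate that a library-management step might apply before accepting a change. This is a hedged sketch; the record fields and the `ChangeRecord` type are hypothetical illustrations, not any agency's actual procedure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRecord:
    """Minimal, hypothetical record of a proposed software change."""
    change_id: str
    authorized_by: Optional[str]  # manager responsible for the affected program
    test_passed: bool             # tested prior to implementation
    approved_by: Optional[str]    # sign-off before promotion to the library

def may_promote(change: ChangeRecord) -> bool:
    """Admit a change to the controlled library only if all three controls are met."""
    return bool(change.authorized_by) and change.test_passed and bool(change.approved_by)
```

A change missing any one element — authorization, testing, or approval — is refused, which is precisely the discipline the audits found lacking.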
The sixth area pertained to operating system software controls. System software controls limit and monitor access to the powerful programs and sensitive files associated with the computer system’s operation. Generally, one set of system software is used to support and control a variety of applications that may run on the same computer hardware. System software helps control and coordinate the input, processing, output, and data storage associated with all of the applications that run on the system. Some system software can change data and programs without leaving an audit trail or can be used to modify or delete audit trails. Examples of system software include the operating system, system utilities, program library systems, file maintenance software, security software, data communications systems, and database management systems. Controls over access to and modification of system software are essential in providing reasonable assurance that operating system-based security controls are not compromised and that the system will not be impaired. If controls in this area are inadequate, unauthorized individuals might use system software to circumvent security controls to read, modify, or delete critical or sensitive information and programs. Also, authorized users of the system may gain unauthorized privileges to conduct unauthorized actions or to circumvent edits and other controls built into application programs. Such weaknesses seriously diminish the reliability of information produced by all of the applications supported by the computer system and increase the risk of fraud, sabotage, and inappropriate disclosures. Further, system software programmers are often more technically proficient than other data processing personnel and, thus, have a greater ability to perform unauthorized actions if controls in this area are weak.
A common type of system software control weakness reported was insufficiently restricted access that made it possible for knowledgeable individuals to disable or circumvent controls in a wide variety of ways. For example, at one facility, 88 individuals had the ability to implement programs not controlled by the security software, and 103 had the ability to access an unencrypted security file containing passwords for authorized users. Significant system software control weaknesses were reported at 9 of the 24 agencies. In the remaining 15 agencies, this area of control had not been fully evaluated. We are working with the IGs to ensure that it receives adequate coverage in future evaluations. I would now like to describe in greater detail weaknesses at the two agencies that you have chosen to feature today: the Department of Veterans Affairs and the Social Security Administration. The Department of Veterans Affairs (VA) relies on a vast array of computer systems and telecommunications networks to support its operations and store the sensitive information the department collects in carrying out its mission. In a report released today, we identify general computer control weaknesses that place critical VA operations, such as financial management, health care delivery, benefit payments, life insurance services, and home mortgage loan guarantees, at risk of misuse and disruption. In addition, sensitive information contained in VA’s systems, including financial transaction data and personal information on veteran medical records and benefit payments, is vulnerable to inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction—possibly occurring without detection. VA operates the largest health care delivery system in the United States and guarantees loans on about 20 percent of the homes in the country. In fiscal year 1997, VA spent over $17 billion on medical care and processed over 40 million benefit payments totaling over $20 billion. 
The department also provided insurance protection through more than 2.5 million policies that represented about $24 billion in coverage at the end of fiscal year 1997. In addition, the VA systems support the department’s centralized accounting and payroll functions. In fiscal year 1997, VA’s payroll was almost $11 billion, and the centralized accounting system generated over $7 billion in additional payments. In our report, we note significant problems related to the department’s control and oversight of access to its systems. VA did not adequately limit the access of authorized users or effectively manage user identifications (ID) and passwords. At one facility, the security software was implemented in a manner that provided all of the more than 13,000 users with the ability to access and change sensitive data files, read system audit information, and execute powerful system utilities. Such broad access authority increased the risk that users could circumvent the security software to alter payroll and other payment transactions. This weakness could also provide users the opportunity to access and disclose sensitive information on veteran medical records, such as diagnoses, procedures performed, inpatient admission and discharge data, or the purpose of outpatient visits, and home mortgage loans, including the purpose, loan balance, default status, foreclosure status, and amount delinquent. At two facilities, we found that system programmers had access to both system software and financial data. This type of access could allow the programmers to make unauthorized changes to benefit payment information without being detected. At four of the five facilities we visited, we identified user ID and password management control weaknesses that increased the risk of passwords being compromised to gain unauthorized access. 
For example, IDs for terminated or transferred employees were not being disabled, many passwords were common words that could be easily guessed, numerous staff were sharing passwords, and some user accounts did not have passwords. These types of weaknesses make the financial transaction data and personal information on veteran medical records and benefits stored on these systems vulnerable to misuse, improper disclosure, and destruction. We demonstrated these vulnerabilities by gaining unauthorized access to VA systems and obtaining information that could have been used to develop a strategy to alter or disclose sensitive patient information. We also found that the department had not adequately protected its systems from unauthorized access from remote locations or through the VA network. The risks created by these issues are serious because, in VA’s interconnected environment, the failure to control access to any system connected to the network also exposes other systems and applications on the network. While simulating an outside hacker, we gained unauthorized access to the VA network. Having obtained this access, we were able to identify other systems on the network, which made it much easier for outsiders with no knowledge of VA’s operations or infrastructure to penetrate the department’s computer resources. We used this information to access the log-on screen of another computer that contained financial and payroll data, veteran loan information, and sensitive information on veteran medical records for both inpatient and outpatient treatment. Such access to the VA network, when coupled with VA’s ineffective user ID and password management controls and available “hacker” tools, creates a significant risk that outside hackers could gain unauthorized access to this information.
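ID and password weaknesses like those found at the VA facilities — empty passwords, easily guessed common words, and accounts left active after an employee departs — are conditions an agency can scan for routinely. The sketch below is illustrative only; the account schema and the tiny word list are hypothetical:

```python
COMMON_WORDS = {"password", "welcome", "letmein"}  # tiny illustrative word list

def audit_accounts(accounts):
    """Flag weak-password and stale accounts.

    Each record is a dict with 'id', 'password', and 'active_employee'
    fields (a hypothetical schema for illustration).
    """
    findings = {"no_password": [], "guessable": [], "should_be_disabled": []}
    for acct in accounts:
        if not acct["password"]:
            findings["no_password"].append(acct["id"])
        elif acct["password"].lower() in COMMON_WORDS:
            findings["guessable"].append(acct["id"])
        if not acct["active_employee"]:
            findings["should_be_disabled"].append(acct["id"])
    return findings
```

Running such a scan on a recurring schedule, and acting on its findings, is one concrete form of the ongoing monitoring discussed later in this statement.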
At two facilities, we were able to demonstrate that network controls did not prevent unauthorized users with access to VA facilities or authorized users with malicious intent from gaining improper access to VA systems. We were able to gain access to both mainframe and network systems that could have allowed us to improperly modify payments related to VA’s loan guaranty program and alter sensitive veteran compensation, pension, and life insurance benefit information. We were also in a position to read and modify sensitive data. The risks created by these access control problems were also heightened significantly because VA was not adequately monitoring its systems for unusual or suspicious access activities. In addition, the department was not providing adequate physical security for its computer facilities, assigning duties in such a way as to properly segregate functions, controlling changes to powerful operating system software, or updating and testing disaster recovery plans to ensure that the department could maintain or regain critical functions in emergencies. Many similar access and other general computer control weaknesses had been reported in previous years, indicating that VA’s past actions have not been effective on a departmentwide basis. Weaknesses associated with restricting access to sensitive data and programs and monitoring access activity have been consistently reported in IG and other internal reports. A primary reason for VA’s continuing general computer control problems is that the department does not have a comprehensive computer security planning and management program in place to ensure that effective controls are established and maintained and that computer security receives adequate attention. An effective program would include guidance and procedures for assessing risks and mitigating controls, and monitoring and evaluating the effectiveness of established controls. 
However, VA had not clearly delineated security roles and responsibilities; performed regular, periodic assessments of risk; implemented security policies and procedures that addressed all aspects of VA’s interconnected environment; established an ongoing monitoring program to identify and investigate unauthorized, unusual, or suspicious access activity; or instituted a process to measure, test, and report on the continued effectiveness of computer system, network, and process controls. In our report to VA, we recommended that the Secretary direct the CIO to (1) work with the other VA CIOs to address all identified computer control weaknesses, (2) develop and implement a comprehensive departmentwide computer security planning and management program, (3) review and assess computer control weaknesses identified throughout the department and establish a process to ensure that these weaknesses are addressed, and (4) monitor and periodically report on the status of improvements to computer security throughout the department. In commenting on our report, VA agreed with these recommendations and stated that the department would immediately correct the identified computer control weaknesses and implement oversight mechanisms to ensure that these problems do not reoccur. VA also stated that the department was developing plans to correct deficiencies previously identified by the IG and by internal evaluations and that the VA CIO will report periodically on VA’s progress in correcting computer control weaknesses throughout the department. We have discussed these actions with VA officials, and, as part of our upcoming review, we will be examining completed actions and evaluating their effectiveness. The Social Security Administration (SSA) relies on extensive information processing resources to carry out its operations, which, for 1997, included payments that totaled approximately $390 billion to 50 million beneficiaries. 
This was almost 25 percent of the $1.6 trillion in that year’s federal expenditures. SSA also issues social security numbers and maintains earnings records and other personal information on virtually all U.S. citizens. Through its programs, SSA processes approximately 225 million wage and tax statements (W-2 forms) annually for approximately 138 million workers. Few federal agencies affect so many people. The public depends on SSA to protect trust fund revenues and assets from fraud and to protect sensitive information on individuals from inappropriate disclosure. In addition, many current beneficiaries rely on the uninterrupted flow of monthly payments to meet their basic needs. In November 1997, the SSA IG reported serious weaknesses in controls over information resources, including access, continuity of service, and software program changes that unnecessarily place these assets and operations at risk. These weaknesses demonstrate the need for SSA to do more to assure that adequate controls are provided for information collected, processed, transmitted, stored, or disseminated in general support systems or major applications. Internal control testing identified information protection-related weaknesses throughout SSA’s information systems environment. Affected areas included SSA’s distributed computer systems as well as its mainframe computers. These vulnerabilities exposed SSA and its computer systems to external and internal intrusion; subjected sensitive SSA information related to social security numbers, earnings, disabilities, and benefits to potential unauthorized access, modification, and/or disclosure; and increased the risks of fraud, waste, and abuse. Access control and other weaknesses also increased the risks of introducing errors or irregularities into data processing operations.
For example, auditors identified numerous employee user accounts on SSA networks, including dial-in modems, that were either not password protected or were protected by easily guessed passwords. These weaknesses increased the risk that unauthorized outsiders could access, modify, and delete data; create, modify, and delete users; and disrupt services on portions of SSA’s network. In addition, auditors identified network control weaknesses that could result in accidental or intentional alteration of birth and death records, as well as unauthorized disclosure of personal data and social security numbers. These weaknesses were made worse because security awareness among employees was not consistent at SSA. As a result, SSA was susceptible to security penetration techniques, such as social engineering, whereby users disclose sensitive information in response to seemingly legitimate requests from strangers either over the phone or in person. The auditors reported that during testing, they were able to secure enough information through social engineering to allow access to SSA’s network. Further, by applying intrusion techniques in penetration tests, auditors gained access to various SSA systems that would have allowed them to view user data, add and delete users, modify network configurations, and disrupt service to users. By gaining access through such tests, auditors also were able to execute software tools that resulted in their gaining access to SSA electronic mailboxes, public mailing lists, and bulletin boards. This access would have provided an intruder the ability to read, send, or change e-mail exchanged among SSA users, including messages from or to the Commissioner. In addition to access control weaknesses and inadequate user awareness, employee duties at SSA were not appropriately segregated to reduce the risk that an individual employee could introduce and execute unauthorized transactions without detection. 
As a result, certain employees had the ability to independently carry out actions such as initiating and adjudicating claims or moving and reinstating earnings data. This weakness was exacerbated because certain mitigating monitoring or detective controls could not be relied on. For example, SSA has developed a system that allows supervisors to review sensitive or potentially fraudulent activity. However, key transactions or combinations of transactions are not being reviewed or followed up promptly and certain audit trail features have not been activated. Weaknesses such as those I have just described increase the risk that a knowledgeable individual or group could fraudulently obtain payments by creating fictitious beneficiaries or increasing payment amounts. Similarly, such individuals could secretly obtain sensitive information and sell or otherwise use it for personal gain. The recent growth in “identity theft,” where personal information is stolen and used fraudulently by impersonators for purposes such as obtaining and using credit cards, has created a market for such information. According to the SSA IG’s September 30, 1997, report to the Congress (included in the SSA’s fiscal year 1997 Accountability Report), 29 criminal convictions involving SSA employees were obtained during fiscal year 1997, most of which involved creating fictitious identities, fraudulently selling SSA cards, misappropriating refunds, or abusing access to confidential information. The risk of abuse by SSA employees is of special concern because, except for a very few individuals, SSA does not restrict access to view sensitive data based on a need-to-know basis. As a result, a large number of SSA employees can browse enumeration, earnings, and claims records for many other individuals, including other SSA employees, without detection. SSA provides this broad access because it believes that doing so facilitates its employees’ ability to carry out SSA’s mission. 
An underlying factor that contributes to SSA’s information security weaknesses is inadequate entitywide security program planning and management. Although SSA has an entitywide security program in place, it does not sufficiently address all areas of security, including dial-in access, telecommunications, certain major mainframe system applications, and distributed systems outside the mainframe environment. A lack of such an entitywide program impairs each group’s ability to develop a security structure for its responsible area and makes it difficult for SSA management to monitor agency performance in this area. In two separate letters to SSA management, the IG and its contractor made recommendations to address the weaknesses reported in November 1997. SSA has agreed with the majority of the recommendations and is developing related corrective action plans. Substantively improving federal information security will require efforts at both the individual agency level and at the governmentwide level. Agency managers are primarily responsible for securing the information resources that support their critical operations. However, central oversight also is important to monitor agency performance and address crosscutting issues that affect multiple agencies. Over the last 2 years, a number of efforts have been initiated, but additional actions are still needed. First, it is important that agency managers implement comprehensive programs for identifying and managing their security risks in addition to correcting specific reported weaknesses. Over the last 2 years, our reports and IG reports have included scores of recommendations to individual agencies, and agencies have either implemented or planned actions to address most of the specific weaknesses. However, there has been a tendency to react to individual audit findings as they were reported, with little ongoing attention to the systemic causes of control weaknesses. 
In short, agencies need to move beyond addressing individual audit findings and supplement these efforts with a framework for proactively managing the information security risks associated with their operations. Such a framework includes determining which risks are significant, assigning responsibility for taking steps to reduce risks, and ensuring that these steps are implemented effectively and remain effective over time. Without a management framework for carrying out these activities, information security risks to critical operations may be poorly understood; responsibilities may be unclear and improperly implemented; and policies and controls may be inadequate, ineffective, or inconsistently applied. In late 1996, at the Committee’s request, we undertook an effort to identify potential solutions to this problem, including examples that could supplement existing guidance to agencies. To do this, we studied the security management practices of eight nonfederal organizations known for their superior security programs. These organizations included two financial services corporations, a regional electric utility, a state university, a retailer, a state agency, a computer vendor, and an equipment manufacturer. We found that these organizations managed their information security risks through a cycle of risk management activities, and we identified 16 specific practices that supported these risk management principles. These practices are outlined in an executive guide titled Information Security Management: Learning From Leading Organizations (GAO/AIMD-98-68), which was released by the Committee in May 1998 and endorsed by the CIO Council. Upon publication, the guide was distributed to all major agency heads, CIOs, and IGs. The guide describes a framework for managing information security risks through an ongoing cycle of activities coordinated by a central focal point. 
Such a framework can help ensure that existing controls are effective and that new, more advanced control techniques are prudently and effectively selected and implemented as they become available. The risk management cycle and the 16 practices supporting this cycle of activity are depicted in the following figures. In addition to effective security program planning and management at individual agencies, governmentwide leadership, coordination, and oversight are important to ensure that federal executives understand the risks to their operations, monitor agency performance in mitigating these risks, ensure implementation of needed improvements, and facilitate actions to resolve issues affecting multiple agencies. To help achieve this, the Paperwork Reduction Act of 1980 made OMB responsible for developing information security policies and overseeing related agency practices. In 1996, we reported that OMB’s oversight consisted largely of reviewing selected agency system-related projects and participating in various federal task forces and working groups. While these activities are important, we recommended that OMB play a more active role in overseeing agency performance in the area of information security. Since then, OMB’s efforts have been supplemented by those of the CIO Council. In late 1997, the Council, under OMB’s leadership, designated information security as one of six priority areas and established a Security Committee, an action that we had recommended in 1996. The Security Committee, in turn, has established relationships with other federal entities involved in security and developed a very preliminary plan. While the plan does not yet comprehensively address the various issues affecting federal information security or provide a long-range strategy for improvement, it does cover important areas by specifying three general objectives: promote awareness and training, identify best practices, and address technology and resource issues. 
During the first half of 1998, the committee sponsored a security awareness seminar for federal agency officials and developed plans for improving agency access to incident response services. More recently, in May 1998, Presidential Decision Directive (PDD) 63 was issued in response to recommendations made by the President’s Commission on Critical Infrastructure Protection in October 1997. PDD 63 established entities within the National Security Council, the Department of Commerce, and the Federal Bureau of Investigation to address critical infrastructure protection, including federal agency information infrastructures. Specifically, the directive states that “the Federal Government shall serve as a model to the private sector on how infrastructure assurance is best achieved” and that federal department and agency CIOs shall be responsible for information assurance. The directive requires each department and agency to develop a plan within 180 days from the issuance of the directive in May 1998 for protecting its own critical infrastructure, including its cyber-based systems. These plans are then to be subject to an expert review process. Other key provisions related to the security of federal information systems include a review of existing federal, state, and local bodies charged with enhanced collection and analysis of information on the foreign information warfare threat to our critical infrastructures; establishment of a National Infrastructure Protection Center within the Federal Bureau of Investigation to facilitate and coordinate the federal government’s investigation and response to attacks on its critical infrastructures; assessments of U.S. government systems’ susceptibility to interception and exploitation; and incorporation of agency infrastructure assurance functions in agency strategic planning and performance measurement frameworks. We plan to follow up on these activities as more specific information becomes available.
The CIO Council’s efforts and the issuance of PDD 63 indicate that senior federal officials are increasingly concerned about information security risks and are acting on these concerns. Improvements are needed both at the individual agency level and in central oversight, and coordinated actions throughout the federal community will be needed to substantively improve federal information security. What needs to emerge is a coordinated and comprehensive strategy that incorporates the worthwhile efforts already underway and takes advantage of the expanded body of evidence that has become available in recent years. The objectives of such a strategy should be to encourage agency improvement efforts and measure their effectiveness through an appropriate level of oversight. This will require a more structured approach for (1) ensuring that risks are fully understood, (2) promoting use of the most cost-effective control techniques, (3) testing and evaluating the effectiveness of agency programs, and (4) acting to address identified deficiencies. This approach needs to be applied at individual departments and agencies and in a coordinated fashion across government. In our report on governmentwide information security that is being released today, we recommended that the Director of OMB and the Assistant to the President for National Security Affairs develop such a strategy. 
As part of our recommendation, we stated that such a strategy should ensure that executive agencies are carrying out the responsibilities outlined in laws and regulations requiring them to protect the security of their information resources; clearly delineate the roles of the various federal organizations with responsibilities related to information security; identify and rank the most significant information security issues facing federal agencies; promote information security risk awareness among senior agency officials whose critical operations rely on automated systems; identify and promote proven security tools, techniques, and management best practices; ensure the adequacy of information technology workforce skills; ensure that the security of both financial and nonfinancial systems is adequately evaluated on a regular basis; include long-term goals and objectives, including time frames, priorities, and annual performance goals; and provide for periodically evaluating agency performance from a governmentwide perspective and acting to address shortfalls. In commenting on a draft of our report, OMB’s Acting Deputy Director for Management said that a plan is currently being developed by OMB and the CIO Council, working with the National Security Council. The comments stated that the plan is to develop and promote a process by which government agencies can (1) identify and assess their existing security posture, (2) implement security best practices, and (3) set in motion a process of continued maintenance. The comments also described plans for a CIO Council-sponsored interagency assist team that will review agency security programs. As of September 17, a plan had not yet been finalized and, therefore, was not available for our review, according to an OMB official involved in the plan’s development. We intend to review the plan as soon as it is available. 
Although information security, like other types of safeguards and controls, is an ongoing concern, it is especially important, now and in the coming 18 months, as we approach and deal with the computer problems associated with the Year 2000 computing crisis. The Year 2000 crisis presents a number of security problems with which agencies must be prepared to contend. For example, it is essential that agencies improve the effectiveness of controls over their software development and change process as they implement the modifications needed to make their systems Year 2000 compliant. Many agencies have significant weaknesses in this area, and most are under severe time constraints to make needed software changes. As a result, there is a danger that already weak controls will be further diminished if agencies bypass or truncate them in an effort to speed the software modification process. This increases the risk that erroneous or malicious code will be implemented or that systems that do not adequately support agency needs will be rushed into use. Also, agencies should strive to improve their abilities to detect and respond to anomalies in system operations that may indicate unauthorized intrusions, sabotage, misuse, or damage that could affect critical operations and assets. As illustrated by VA and SSA, many agencies are not taking full advantage of the system and network monitoring tools that they already have and many have not developed reliable procedures for responding to problems once they are identified. Without such incident detection and response capabilities, agencies may not be able to readily distinguish between malicious attacks and system-induced problems, such as those stemming from Year 2000 noncompliance, and respond appropriately. 
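The incident detection and response capability described above can be illustrated with a deliberately simple sketch. Nothing in this example is drawn from the testimony: the event format, the threshold, and the function name are all hypothetical, and real monitoring tools are considerably more sophisticated. The point is only that distinguishing anomalies begins with systematically counting and flagging unusual activity rather than reviewing logs ad hoc:

```python
# Hypothetical sketch of a basic intrusion-detection check: flag accounts
# whose failed-logon count exceeds a threshold, for analyst review.
# Event format and threshold are illustrative assumptions only.
from collections import Counter

def flag_suspicious(events, threshold=3):
    """events: iterable of (account, outcome) pairs; returns the sorted
    list of accounts with more than `threshold` failed logons."""
    failures = Counter(acct for acct, outcome in events if outcome == "failure")
    return sorted(a for a, n in failures.items() if n > threshold)

# Hypothetical audit-log excerpt
events = [
    ("alice", "success"),
    ("mallory", "failure"), ("mallory", "failure"),
    ("mallory", "failure"), ("mallory", "failure"),
    ("bob", "failure"),
]
print(flag_suspicious(events))  # prints ['mallory']
```

A real capability would, of course, correlate many event types and feed an established response procedure; the sketch only shows the counting-and-thresholding idea.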
The Year 2000 crisis is the most dramatic example yet of why we need to protect critical computer systems because it illustrates the government’s widespread dependence on these systems and the vulnerability to their disruption. However, the threat of disruption will not end with the advent of the new millennium. There is a longer-term danger of attack from malicious individuals or groups, and it is important that our government design long-term solutions to this and other security risks. Mr. Chairman, this concludes our statement. We would be happy to respond to any questions you or other members of the Committee may have.
GAO discussed the state of information security in the federal government, focusing on the Department of Veterans Affairs' (VA) and the Social Security Administration's (SSA) efforts to develop and maintain an effective security management program. GAO noted that: (1) as the importance of computer security has increased, so have the rigor and frequency of federal audits in this area; (2) during the last 2 years, GAO and the agency inspectors general (IG) have evaluated computer-based controls on a wide variety of financial and nonfinancial systems supporting critical federal programs and operations; (3) the most recent set of audit results described significant information security weakness in each of the 24 federal agencies covered by GAO's analysis; (4) these weaknesses cover a variety of areas, which GAO has grouped into six categories of general control weaknesses; (5) in GAO's report, it noted significant problems related to VA's control and oversight of access to its systems; (6) VA did not adequately limit the access of authorized users or effectively manage user identifications and passwords; (7) GAO also found that the department had not adequately protected its systems from unauthorized access from remote locations or through the VA network; (8) a primary reason for VA's continuing general computer control problems is that the department does not have a comprehensive computer security planning and management program in place to ensure that effective controls are established and maintained and that computer security receives adequate attention; (9) the public depends on SSA to protect trust fund revenues and assets from fraud and to protect sensitive information on individuals from inappropriate disclosure; (10) in addition, many current beneficiaries rely on the uninterrupted flow of monthly payments to meet their basic needs; in November 1997, the SSA IG reported serious weaknesses in controls over information resources, including access, continuity of 
service, and software program changes that unnecessarily place these assets and operations at risk; (11) internal control testing identified information protection-related weaknesses throughout SSA's information systems environment; (12) an underlying factor that contributes to SSA's information security weaknesses is inadequate entitywide security program planning and management; (13) substantively improving federal information security will require efforts at both the individual agency level and at the governmentwide level; and (14) over the last 2 years, a number of efforts have been initiated, but additional actions are still needed.
Quality assurance has a simple goal: to ensure that products perform the way they are supposed to. For many years, the traditional way DOD and commercial companies achieved quality was through systematic final inspection. But now, intense competition has led some U.S. companies to adopt total quality management practices that are prevention based. Consequently, quality assurance has taken on a broader meaning, to include virtually all key design and engineering elements during development, the transition to production, and production itself. There is general agreement across government and industry that DOD’s inspection-based quality assurance practices have added unnecessary costs to acquisitions because they require DOD and contractor personnel and resources for oversight that are separate from the production process. Until recently, DOD’s quality requirements were based on MIL-Q-9858A, a military standard established in 1963. This standard requires a contractor to establish a quality program with documented procedures and processes that are subject to approval by government representatives throughout all areas of contract performance. Quality is theoretically ensured by requiring both the contractor and the government to monitor and inspect products. In June 1994, the Secretary of Defense announced that commercial quality standards such as ISO-9000 should replace MIL-Q-9858A where it makes sense. DOD will not require MIL-Q-9858A on contracts awarded after October 1996. Until then, DOD will be approving the quality standards on a case-by-case basis at the request of industry. The Defense Contract Management Command (DCMC) has primary responsibility for setting and overseeing quality assurance standards within DOD. As of September 1995, DCMC had about 5,000 quality specialists located in contractor facilities or in regional offices across the country. DOD faces a formidable challenge in changing its quality assurance culture. 
This culture has been characterized by a narrow approach to quality assurance, in both DOD and the defense industry, which has led to a focus on detecting defects and recording corrective actions. In the past, DOD’s practices have reflected a narrow approach to quality assurance. Responsibility for meeting military quality standard MIL-Q-9858A has been assigned to one organization—DCMC, which is outside much of the acquisition decision-making process. DCMC has quality specialists stationed in contractor plants across the country to inspect material and end items. These specialists have relied heavily on quality system documentation, various reviews and audits of the quality process, and corrective action plans submitted by contractor quality assurance personnel to ensure quality on weapon systems. Based on information from studies performed for the Secretary of Defense, we estimate that the extra cost associated with military-unique quality assurance requirements for DOD acquisitions is $1.5 billion annually. The studies assessed the cost impact of DOD regulations on contractors and DOD and found that the contractors’ cost to implement MIL-Q-9858A and to comply with DOD requirements represented at least 1.7 percent of DOD’s acquisition cost, or about $1 billion. Most of this cost occurred as the result of contractor quality assurance and operations personnel devoting time to such activities as preparing quality plans and procedures, conducting and documenting inspections, documenting deviations, proposing corrective actions to government concerns, and supporting government audits and reviews. Further, DOD’s own costs for quality assurance oversight were about $687 million annually. This estimate does not include what DOD has spent to correct the manufacturing and quality problems that have contributed to historical cost and schedule overruns on weapon system production programs. 
These past problems occurred partly because manufacturing processes were not considered during the design phase of the program, and inspection, rather than process control, was the predominant method of ensuring quality. Like many commercial manufacturers, DOD is now being expected to cut costs and do more with less. Commercial manufacturers have adopted far-reaching quality strategies as one way to become more competitive, efficient, and economical. Even though DOD faces the same challenge, particularly since it can no longer expect the budget increases it had during the 1980s, it has yet to effectively achieve the same level of efficiency from its manufacturers. A major difference between DOD and commercial manufacturers is that DOD has, until recently, maintained its practice of inspecting rather than designing quality into a product while world-class companies have broadened their definition of quality to include design. DOD is attempting to change its approach to quality by including design in the definition of quality. In translating this approach into practice, DOD will have to overcome a history in which many weapon system acquisitions have encountered significant cost and schedule overruns because of design and manufacturing problems. These problems usually resulted from acquisition strategies that began production before the design was complete and key manufacturing processes were in place and tested for capability. Some examples we have reported on follow: In 1985, we reported on four major weapon system acquisitions—the Army’s Copperhead projectile and Blackhawk helicopter, and the Navy’s High Speed Anti-Radiation Missile and Tomahawk missile—that had encountered substantial problems in early production, resulting in cost overruns and schedule delays. At least part of these cost and schedule overruns were caused by several untried manufacturing processes that were not studied to see if they could produce quality components. 
Symptoms of these problems were high scrap rates, parts shortages, and changes to engineering drawings. In 1988 and 1990, we reported that the B-2 bomber program’s immature design led to manufacturing problems in production that, in turn, led to schedule delays and cost increases. The contractors initiated thousands of changes to engineering drawings, some of which required new parts and tooling. Because the bomber was not ready for production, actual labor hours exceeded planned labor hours on the first three aircraft by 84, 86, and 94 percent, respectively, and the three major contractors estimated it would take over 10 million quality assurance labor hours to develop and produce 21 production aircraft, not including hours estimated for government oversight. These manufacturing problems delayed the first flight of the B-2 by 19 months and contributed to significant cost increases. In 1991, we reported that 12 tactical missile programs had cost and schedule overruns, partly because program offices had not adequately considered the risk associated with the weapons’ design, development, and production. One missile, the Advanced Medium Range Air-to-Air Missile, encountered problems in transitioning to production partly because certain electronic components proved too complex and had to be redesigned. This contributed to cost increases of 285 percent and a 5-year delay to the missile’s operational capability date. In 1994, we reported that quality problems with the C-17 program increased unit cost to production aircraft and delayed scheduled deliveries. Labor hours for rework and repair of production items made up 40 percent of the labor on the first five production C-17s, and scrap, rework, and repair cost about $44 million in 1993. In addition to these costs, some production aircraft were delivered to the Air Force with unfinished work and known deficiencies that had to be corrected after government acceptance. 
The Defense Science Board reviewed the C-17 program in 1993 and concluded that the production schedule could not be maintained unless the contractor changed its manufacturing and quality assurance processes. An Industry Review Panel on Specifications and Standards found that the C-17 production process had many quality problems that were adding cost to the program. It found that the production aircraft were being produced in a “development environment,” with unqualified processes, and with a reactive rather than proactive quality management system that did not analyze the causes of quality problems. It also found that a high number of engineering changes were making production less efficient. The review panel concluded that the program had an “inspection mentality” and that only 10 percent of quality cost was being spent on prevention. These examples represent a persistent problem in DOD’s major acquisitions. All of the programs began producing weapon systems before their key characteristics were fully designed and the key processes for building the system were understood. Some started production before doing a significant amount of testing to determine if these systems would perform their required mission. The consequences of beginning production before completing testing have repeatedly included procurement of substantial inventories of unsatisfactory weapons that require costly modifications to achieve satisfactory performance and, in some cases, deployment of substandard systems to combat forces. These examples do not necessarily condemn DOD to a repetition of the past or prejudge the potential success of DOD’s current efforts to better manage quality. However, they do underscore the challenge DOD faces in changing quality assurance practices. The systems in the examples were developed and produced using inspection-oriented quality assurance practices and significant DOD oversight. 
Yet, in each case, we reported that beginning manufacturing before the design was understood and manufacturing processes were controlled led to quality problems, cost increases, and schedule delays. The narrower interpretation of quality assurance that prevailed at the time may not have included the various design and engineering elements that played a part in the weapons’ eventual problems in production. However, the broader view of quality assurance practiced by leading commercial manufacturers includes these elements and thus covers a much larger range of responsibilities. Many companies have effectively evolved from an inspection-oriented quality assurance process, and the culture and infrastructure to support it, to a standard in which quality is an integral part of each stage of the design and production process. The difference between how a manufacturer operates on a military contract versus how it operates on a commercial contract is often startling. For example, we visited a company that manufactured military and commercial products with similar specifications and uses; both manufacturing processes were in the same building. On one side of the hallway, the commercial process used automation and process control throughout the production process to continually reduce nonvalue-added inspections. On the other side of the hallway, the military process included two large test facilities at the end of the production process, used at least four test stations, and added about 10 days to the process. Company officials told us that the military process continued to have 100-percent end-item inspection, even though the quality of these products was not in question. In response to increased competition in the 1980s, companies had to dramatically improve quality while reducing cost. They accomplished this by shifting paradigms. Rather than focusing on identifying and correcting problems, they began focusing on preventing them. 
Quality assurance changed from being a postmanufacturing step, done at the end of each process, to being part of the process itself. This was a significant culture change for these companies. Figure 1 illustrates the shift in paradigms from inspection-oriented to prevention-oriented quality practices. Traditional quality assurance techniques relied upon many after-the-fact inspections, increasing costs in time and money. To remain profitable, manufacturers switched from detection to prevention-based quality strategies. These strategies teamed suppliers, manufacturing staff, and engineers to design quality into the product and to identify and control key characteristics. Prevention-based process control replaced end-item inspections. To reduce their costs and gain an edge on the competition, companies have found that they have had to not only establish a basic commercial quality system, such as ISO-9000, as their baseline, but also consistently exceed it. Most of the companies we visited had obtained ISO-9000 certification, a basic quality management system that commercial customers are beginning to require. This standard ensures that a manufacturer has a well-documented commercial quality system. In addition, they all had developed or adopted advanced quality concepts that brought the concept of quality to the design phase of a product’s life. They began by using two advanced concepts called design for manufacturing and process control techniques. After a company had successfully incorporated each technique, it began to eliminate inspections—and the cost associated with them—and significantly reduced the amount of defects in its products. A third ingredient for success involved developing relationships with key suppliers, which ensured that parts and subcomponents were appropriate for a product’s design and arrived for production at consistently high quality and expected cost. 
Companies we visited had dramatic reductions in product defects—ranging from 34 to 90 percent—resulting from these techniques. Appendixes II and V contain detailed examples. Design for manufacturing represents a culture change that involves a whole new mindset. Rather than continuing the old practice of engineers designing a product in isolation and then handing it off to the manufacturing process, design for manufacturing involves all stakeholders in the process. The stakeholders form cross-functional teams that include representatives from the customer, marketing, research and development, engineering, manufacturing, key suppliers, quality assurance, finance, and customer support. Under the old practice, a product would require many design changes in full-scale production, creating additional defects, rework, and scrapped material. But now, the teams identify the requirements for a product’s performance and ensure that manufacturing processes are in place to meet those requirements within specified cost targets before production begins. Their objective is to build quality in up front rather than fix problems during some stage of the manufacturing process or discover there is no profitability. These cross-functional teams conduct phased development processes to ensure that a product’s design is producible and profitable. Continuous communication between the engineers who design the product and the people responsible for manufacturing it is a key to the process. Using design for manufacturing techniques, these companies review projects to prevent a potentially unprofitable design from entering full-scale production. Teams use modeling and prototyping to determine the capability of existing manufacturing processes. As the development process continues, team members must make trade-offs between a product’s performance and its cost to meet strategic targets. 
For example, Texas Instruments’ semiconductor facility reviews a product’s potential profitability at each milestone of a five-phased development process. If any of these reviews indicate that cost targets cannot be met, the product’s development can be terminated. This eliminates additional investment in full-scale production tooling and facilities. Likewise, Varian Associates’ managers are given cost and schedule targets during the design phase, and special “out-of-bounds” reviews are held for behind-schedule or over-budgeted items throughout the development process. Figure 2 provides a conceptual model of the design for manufacturing process based on what we observed at these two companies. Process control means controlling the production process by checking the quality while the work is being done. Without it, companies must rely on final inspections of completed lots. Leading companies rely on “total process control,” which demands that every process is controlled by checking the quality during production. Rather than employing a large staff of inspectors, however, they entrust the production workers to check quality themselves. The companies we visited had implemented similar systems. In addition to a basic quality system, such as ISO-9000, and an emphasis on including design as an element of the quality process, we found they (1) diffused responsibility for quality across the production line and (2) trained employees to use analytical diagnostic tools such as statistical process control, process mapping, or continuous flow manufacturing to maintain predictable processes. The companies had developed or adopted advanced quality concepts that went beyond the basic ISO-9000 standard and that emphasized finding the cause of quality problems by gathering data and then eliminating those causes from the manufacturing process. 
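As a minimal sketch of one diagnostic tool named above, statistical process control can be reduced to computing 3-sigma control limits from a stable baseline run and then flagging measurements that fall outside those limits for root-cause investigation. The data, names, and the choice of a simple individuals chart here are illustrative assumptions, not details from the report:

```python
# Illustrative statistical process control (SPC) sketch: derive 3-sigma
# control limits from baseline measurements, then flag out-of-control
# points in production data. All numbers are hypothetical.
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, center, upper) 3-sigma control limits
    for a list of individual measurements."""
    center = mean(samples)
    sigma = stdev(samples)
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(samples, lower, upper):
    """Return (index, value) pairs for measurements outside the limits,
    signaling a process shift that needs root-cause analysis."""
    return [(i, x) for i, x in enumerate(samples) if x < lower or x > upper]

# Baseline measurements from a stable process (hypothetical data)
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, center, ucl = control_limits(baseline)

# New production measurements, one of which drifts out of control
production = [10.0, 10.1, 9.9, 11.5, 10.0]
print(out_of_control(production, lcl, ucl))  # prints [(3, 11.5)]
```

The payoff described in the report follows from this kind of monitoring: once the worker running the process can see a shift as it happens, the defect is prevented at its source rather than caught by an inspector at the end of the line.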
Once root causes are discovered and processes are controlled, the likelihood of consistently high-quality products is significantly increased and the need for inspection at the end of the production line is reduced. For example, Cherry Electronics uses QS-9000, an advanced quality guide developed jointly by General Motors, Ford, and Chrysler as a required supplement to ISO-9000 for their suppliers. It includes comprehensive instructions for implementing design for manufacturing in new products and using process controls to reduce defects in final products. Cherry is undergoing certification for QS-9000 now and plans to require its own suppliers to become certified in QS-9000 as well. Also, Motorola and Varian Associates used the Malcolm Baldrige National Quality Award criteria to create advanced quality systems for themselves and their suppliers. Representatives from both companies credit the use of these advanced quality guidelines for reducing defects while eliminating the need for end-item inspection. For example, Varian’s Nuclear Magnetic Resonance Instruments business unit heavily inspected both its product and its suppliers’ components coming into final assembly until the mid-1980s. At that time, it instituted process control and gradually reduced the number of its inspectors by 92 percent, from 26 to 2. Similarly, John Deere eliminated its quality assurance department by dividing the production line into “focused factories” and giving the inspection responsibility directly to each product manufacturing team. Material from suppliers typically represents from 60 to 85 percent of the final product cost at the companies we visited. Because of this, companies focused on improving supplier quality so they could eventually reduce material defects and inspections. They began by reducing their number of suppliers. In addition, they created qualification and certification procedures that relied on periodic evaluations of supplier quality systems. 
Finally, they developed long-term relationships with valued suppliers, increasing communication and creating a partnership, when possible. These practices helped companies reduce suppliers’ defects by as much as 90 percent and inspections of incoming material by as much as 76 percent. Appendix III describes the results of our visits to specific companies that had implemented some of these supplier quality programs. Commercial companies significantly reduced the number of suppliers—by 50 to 85 percent—to manage the cost and quality of incoming material more closely. Generally, a reduction in the supplier base eliminates poorer performers, increases the importance of top performers, and allows closer cooperation with suppliers in continuous improvement practices and in new product development. To help reduce the number of suppliers, companies generally have a qualification and certification process. The qualification process typically begins with an evaluation of each supplier’s current quality system and is often carried out by small cross-functional teams. For example, Varian’s Oncology Systems qualifies all new suppliers through an evaluation process that includes an assessment of suppliers’ defect rates, delivery performance, and a review of their internal quality systems. Once a company determines that a supplier has processes in place to guarantee continued quality, it eliminates inspections and depends on periodic supplier quality information and reviews, generally not more than once annually. For example, Motorola uses its Quality System Reviews to assess the suppliers’ quality system every 2 years and score the supplier on such items as process controls and its ability to develop new products. The reviews are accomplished by a team of four or five quality management experts and take 2 to 3 days. 
The review guidelines use a scoring approach patterned after the Malcolm Baldrige National Quality Award criteria and include audit guidelines to determine how well a supplier controls quality in new product development and deploys good process controls. Both Motorola and Varian provide consulting teams and optional training courses to certified suppliers who either need or request assistance. Most of the companies had strategies to enter into long-term relationships with world-class, high-quality suppliers, resulting in reduced inspection, planning, rework, and contracting costs. As suppliers progress in meeting cost and quality goals with companies, they are included on design teams, given schedule forecasts for future orders, and included in the company’s quality training program. In addition, longer term contracts typically help to reduce costs further and encourage valued suppliers to deliver material more efficiently. When companies introduce new products, valued suppliers are given the key characteristics of the part they are being asked to supply and then use this information to make sure their manufacturing processes are capable of making the component with consistently high quality. Beginning in the early 1990s, DCMC implemented Process Oriented Contract Administration Services (PROCAS) as a method of moving away from inspection-oriented quality assurance practices and toward process control. The intent of PROCAS is to team with contractors to identify, analyze, and manage production processes and reduce the need for oversight and inspection where it makes sense. In recognition that more advanced quality concepts are needed, a group of DOD officials and defense company representatives have joined in an initiative that, if implemented, could eliminate the costs of redundant quality assurance processes. 
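The Baldrige-patterned scoring of supplier quality systems described above can be sketched, purely as an illustration, as a weighted category score compared against a certification threshold. The category names, weights, and 90-point threshold below are invented for this example; they are not taken from Motorola, Varian, or the actual Baldrige criteria:

```python
# Hypothetical weighted supplier-review score, loosely patterned on the
# idea of scoring categories such as process controls and new product
# development. All categories, weights, and the threshold are assumptions.
WEIGHTS = {
    "process_controls": 0.4,
    "new_product_development": 0.3,
    "delivery_performance": 0.2,
    "continuous_improvement": 0.1,
}

def review_score(ratings):
    """Combine 0-100 category ratings into a weighted overall score."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

def certified(ratings, threshold=90.0):
    """A supplier keeps certification only if its weighted score
    meets the (assumed) threshold."""
    return review_score(ratings) >= threshold

# Example review of one supplier (hypothetical ratings)
ratings = {
    "process_controls": 95,
    "new_product_development": 90,
    "delivery_performance": 85,
    "continuous_improvement": 100,
}
print(round(review_score(ratings), 1), certified(ratings))
```

Weighting process controls most heavily mirrors the emphasis, reported above, on suppliers demonstrating capable processes rather than passing incoming inspection.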
According to the plan created by the Government and Industry Quality Liaison Panel in April 1995, DOD would allow a contractor to use the same quality management system—based on process controls—for its military contracts that it uses on its commercial contracts. Related to the panel's goals, DOD's Single Process Initiative, introduced in December 1995, also supports moving from multiple government-unique management systems to a single management system common to both commercial and military contracts. In addition, DOD has established a policy for using integrated product and process development concepts to integrate all acquisition activities, from product concept through production and field support. How well this policy will support the panel's goals remains to be seen.

The Liaison Panel's plan has three overall goals. The first goal is to develop a single quality system that can meet commercial or military quality requirements. It envisions a multitier quality management framework based on criteria similar to the commercial ISO-9000 standard that would be recognized by all government and commercial entities. It begins with a basic quality system that must use process controls from design to delivery and would be reviewed on a regular basis by a single, unified government audit team. As a contractor moves to advanced quality concepts, such as design for manufacturing, and commits to continuous improvement of all its processes, its advanced quality system would be reviewed by government and industry representatives. One objective of this goal is to create a culture that can ensure effective design, production, and delivery processes without constraining suppliers with a set of inspection and oversight requirements.

The plan's second goal is to have government and industry share the most advanced quality concepts in defining requirements for, designing, and manufacturing military products.
It envisions a system that will encourage continuous improvement across the industry by identifying the most advanced methods of ensuring quality, making these concepts available to all contractors, and helping train personnel in these concepts. DOD and industry agreed that contractors who can provide evidence of using these advanced concepts to ensure quality should be given credit during source selection.

The plan's third goal is to establish and implement efficient oversight methods. It envisions the government and industry developing a single set of criteria to evaluate contractors' quality management systems. These criteria should be implemented in a unified evaluation process and should promote effective and efficient innovation. Most importantly, DOD believes the criteria and their implementation method should be accepted by all government customers to avoid inconsistency and duplication of quality evaluations. The Co-chair of the Government and Industry Liaison Panel stated that these evaluation criteria would be used by government audit teams, perhaps from DCMC, in registering contractors' basic and advanced quality systems rather than current methods of oversight. Appendix IV lists the 15 specific tasks that the panel has undertaken to achieve its goals, their current status, and their projected completion dates.

In December 1995, DOD began the Single Process Initiative, managed by DCMC, that allows contractors with military contracts to transition their quality system from MIL-Q 9858A to their best practice, such as a quality system based on ISO-9000, the basic commercial standard. The response to date has been slow; as of June 5, 1996, 38 contractors had submitted proposals to change their quality systems, 5 of which had been approved by DCMC. In discussions with government officials, we found that the biggest reason for this slow response is cultural.
The defense community's traditional quality assurance practices have been inspection-based; the newer, more advanced concepts advocate process control. In addition, some contractor officials believed there is an understandable fear among DOD employees that changes in the quality assurance strategy will translate to a loss of jobs. Also, defense contractors see little benefit in implementing more efficient quality systems if the resulting savings must be paid back to the government.

DOD's approval of five contractors' commercial quality systems means that they are accepted as the basic quality system on all government contracts. This is a positive step toward introducing more advanced quality concepts from the commercial world that force quality considerations during the design phase of an acquisition. However, in agreeing to this change, DCMC reserved the right to review all quality documentation at any time; perform any inspections, verifications, or evaluations it deems necessary; review any supplies from other facilities; and disapprove the quality system or any portion of it. The method or frequency with which DCMC invokes these rights will be as important as a change in the military standard. Also, it is important that DOD continue to move contractors beyond changing basic quality systems, toward advanced concepts.

According to DCMC officials, DCMC approved changes to commercial quality systems on 4,255 of 158,000 existing military contracts—less than 3 percent. It approved ISO-9000 on 398 new military contracts between May 1995 and April 1996. DCMC approves an average of 36,000 contracts per year, meaning that about 1 percent of new contracts in the past year have been approved with ISO-9000 as the quality standard.
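The percentages above follow directly from the counts DCMC reported; a quick check of the arithmetic (variable names are ours):

```python
# Arithmetic behind the percentages cited in the text.
existing_approved = 4255   # existing contracts with approved quality-system changes
existing_total = 158000    # existing military contracts
new_iso9000 = 398          # new contracts approved with ISO-9000 (May 1995-Apr 1996)
new_per_year = 36000       # average contracts DCMC approves per year

print(f"{existing_approved / existing_total:.1%}")  # 2.7% -- "less than 3 percent"
print(f"{new_iso9000 / new_per_year:.1%}")          # 1.1% -- "about 1 percent"
```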
The commercial world has proven that it can design and manufacture consistently high-quality products by focusing on building quality into the product's design, understanding the key characteristics of the product and the manufacturing processes necessary to build it, training production personnel to control those processes throughout production, and instituting quality programs with suppliers based on these same principles. We saw convincing evidence that these practices improve product quality and reduce time and labor spent on quality assurance oversight by making it unnecessary.

DOD recognizes the benefits of taking a broader approach to quality assurance. It has made some policy changes and is beginning to implement new practices. If DOD provides incentives for implementation of the advanced commercial practices, such as those identified in this report, we believe it can significantly improve quality, reduce costs of its acquisition programs, and apply savings to future modernization efforts.

However, we do not believe DOD's quality assurance culture will change easily. Our conclusion is based on discussions with DOD and service representatives and our review of past acquisitions where DOD and we repeatedly identified unstable design, poor process controls, and poor transition to production as causes for manufacturing quality problems and made recommendations that have not been implemented.
Therefore, we recommend that the Secretary of Defense (1) establish measurable steps to implement and monitor the progress of the Government and Industry Quality Liaison Panel plan closely; (2) periodically assess its success in implementing basic standards such as ISO-9000; (3) develop ways to encourage the adoption of advanced quality concepts of design for manufacturing, process controls, and supplier quality programs throughout the defense industry, using commercial practices as a guide; and (4) as suggested in the plan, establish incentives for defense contractors to participate, such as providing credit during source selection for successful implementation of these advanced quality practices. Such selection criteria could also provide world-class companies greater opportunities to participate in DOD's weapons acquisition programs.

To assist DOD in changing its own quality assurance culture, we recommend that the Secretary of Defense expeditiously determine who in DOD's acquisition community can best oversee the advanced quality functions used by defense contractors in developing and producing weapon systems, using commercial practices as a guide in assigning these functions, and provide all necessary training for any new responsibilities that DOD personnel need to perform.

DOD agreed with the intent of the report and stated that it has already implemented many changes in line with our recommendations. (See app. VI.) These include the institution of integrated product and process teams in revised acquisition policy directives and the development of metrics to measure improvement in the overall acquisition process. DOD stated that it believed that no additional actions were required in response to our recommendations at this time. We agree that these and other actions initiated by DOD represent positive steps in reforming the quality assurance process and are consistent with the intent of our recommendations.
However, time is needed to determine whether these steps translate into tangible changes in quality assurance practices on individual programs. As we note in the report, a number of major acquisition programs over time have failed to include quality considerations in the design phase. Although the prevailing standard for these programs, MIL-Q-9858A, allowed some latitude for interpreting how quality assurance could be carried out, it was the actual practice, not the guidance, that was more narrowly focused on inspections. These experiences underscore the challenge DOD faces in implementing advanced quality concepts.

To develop information for this report, we interviewed and obtained documents from officials of the Office of the Secretary of Defense in the Pentagon and DCMC at Fort Belvoir, Virginia, because of the quality assurance policymaking responsibilities and initiatives that are ongoing at that level. We also held several discussions about quality assurance at the service level.

We discussed commercial quality assurance practices with officials from the following commercial manufacturing organizations: Delco Electronics, Milwaukee, Wisconsin; John Deere Horicon Works, Horicon, Wisconsin; Cherry Electrical Products, Waukegan, Illinois; Texas Instruments Lubbock Metal-Oxide Semiconductor, Lubbock, Texas; Varian Nuclear Magnetic Resonance Instruments, Palo Alto, California; Varian Chromatography Systems, Walnut Creek, California; Varian Oncology Systems, Palo Alto, California; and Motorola Paging Products, Boynton Beach, Florida. A detailed description of the companies we visited is contained in appendix I. We then developed a data collection instrument that would assist us in gathering uniform, quantifiable measurements about the techniques these organizations used to improve operations and the results they accomplished. We visited these manufacturing organizations, followed the same agenda with each, and gathered the same data at each organization.
We also visited Texas Instruments Defense Systems & Electronics, Dallas, Texas, a defense contractor. In addition, we visited Delco in Milwaukee, Wisconsin, to discuss differences between military and commercial practices in a broader context. We reviewed literature and various databases provided by the American Productivity Quality Center's International Benchmarking Clearinghouse to identify manufacturing organizations that have shown significant improvement in quality while reducing oversight functions such as supplier oversight and end-item inspections. Our discussions centered around their overall quality plan and the techniques they used to ensure supplier quality, producibility of new products, and control of their final assembly processes while reducing nonvalue-added cost.

We performed our review from August 1995 to July 1996 in accordance with generally accepted government auditing standards.

We are sending copies of this report to congressional committees; the Secretaries of the Army, the Navy, and the Air Force; the Director, Defense Logistics Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please contact me at (202) 512-4383 if you or your staff have any questions concerning this report. The major contributors to this report were Michael J. Sullivan, Shari A. Kolnicki, Gordon W. Lusby, and Carolyn S. Blocker.

Delco Electronics, Milwaukee, Wisconsin, manufactures products for both commercial and military customers. Delco produces avionics circuit boards and assemblies for several aircraft and light armored vehicles. For its commercial automotive customers, it produces power train, fuel-injection, and remote function computers.

John Deere Horicon Works, Horicon, Wisconsin, has worldwide responsibility for engineering lawn and grounds care products.
Cherry Electrical Products, Waukegan, Illinois, designs and manufactures a broad range of electrical and electronic components, assemblies, and systems for the automotive industry. Applications include interior use, weather-exposed devices, and under-the-hood functions. Cherry Electrical received the General Motors' Targets for Excellence Award and two Q1 Preferred Quality Awards from Ford Motor Company's Diversified Operations. Cherry Electrical has applied for the Malcolm Baldrige award twice. The overall sales of the Cherry Corporation in 1994 were over $339 million.

Texas Instruments Lubbock Metal-Oxide Semiconductor, Lubbock, Texas, designs and manufactures integrated circuits that are packaged and assembled overseas. Electronic Programmable Read-Only Memory, a chip that can be used in computers, telephones, automobiles, and other computerized systems, is Texas Instruments' best selling item. In 1993, the overall sales for Texas Instruments Corporation were $8.5 billion.

Varian Chromatography Systems, Walnut Creek, California, supplies gas and liquid chromatography, gas chromatography, and mass spectrometry instruments as high-precision, high-end analytical tools. Its customers are pharmaceutical, chemical, environmental, and other research and developmental laboratories and quality laboratories worldwide. The company manufactures 11 different product lines, with a total volume ranging from 200 to 1,500 units sold per year. The corporation's total sales in 1995 were over $1.5 billion.

Varian Nuclear Magnetic Resonance Instruments (NMRI), Palo Alto, California, manufactures scientific and medical instruments that determine the composition of substances through atomic-level analysis using nuclear magnetic resonance. Using such instruments, scientists determine the atomic connectivity, atomic spatial orientation, and dynamic characteristics of molecules. The analysis is useful for chemical, biomolecular, material science, and biomedical fields.
NMRI's major markets consist of three customer bases: academia; private industry, including pharmaceutical, basic chemistry, biotechnology, and petrochemical industries; and the federal government. NMRI manufactures low-volume, high-precision products and sells approximately 250 units each year. Prices range from $1,000 to $3 million. It was included in Varian's corporate application for the Malcolm Baldrige National Quality Award in 1992, 1993, and 1994.

Varian Oncology Systems, Palo Alto, California, develops and manufactures linear accelerators as its principal product. Linear accelerators are high-technology instruments that produce X-rays, electrons, and other high-energy particles used to treat cancer. Approximately 30 percent of its remaining business is in computerized treatment planning and information systems. Principal customers include universities, hospitals, government institutions, and free-standing clinics. Varian Oncology's annual sales are approximately $350 million. Varian Oncology Systems applied for the Malcolm Baldrige Award in 1995. It has received other accolades, such as mention in the "Top Ten" plants in Industry Week in 1993, the California Governor's Golden State Award for Quality in the Marketplace in 1994, the Best Factory Award from Management Today in 1992, and the European Quality Award Commendation in 1993.

Motorola's paging plant in Boynton Beach, Florida, is part of Motorola's Messaging Information and Media Sector business unit. This business unit designs, manufactures, and distributes a variety of messaging products, including pagers and paging systems, wireless and wireline data communication products, handwriting recognition software, and infrastructure equipment, systems, and services. The plant in Boynton Beach handles the strategic and tactical dealings with suppliers. Motorola developed a successful six-sigma manufacturing process and was the first Malcolm Baldrige award winner in 1988.
Motorola's resulting successes have inspired many other U.S. corporations to use it as a quality benchmark. As a corporation, Motorola's net sales in 1995 were $27 billion.

Texas Instruments Lubbock Metal-Oxide Semiconductor - Lubbock, Texas

The following processes are used to ensure a product's profitability and producibility before it enters production:
— Profit and quality team is involved throughout the process.
— Manufacturing and design departments work together.
— Manufacturing staff determine critical parameters to achieve consistent quality, while engineers use statistical process control to discover whether a parameter remains in control if a change in the process or product occurred.
— Engineers consult with manufacturing staff during the design phase in an effort to prevent critical failures from occurring in production.

Process controls have helped Texas Instruments
— eliminate end-item inspection that no longer adds value and reduce inspectors from 16 to 12,
— reduce manufacturer's defects by over 68 percent, and
— increase productivity by over 120 percent.

Varian Nuclear Magnetic Resonance Instruments - Palo Alto, California

Varian's NMRI business unit's approach is as follows:
— Baldrige Award criteria and ISO-9000 standards are used as the foundation of its quality system.
— Cross-functional teams integrate design, manufacturing, and field service issues during product development.
— Statistical process control is used to develop statistically significant boundaries within which the product remains functional.
— Software has been instrumental in improving the productivity and accuracy of the testing process.
— NMRI has sought to eliminate inspections by relying on self-testing and continuous improvement.
Process control initiatives have allowed NMRI to
— eliminate its quality assurance department,
— empower on-line operators with in-process control,
— decrease the number of inspectors from 26 to 2, and
— increase productivity by 97 percent over 6 years.

Texas Instruments Defense Systems and Electronics - Dallas, Texas

Texas Instruments Defense Systems and Electronics won the Malcolm Baldrige Award in 1992 with the following approach:
— Design for manufacturing was used in defense programs to reduce part counts, increase opportunities for automation, and simplify assembly.
— Multifunctional teams formed during the concept phase of a product's life cycle remain together throughout the production program.
— Production process is controlled by statistical process control, which identifies the critical parameters.
— Continuous flow manufacturing is used to identify bottlenecks and nonvalue-added costs in production lines.

Since 1991, Defense Systems and Electronics has
— reduced defects by almost 70 percent,
— reduced scrap rates by 50 percent,
— reduced inspectors by 50 percent, and
— increased productivity by about 30 percent.

Varian Oncology Systems - Palo Alto, California

Varian's Oncology Systems business unit used the Malcolm Baldrige Award criteria as a foundation for its quality strategy.
— Five-phased product development process uses cross-functional teams.
— Program reviews check progress and readiness, and senior management reviews the project every 6 weeks.
— Capability of key manufacturing processes is measured before production begins.
— All production processes are mapped to determine nonvalue-added steps.

Using these techniques, Varian was able to
— reduce final test hours of each high-energy linear accelerator from 700 to 200;
— increase the maximum production of these items from 110 to 200;
— significantly reduce defects at the end of assembly; and
— reduce inspectors by 94 percent.
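Statistical process control, which several of these companies credit for their gains, flags a process as out of control when measurements fall outside limits set three standard deviations above and below the process mean; about 99.7 percent of output from a stable, normally distributed process falls inside those limits. A minimal sketch with made-up measurement data (not from any company in this report):

```python
import statistics

# Baseline measurements taken while the process is known to be in control
# (hypothetical part dimensions, in millimeters).
baseline = [10.02, 9.98, 10.01, 9.97, 10.03, 10.00, 9.99, 10.01, 9.98, 10.02]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

# New in-process measurements; anything outside the limits is flagged
# for investigation rather than caught later by end-item inspection.
new = [10.01, 9.99, 10.15, 10.00]
flagged = [x for x in new if not lcl <= x <= ucl]
print(flagged)  # [10.15]
```

The point of the technique is that the limits come from the process's own demonstrated variability, so operators can react to a drifting process immediately instead of relying on inspectors downstream.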
Cherry Electrical Products - Waukegan, Illinois

Cherry Electrical Products experienced decreased labor costs and increased quality in its stamping department using statistical process control. For example:
— Using statistical data, the company diagnosed a problem with the start-up of the machine, compensated for it, and reduced the number of inspectors monitoring the process from four to two. Furthermore, roving inspectors in the department and final inspection on the end item were eliminated, and scrap and waste were significantly reduced.
— Cherry reduced the number of operators from 24 to 1 on its spring generator machines while increasing quality. Diagnostic, analytical problem-solving using statistical process control showed the company's defects and scrap problems were not a result of machine variability as previously thought, but due to variability introduced by numerous operators. Because of statistical process control, labor was reduced, quality was increased, and waste was cut.
— Because of product consistency resulting from process controls, one department was able to reduce its headcount from 20 to 12.

Overall, by using statistical process control, Cherry has brought its quality processes under control and decreased inspectors by 78 percent, while reducing defects by 90 percent.

Varian Chromatography Systems - Walnut Creek, California

Varian Chromatography used "Andon," a Japanese quality process control, to improve its processes. Using Andon, each operator is empowered to signal a problem to the rest of the manufacturing team. Each station is equipped with red, yellow, and green lights, with a red light empowering the employee to stop production.
— Operators record any problems found on the production line on an opportunity board.
— Weekly meetings are held with operators to resolve problems in cross-functional teams.
— Varian cross-trains its employees to increase their skill base so that they are able to understand the entire process.
Varian reduced waste and increased savings through these process control techniques. For example, it
— reduced quality test time by 98 percent, from a 96-hour test to a 2-hour test;
— reduced defects by 34 percent;
— decreased the number of inspectors by over 75 percent; and
— improved productivity by 39 percent since 1992.

Varian Chromatography Systems - Walnut Creek, California

Varian Chromatography Systems' supplier quality practices are to
— concentrate its business with a few, best-qualified suppliers;
— conduct just-in-time manufacturing with them, as part of its Value Managed Relationships program;
— perform zero-receiving inspection and not stock the suppliers' parts; instead, components from suppliers are coordinated daily, and the supplier's pretested, unboxed parts are delivered directly from the truck to Varian's manufacturing assembly line;
— provide suppliers a total year goal and give a rolling 12-month forecast instead of the company preordering its material in a traditional "push" system;
— give copies of its master production schedule to suppliers; and
— implement a certification process whereby all new components must pass first article inspection and three defect-free lot inspections prior to approval. In addition, approved suppliers are subject to annual review and quarterly feedback reports.

Through its supplier quality program,
— inventory has been reduced by 68 percent;
— supplier defects have decreased 75 percent;
— suppliers were reduced by 78 percent, down from over 2,000 to approximately 440; and
— suppliers are kept for "life" as long as they provide the technological capabilities, and some have been supplying the company for as long as 30 years, with many over 10 years.
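A certification gate like the one described for Varian Chromatography (a passing first article inspection plus three defect-free lot inspections) can be sketched as follows; the function name and data are hypothetical, and we assume the three clean lots must be consecutive:

```python
# Hypothetical sketch of a supplier-certification gate: a new component is
# approved only after a passing first article inspection and three
# consecutive defect-free lot inspections.
def certify(first_article_ok: bool, lot_defects: list) -> bool:
    """lot_defects holds defect counts for consecutive incoming lots."""
    if not first_article_ok:
        return False
    clean_streak = 0
    for defects in lot_defects:
        clean_streak = clean_streak + 1 if defects == 0 else 0
        if clean_streak == 3:
            return True
    return False

print(certify(True, [0, 2, 0, 0, 0]))  # True: three clean lots in a row
print(certify(True, [0, 0, 1, 0, 0]))  # False: the streak was broken
```

Once a part clears such a gate, receiving inspection can be dropped and quality is monitored through the supplier's own process data and periodic reviews, which is the pattern the companies above describe.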
Varian Oncology Systems - Palo Alto, California

Varian Oncology Systems' supplier quality practices are to
— deliver 100-percent quality parts directly to the factory floor;
— eliminate receiving inspections for certified suppliers and conduct periodic on-site reviews;
— assist any certified supplier experiencing quality or delivery problems with supplier corrective action teams;
— provide annual performance reports, periodic report cards, and performance awards as feedback to its suppliers;
— require certified suppliers to maintain competitive prices, high quality levels, and excellent delivery performance to retain certification; and
— provide certified suppliers with long-term contracts, access to Varian training in process controls, and improved schedule visibility.

Oncology Systems conducts business with 45 certified suppliers and estimates that these relationships have saved $3.3 million over the past 3 years due to eliminated inspections, reduced planning and purchasing requirements, and reduced defects. It regularly invites suppliers to join continuous process improvement teams that are focused on process control techniques. Since 1990, supplier defects have been reduced by 73 percent, and receiving inspections have been significantly reduced.

Varian Nuclear Magnetic Resonance Instruments - Palo Alto, California

Varian Nuclear Magnetic Resonance Instruments has increased both savings and supplier quality by
— switching from an inspection-oriented environment to one of process control;
— training its workers in statistical process control factorywide and using it to internally track the performance of its suppliers;
— reducing its supplier base by over 60 percent by weeding out those with poor performance records and those it seldom used; and
— conducting certification programs where suppliers are required to pass five to six lot inspections.
Varian-approved suppliers' parts completely bypass receiving inspection, but if they do not meet quality standards as a product is assembled on the production line, they are immediately rejected. Due to the supplier program, supplier defect rates have decreased by 96 percent, and inspections have decreased by 76 percent. In fact, the company is so confident in the quality of one critical component, which comprises 80 percent of the total cost of supplier material, that Varian does not perform final assembly and test until after the product is delivered to the customer. Thus, Varian and its customer view the results of final assembly and test simultaneously.

Motorola Paging Products Group - Boynton Beach, Florida

Motorola's Paging Products Group supplier quality practices are to
— conduct certification programs requiring that each supplier undergo an on-site evaluation and submit sample parts for approval,
— conduct a quality system review at the end of the first year to determine the supplier's status,
— eliminate receiving inspections after suppliers have passed inspection defect-free on three lots of material,
— require preferred suppliers to apply for the Baldrige Award,
— emphasize long-term relationships with suppliers,
— train all of its suppliers in total quality techniques,
— share its schedule forecasts with suppliers, and
— conduct early supplier involvement programs on new products.

As a result of its supplier quality assurance program, Motorola has
— eliminated all inspection of incoming material from all but new suppliers;
— reduced suppliers by 85 percent, from 800 to 118; and
— decreased supplier defects by over 90 percent over a 7-year period.
Texas Instruments Lubbock Metal-Oxide Semiconductor - Lubbock, Texas

Texas Instruments Metal-Oxide Semiconductor ensures the quality of supplier products by
— relying upon third-party certification to periodically review its suppliers;
— using a Sematech database, allowing TI to choose from among the best suppliers (Sematech is a semiconductor consortium of 10 large companies that periodically conducts quality system audits of suppliers and retains past performance information so that its members can increase their effectiveness in selecting and retaining the few best ones); and
— depending upon the Texas Instruments purchasing center located in Dallas, which screens and approves suppliers and parts as it does for all wafer fabrication facilities.

Because of this supplier process control program, Texas Instruments has experienced virtually zero defects from its suppliers. It is also able to bypass inspecting parts from high-quality suppliers whose processes are in control. These parts are used directly by Texas Instruments-Lubbock in its products.

Cherry Electrical Products - Waukegan, Illinois

Cherry Electrical uses the following supplier quality practices:
— Most of Cherry's 400 suppliers are "approved."
— The approved suppliers follow Cherry's Supplier Quality Assurance Manual.
— New suppliers and/or new items are audited, via a layout inspection, and statistical process control analysis is performed.
— Cherry matches its findings against those of the suppliers', and if a match results, the supplier's item becomes "approved."
— The relationship with approved suppliers is based on trust and self-monitoring by the suppliers, with statistical process control data available from suppliers upon request.

As a result of its supplier quality practices, Cherry has
— reduced the number of suppliers by 50 percent, from 800 to 400 suppliers;
— reduced the number of defects from suppliers by 90 percent; and
— reexamined and eliminated unnecessary specifications.
John Deere Horicon Works - Horicon, Wisconsin

John Deere's Horicon Works supplier practices are the following:
— Suppliers of critical parts must have established +/-3 sigma quality. (A sigma unit is a measure of scale that can be used for quality data. Three sigma quality (+/-3) is equal to three standard deviations away from an average, and the area covered by three sigma is 99 to 100 percent of the data. Under three-sigma quality, a supplier product's values will lie within a known, predictable range 99 to 100 percent of the time.)
— Deere measures the parts for conformance and then requires process variability reports for the key characteristics of the part from the supplier on a periodic basis.
— Deere looks for strategic partners for its critical parts who will share management philosophies, be industry leaders, and require very little communication or reviews.

As a result, Deere has reduced its supplier base 81 percent, from over 800 to 151, and has qualified 40 of them. It has been able to eliminate receiving inspections as a result, a savings of 25 positions.

Government and Industry Quality Liaison Panel tasks (task; target date; status):
— To be signed. Signed on April 24, 1995.
— Develop pilot program characteristics and candidates. To be implemented November 1995. No status update.
— Identify changes required to FAR parts 46 and 52 as well as DOD FAR Supplement (DFARS) part 246. DFARS changes - December 1995; FAR changes - July 1995. DFARS case 246 approved; inputs needed to FAR part 46.
— Identify key elements of a quality plan; write data item description incorporating needed elements; develop revisions needed to ANSI/ASQC Q9000; obtain data item description approval. Data item description 81449 approved January 1995.
— Develop government/industry training template for quality system. Plan of action for training - May 1995. Completed draft - May 1996.
— Develop criteria for government and industry to evaluate contractor's basic quality management system. Complete criteria August 1995. Completed draft - May 1996.
— Develop evaluator guidelines; determine if common evaluation criteria exist among the audit community. Complete guidelines March 1995. Completed draft - May 1996.
— Determine the most efficient way of performing the oversight function to meet the criteria for evaluating quality management systems. Develop guidelines for oversight - September 1995. Completed draft - May 1996.
— Develop a handbook that contains criteria or guidelines that can be used to encourage the use of advanced quality concepts. Distribute handbook for concurrence - September 1995. Completed draft - May 1996.
— Recognize, improve, and promote the use of value-adding advanced quality practices. Final report - September 1995. Completed draft - May 1996.
— Establish an advanced quality practices clearinghouse to help promote quality management systems in government and industry. Implement clearinghouse and publicize - October 1995. Final report prepared February 1996.
— Draft governmentwide agreement to mutually recognize a contractor's quality system based on defined baseline requirements. Memorandum ready October 1995. Final draft contingent upon completion of all other tasks.
— Establish integrated government audit system whereby audit results are shared by all participating government agencies. Not specified. Ongoing.
— Develop four courses based on the approved ISO 9000 training template; obtain review and approval of course content and context. Courses to start March 1996; review to start December 1995. Ongoing.

process were not critical to the product's quality. The elimination of these six measurements saved a total of 36 labor hours for each product manufactured. Because of this application of statistical process control, rework decreased from 3 to 5 percent down to 0.5 percent, and the plant experienced a 50-percent decrease in the product's variability and a 50-percent decrease in cycle time.
Because of this decreased cycle time, throughput increased, the plant became more efficient, and Texas Instruments avoided purchasing an extra scanning electron microscope costing $800,000 as well as the cost of a technician who would have been needed to operate it.

Weapons Acquisition: Low-Rate Initial Production Used to Buy Weapon Systems Prematurely (GAO/NSIAD-95-18, Nov. 21, 1994).
Military Airlift: The C-17 Program Update and Proposed Settlement (GAO/T-NSIAD-94-166, Apr. 19, 1994).
Weapons Acquisition: A Rare Opportunity for Lasting Change (GAO/NSIAD-93-15, Dec. 1992).
Tactical Missile Acquisitions: Understated Technical Risks Leading to Cost and Schedule Overruns (GAO/NSIAD-91-280, Sept. 17, 1991).
Strategic Bombers: B-2 Program Status and Current Issues (GAO/NSIAD-90-120, Feb. 22, 1990).
Why Some Weapon Systems Encounter Production Problems While Others Do Not: Six Case Studies (GAO/NSIAD-85-34, May 24, 1985).
Effectiveness of U.S. Forces Can Be Increased Through Improved Weapon System Design (GAO/PSAD-81-17, Jan. 29, 1981).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006.

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone.
A recorded menu will provide information on how to obtain these lists.
GAO reviewed the Department of Defense's (DOD) quality assurance practices, focusing on: (1) the problems that DOD has had in improving such practices; (2) private-sector practices that could be beneficial to DOD; and (3) DOD efforts to improve its quality assurance activities. GAO found that: (1) DOD spends more than $1.5 billion annually to support its quality assurance activities; (2) manufacturing and quality problems have contributed to cost and schedule overruns on DOD weapons systems programs; (3) DOD acquisition programs have experienced quality problems during the production phase due to incomplete weapon designs; (4) because of unstable designs, DOD is relying on inspections and rework to correct defects in its B-2 bomber and C-17 Airlifter programs; (5) a number of successful commercial manufacturers are improving the quality of their products while reducing their oversight and inspection costs; (6) commercial manufacturers have broadened their definition of quality assurance, changing it from a postmanufacturing step performed at the end of each process to an integral part of the process itself; (7) DOD began in the early 1990s to implement a method of moving away from inspection-oriented quality assurance practices and toward process control; (8) DOD is developing a plan to reduce the costs of redundant quality assurance processes; (9) the plan will allow DOD contractors to use a single quality management system based on process controls for all DOD contracts; and (10) DOD could enhance its quality management system by encouraging defense contractors to use more advanced commercial techniques and quality assurance practices.
The United States is the world’s largest trading nation, with over $1 trillion in trade in 1993. Nearly 50 percent of this trade, by value, was transported by sea. Throughout much of this century, however, the U.S. merchant marine industry has struggled to compete effectively in the international market. The U.S. ocean-going fleet is the ninth largest fleet in the world by deadweight tonnage, comprising about 3 percent of the world fleet’s tonnage. The U.S. fleet, as of September 1993, comprised 371 privately owned vessels. U.S.-flag vessels are not competitive in international trade—cargo carried between U.S. and foreign ports or between foreign ports—because they generally have higher operating and capital costs than foreign-flag vessels. (Foreign-flag vessels are restricted from carrying cargo between domestic ports.) According to Maritime Administration (MARAD) officials, crew costs account for the largest portion of the difference between the operating costs of U.S.- and foreign-flag vessels. U.S. crews receive higher wages and other benefits, and U.S.-flag vessels have higher manning level requirements than comparable foreign-flag vessels. Also, because U.S. shipyards generally charge more to build and maintain vessels than foreign shipyards, U.S.-flag vessels have higher capital and maintenance costs. To help the U.S. merchant marine industry compete, the Congress has enacted a number of laws supporting the industry, including cargo preference laws, which require that most government-owned or -financed cargo that is shipped internationally be carried aboard U.S.-flag vessels. This cargo is known as preference cargo. Cargo preference laws guarantee a minimum amount of business for the U.S. merchant fleet; this additional business, in turn, promotes the remainder of the maritime industry because U.S.-flag vessels are required by law to be crewed by U.S. mariners, are generally required to be built in U.S. 
shipyards, and are encouraged to be maintained and repaired in U.S. shipyards. However, because U.S.-flag vessels often charge higher rates to transport cargo than foreign-flag vessels, cargo preference laws increase the government’s transportation costs.

Cargo preference laws have long been controversial from both an economic and a political point of view. The proponents of cargo preference laws point to this nation’s economic dependence on waterborne transportation for international trade and the role that merchant vessels play in transporting military supplies during wartime. They maintain that a strong merchant marine industry is vital to the nation’s economic and military security and that cargo preference laws help to counter the subsidies that many foreign countries provide to their merchant fleets. The opponents of cargo preference laws, on the other hand, argue that cargo preference laws cost the government money, have not been successful in maintaining a strong merchant marine industry, and do not always support the most militarily useful vessels. They also point out that the additional transportation costs hamper federal efforts to provide humanitarian aid overseas because the available funds are diverted to the transportation of that aid, instead of being used to purchase farm commodities and other types of aid.

Recently, MARAD reported that U.S. citizens, corporations, or the federal government own about 893 ocean-going vessels of 1,000 gross tons or more. (See fig. 1.1.) Of the 893 vessels, 586 were U.S.-flagged, and the remaining 307 were owned by U.S. citizens or corporations but were foreign-flagged. Of the 586 U.S.-flag vessels, MARAD reported that 371 were privately owned and 215 were owned by the federal government. Most of the privately owned vessels are actively engaged in commerce, while most of the federally owned vessels are in long-term storage—held in MARAD’s custody in case they are needed during a national emergency.
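A quick arithmetic sketch, using only the MARAD figures cited above, confirms that the reported fleet breakdown is internally consistent:

```python
# MARAD-reported composition of the U.S.-owned ocean-going fleet
# (vessels of 1,000 gross tons or more), as cited in this chapter.
total_us_owned = 893      # all U.S.-owned vessels
us_flagged = 586          # registered under the U.S. flag
foreign_flagged = 307     # U.S.-owned but foreign-flagged

privately_owned = 371     # of the 586 U.S.-flag vessels
federally_owned = 215     # mostly held in MARAD long-term storage

print(us_flagged + foreign_flagged)       # 893
print(privately_owned + federally_owned)  # 586
```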
Foreign-flagged vessels owned by U.S. citizens or corporations, like all foreign-flagged vessels, are subject to the laws of the foreign country whose flag they fly, not the laws to which U.S.-flag vessels are subject. Sometimes, the laws of foreign countries include significant obstacles to the requisition of the vessels by the United States during national emergencies. Of the 307 U.S.-owned but foreign-flagged vessels, 219 are flagged in countries that do not have policies preventing the U.S. government from requisitioning these vessels during national emergencies. The countries are the Bahamas, Honduras, Liberia, the Marshall Islands, and Panama. U.S.-owned vessels registered in these countries are said to be under “Effective U.S. Control” (EUSC). But according to MARAD officials, there is no guarantee, should requisitioning be necessary, that these nations will actually permit their vessels to be taken by the United States. And even if the United States is able to take foreign-flag EUSC vessels, the foreign crews cannot be compelled to operate the vessels, and the operators are not obligated to return the vessels to the United States. In addition, EUSC vessels are not subject to any of the other laws or regulations that apply specifically to U.S.-flag vessels.

As of January 1993, the privately owned U.S.-flag vessels constituted the ninth largest fleet in the world by deadweight tons (DWT). These vessels have a carrying capacity of 18.8 million DWTs, which comprises about 3 percent of the world fleet’s tonnage. Of the 371 privately owned U.S.-flag vessels, MARAD reported that 23 were inactive, 49 were chartered by the Department of Defense (DOD), 134 were engaged in domestic trade, and 165 were engaged in international trade. Since all preference cargo is international, cargo preference laws have the most direct effect on the portion of the U.S. fleet engaged in international trade. (See app.
I for a list of vessel operators who carried preference cargo in 1993.) U.S.-flag vessels generally charge more to carry cargo than foreign-flag vessels because they have higher operating and capital costs. MARAD officials identified several general reasons for this: Most of the higher costs are crew costs. U.S. crews receive higher wages and other benefits, and U.S.-flag vessels have higher manning-level requirements than comparable foreign-flag vessels. Approximately half of the U.S. fleet is old and/or steam-powered. Of the 165 vessels engaged in international trade, about 50 percent are within 5 years of the end of their statutory life expectancy, which depending on the type of vessel is 20 or 25 years. In addition, steam-powered vessels are less efficient and use more fuel than the newer diesel-powered vessels that comprise virtually all of the foreign-flag vessels engaged in international trade with the United States. U.S. shipyards generally charge more to build and maintain vessels than foreign shipyards. As a result, U.S.-flag vessels generally have higher capital and maintenance costs. Although not all U.S.-flag ships were built in U.S. shipyards, the Tariff Act of 1930 (P.L. 361) imposes a 50-percent tariff on the cost of maintenance and nonemergency repairs performed on U.S.-flag vessels in foreign shipyards. Since the passage of the first cargo preference law—the Cargo Preference Act of 1904 (P.L. 198)—the Congress has, in response to general downturns in the maritime industry, repeatedly reaffirmed its intent to support the U.S. merchant marine industry. Following the 1904 act, several major cargo preference laws were passed that guarantee cargo to U.S.-flag vessels; this guarantee was intended to promote the merchant marine industry. The 1904 act generally requires that only U.S.-flag vessels be used to transport supplies for the U.S. armed forces by sea. 
However, if the President finds that the rate charged by those vessels is excessive or otherwise unreasonable, contracts for transportation may be made as otherwise provided by law. In 1934, the Congress passed Public Resolution 17, which requires that all cargo financed by the Export-Import Bank be shipped on U.S.-flag vessels, unless granted a waiver. In 1936, the Congress passed the Merchant Marine Act of 1936 (P.L. 835), which required that a “substantial portion” of internationally shipped cargo be transported on U.S.-flag vessels. In 1954, the Congress passed the Cargo Preference Act of 1954, which amended the Merchant Marine Act of 1936 to require that at least 50 percent of any government-controlled cargo shipped by sea be carried on privately owned U.S.-flag vessels. However, the 50-percent provision can be waived if U.S.-flag vessels are not available at “fair and reasonable” rates and in certain emergency situations. And finally, the Congress passed the Food Security Act of 1985 (P.L. 99-198), which increased from 50 to 75 percent the percentage of food aid cargo that the U.S. Department of Agriculture (USDA) and the Agency for International Development (AID) must ship on U.S.-flag vessels (however, the act exempted other USDA cargo). Besides cargo preference laws, a number of other programs were designed to promote the U.S. merchant marine industry. To help offset some of the higher operating and capital costs faced by U.S.-flag carriers engaged in international trade, the Merchant Marine Act of 1936 authorizes MARAD to pay operating-differential subsidies (ODS) and construction-differential subsidies (CDS) to operators of vessels in international trade. Additionally, the Jones Act restricts foreign-built U.S.-flag vessels from engaging in domestic trade. ODS payments support the portion of the U.S. fleet engaged in international trade by offsetting the higher costs to operate U.S.-flag vessels. 
ODS recipients normally enter into 20-year contracts with MARAD, during which time they may not engage in domestic trade or reflag the vessel to another country, and their subsidy will be reduced if they carry cargo between U.S. ports as part of a voyage involving foreign ports. In fiscal year 1993, the federal government provided 75 vessels with a total of $215.5 million in ODS payments. No new ODS contracts have been awarded since 1981.

CDS are payments based on the difference in cost to construct vessels in U.S. shipyards and foreign shipyards. Vessels built with CDS payments may not reflag for 25 years (20 years for tankers), may not enter into domestic trade (voyages with stops exclusively at U.S. ports), and must pay back a portion of the CDS if they carry cargo between U.S. ports as part of a voyage involving foreign ports. Although the program has not been eliminated, the last vessel built under this program was contracted for in 1981 and delivered in 1984. Currently, 79 vessels are under CDS restrictions.

All vessels in international trade provide either charter or liner services. Charter-service vessels do not have regularly scheduled sailings, fixed routes, or fixed freight rates. They typically carry a shipload’s worth of cargo for only one or a few customers at the same time. Conversely, liner-service vessels have regularly scheduled sailings on fixed routes at fixed freight rates. They typically carry small amounts of cargo for many customers at one time and will sail even if not completely full. Vessels providing charter service cannot receive ODS payments while carrying preference cargo; vessels providing liner service can. Freight rates on liner-service vessels typically are higher than those on charter-service vessels. In addition, most liner-service vessels, whether U.S.-flagged or foreign-flagged, belong to shipping conferences.
Members of shipping conferences agree to charge similar prices for similar services in order to minimize price competition. However, U.S. law contains a number of provisions that mitigate this effect. In our 1994 report, Cargo Preference Requirements: Objectives Not Significantly Advanced When Used in U.S. Food Aid Programs (GAO/GGD-94-215, Sept. 29, 1994), we reported that the application of cargo preference to food aid programs does not significantly contribute to maintaining a naval auxiliary in time of war or national emergency or to the carriage of domestic and foreign commerce. We also reported that cargo preference laws adversely affect the operation of U.S. food aid programs. In our 1990 report, Cargo Preference Requirements: Their Impact on U.S. Food Aid Programs and the U.S. Merchant Marine (GAO/NSIAD-90-174, June 19, 1990), we found that the differential between the food aid shipping costs of U.S.- and foreign-flag vessels decreased by 50 percent per ton between 1981 and 1989. We also found that during this same time period, despite an increase in the amount of government-owned or -financed cargo shipped on U.S.-flag vessels, the number of U.S.-flag vessels decreased. Additionally, in 1984 we issued Economic Effects of Cargo Preference Laws (GAO/OCE-84-3, Jan. 31, 1984). In that report, we estimated that in 1980, between 21 and 33 additional ships and from 1,400 to 2,200 shipboard workers were employed because of cargo preference laws and that those laws cost the federal government between $71 million and $79 million (between $123.1 million and $136.9 million, respectively, in constant 1993 dollars). However, that report did not include DOD in its analysis because DOD’s policy was (and is) to ship on U.S.-flag vessels even if cargo preference laws were eliminated. On April 29, 1993, Senators Hank Brown, John C. Danforth, Charles E. 
Grassley, Don Nickles, and Malcolm Wallop asked us to provide information on the cargo preference programs and related information on the U.S. merchant marine industry. On the basis of subsequent discussions with their staff, we agreed to provide information on the cost to the federal government of cargo preference laws and their effects on the U.S. merchant marine industry along with certain additional information. This report does not make conclusions regarding the desirability of cargo preference laws or recommendations for changes that could be made to those laws. Additional details on our scope and methodology are contained in appendix VIII. We performed our review from June 1993 through September 1994 in accordance with generally accepted government auditing standards. Because the cost to transport cargo on U.S.-flag vessels is generally higher than it is on foreign-flag vessels, cargo preference laws add directly to a federal agency’s transportation costs. Although cargo preference laws apply to most federal agencies, four agencies—DOD, USDA, AID, and the Department of Energy (DOE)—were responsible for more than 99 percent of the 100 million tons of government cargo shipped internationally during calendar years 1988 through 1992. The estimated additional costs for transporting preference cargo for these agencies, including DOD’s costs associated with the Persian Gulf War, totaled, on average, about $710 million per year in fiscal years 1989 through 1993. (The average is about $578 million when the costs associated with the Persian Gulf War are excluded.) The $710 million estimate is about 50 percent of the $1.4 billion spent annually by the federal agencies to ship preference cargo on U.S.-flag vessels. DOD maintains that its policy is to ship a substantial portion of its cargo on U.S.-flag vessels and that it would continue this policy in the absence of cargo preference laws. 
However, because DOD ships about 50 percent of the cargo subject to the preference laws, we have included estimates of its additional transportation costs in order to give a more complete picture of the cost to the federal government of reserving cargo for U.S.-flag vessels, even though DOD’s portion might continue without cargo preference laws. DOD’s cost estimate is based on an approximation of the total cost to ship cargo on U.S.-flag vessels and on judgmentally selected data on the cost to ship cargo on foreign-flag vessels. Because foreign-flag carriers do not consistently bid for DOD cargo, the Department cannot ascertain what rates foreign-flag vessels would have actually charged to carry its cargo. As a result, DOD’s cost estimate is based on DOD officials’ expertise and judgment—DOD does not keep complete records that show how it derived its estimates. We did not independently verify these figures.

Table 2.1 shows each agency’s estimated cost of reserving preference cargo for U.S.-flag vessels in fiscal years 1989 through 1993. MARAD is included because it must, by law, pay a portion of USDA’s food aid transportation costs.

DOD ships more preference cargo than any other federal agency—approximately 50 percent of the total in 1988 through 1992. Almost all of the cargo that DOD ships is categorized as “troop support.” Troop support includes spare parts, foodstuffs, ammunition, commissary items, and privately owned vehicles. In 1988 through 1992, DOD shipped about 51 million metric tons of cargo. Of this amount, 45 million tons (88 percent) was shipped on U.S.-flag vessels. DOD estimates that its additional transportation costs to ship preference cargo on U.S.-flag vessels in fiscal years 1989 through 1993 were $2.4 billion, or an average of $482 million per year for the last 5 years. The average is about $350 million per year when the costs associated with the Persian Gulf War are excluded.
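As a rough cross-check of the DOD figures (all dollar amounts rounded as reported; the $659 million Persian Gulf War share is DOD's own estimate, cited later in this chapter), the annual averages follow directly from the 5-year total:

```python
# DOD's estimated additional cost of shipping preference cargo on
# U.S.-flag vessels, fiscal years 1989-1993 (rounded report figures).
dod_5yr_total = 2.4e9       # reported 5-year total
gulf_war_share = 659e6      # portion DOD attributes to the Persian Gulf War

avg_per_year = dod_5yr_total / 5                      # ~$480M; report cites $482M (rounding)
avg_excl_gulf = (dod_5yr_total - gulf_war_share) / 5  # ~$348M; report cites "about $350M"
```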
USDA and AID are responsible for food assistance programs under which U.S. agricultural commodities are donated or sold abroad for humanitarian and developmental purposes. The food assistance is provided primarily through five programs: titles I, II, and III of the Agricultural Trade Development and Assistance Act of 1954 (P.L. 480, commonly called, collectively, the P.L. 480 program); section 416 of the Agricultural Act of 1949 (P.L. 439); and the Food for Progress Act of 1985 (P.L. 99-198). Although AID administers some of the food aid programs, the transportation costs of these programs that are borne by the federal government, and hence the additional costs to ship on U.S.-flag vessels, are paid for through USDA and MARAD appropriations. The title I program provides financing to developing countries to purchase U.S. agricultural commodities. It is administered by USDA. The title II program donates packaged, processed, and bulk commodities to the least-developed countries. Commodities are used directly to feed refugees and children as well as for other authorized purposes. It is administered by AID. The title III program (known as the Food for Development Program) provides donations to governments to support long-term growth in agriculture and related activities in the least-developed countries. It is administered by AID. The section 416 program donates bulk grain and other surplus agricultural commodities to the least-developed countries. It is administered by USDA. The Food for Progress program provides agricultural commodities to developing countries that have made commitments to expand free enterprise in their agricultural economies. It is administered by USDA. In 1988 through 1992, USDA and AID shipped 36 million metric tons of food aid. Of the total amount, 27.5 million tons (approximately 77 percent) was shipped on U.S.-flag vessels. 
These agencies, as well as MARAD, which must pay a portion of the transportation costs, estimate that the additional transportation costs to ship preference cargo on U.S.-flag vessels in fiscal years 1989 through 1993 were about $1 billion, or an average of $200 million per year for the last 5 years.

Besides food aid, AID is also responsible for providing aid such as generators, automobiles, corrugated metal, and lumber to developing countries. In 1988 through 1992, this cargo totaled about 5 million metric tons. Of this amount, 2.6 million metric tons (about 52 percent) was shipped on U.S.-flag vessels. On the basis of the cost to ship its cargo on U.S.-flag and foreign-flag vessels, AID estimates that its additional transportation costs to ship preference cargo on U.S.-flag vessels for calendar years 1989 through 1993 were $116 million, or an average of $23 million per year for the last 5 years.

The Strategic Petroleum Reserve is a program administered by DOE to store 750 million barrels of crude oil in salt domes along the U.S. Gulf Coast to guard against disruptions in international oil supplies. In 1988 through 1992, DOE reported that it shipped approximately 7.6 million metric tons of oil. Of this amount, 3.7 million tons (49 percent) was shipped on U.S.-flag vessels. On the basis of data that DOE provided us on the amount and cost of oil that it shipped on U.S.- and foreign-flag vessels, we estimate that the Department’s additional transportation costs to ship preference cargo on U.S.-flag vessels for fiscal years 1989 through 1993 were approximately $9 million, or an average of less than $2 million per year for the last 5 years.

Cargo preference laws add directly to a federal agency’s transportation costs. In fiscal years 1989 through 1993, the five agencies responsible for the transportation costs of most of the government’s international cargo paid an estimated additional $3.5 billion in transportation costs to ship cargo on U.S.-flag vessels.
However, DOD estimates that $659 million of this cost was related to the Persian Gulf War. The $3.5 billion estimate represents about 51 percent of the $6.9 billion spent to ship preference cargo on U.S.-flag vessels. By guaranteeing business for U.S.-flag vessels, which (1) are required to be crewed by U.S. mariners, (2) are generally required to be built in U.S. shipyards, and (3) are encouraged to be maintained in U.S. shipyards, cargo preference laws promote the U.S. maritime industry. However, their effect on the U.S. merchant marine industry is mixed. Although cargo preference laws have not had the effect of maintaining the share of international oceanborne cargo carried by U.S.-flag vessels, the U.S. fleet is dependent on preference cargo for a significant portion of the international cargo that it carries. Historically, cargo preference laws have not prevented a decline in the share of oceanborne cargo carried by U.S.-flag vessels. Throughout most of this century, with the exception of the periods immediately following World Wars I and II, the U.S. fleet has comprised a small percentage of the world fleet and carried a small percentage of the United States’ international cargo. Additionally, the amount of cargo reserved for U.S.-flag vessels has averaged only 5 percent of international cargo since 1961. As shown in figure III.1a (see app. III), since 1906, the U.S. fleet has experienced significant growth only during the World Wars. In both instances, this growth was followed by extended periods of decline. The size of the U.S. fleet increased from about 6 percent of the world fleet’s size, by gross tonnage, to 23 percent during and immediately following World War I but steadily declined to about 13 percent just prior to World War II. The relative size of the U.S. 
fleet increased again during World War II—to about 38 percent of the world fleet’s size in 1948, shortly after the war’s end—but declined steadily thereafter to about 3.9 percent of the 397 million gross tons in the world fleet in 1992. The relative decline in the U.S. fleet since 1948 can be attributed in large part to the 460-percent increase in the size of the world fleet, even though the size of the U.S. fleet decreased about 42 percent during this time. The decline in the relative size of the U.S. fleet also corresponds to the decline in the percentage of international trade carried on U.S. ships. As figure 3.1a shows, the percentage of international trade carried on U.S.-flag ships was substantial following World Wars I and II—49 percent and 58 percent, respectively—but declined immediately thereafter. In 1992, U.S.-flag vessels carried approximately 4 percent of the nation’s oceanborne international trade. Additionally, figure 3.1b shows that since World War II, there has been a dramatic increase in the amount of international oceanborne cargo. Most of the increase has been in privately owned cargo, which is not subject to cargo preference laws and is often shipped on less expensive foreign-flag vessels. The amount of cargo reserved for U.S.-flag vessels is a very small portion of total international cargo and therefore has not contributed substantially to the total share of cargo carried by U.S.-flag vessels. As a percentage of international cargo, preference cargo carried on U.S.-flag vessels ranged from 11 percent in 1962-63 to less than 2 percent in 1992 and averaged 5 percent during this time period. While cargo preference laws do not appear to have significantly affected the share of international oceanborne freight carried on U.S.-flag vessels, we estimate that in the absence of preference cargo, a significant portion of the U.S. fleet would reflag or cease operating. 
This would significantly affect the number of shipboard jobs on U.S.-flag vessels engaged in international trade. However, the impact on shipyards would be minimal. The 165 vessels active in international trade on September 30, 1993, have an aggregate carrying capacity of 7.3 million DWTs. We estimate that in the absence of preference cargo, vessels with a carrying capacity of between 4.4 million and 5 million DWTs might leave the active U.S. fleet. Table 3.1 summarizes our findings. Some of the vessels leaving the U.S. fleet will likely be vessels that have traditionally operated in the domestic trade but are displaced by vessels from the international trade. Vessels that leave the U.S. fleet will most likely either reflag to achieve cost savings or cease operating (either being scrapped or laid up) if they are not competitive. Many of the vessels that reflag may continue to be owned by a U.S. parent company and may reflag to one of the five countries that allow vessels owned by U.S. citizens to be under Effective U.S. Control.

Our analysis of the reduction of tonnage in the U.S. fleet that would occur if cargo preference laws and policy were eliminated is based on the ability of the vessels to compete in the international trade and, if eligible, to compete in domestic trade. We included in our analysis an examination of other factors, such as international political considerations and the amount of preference cargo that vessels have carried. Additionally, we made the assumption that ODS payments alone, in the absence of preference cargo, are generally not sufficient to induce a carrier to remain U.S.-flagged. We conducted our analysis in consultation with MARAD officials and confirmed our estimate about which vessels would leave the U.S. fleet with information obtained from 18 vessel operators that controlled 112 of the 165 vessels engaged in international trade.
Because of the complexity of the issues, we did not include in our analysis several considerations that might have caused us to overestimate or underestimate the number of U.S.-flag vessels that would leave the fleet. The considerations that might have caused us to overestimate the effect on the U.S. fleet include the following factors: (1) U.S.-flag vessels need the permission of MARAD to change the nationality of their registry and (2) some vessel owners might keep their vessels under the U.S. flag for nationalistic or personal reasons. Additionally, some vessels, although not economically viable, may be militarily useful, prompting the U.S. government to purchase them instead of letting them be scrapped. This, however, would not affect our estimate of the number of vessels that would leave the privately owned U.S. fleet. However, we also did not include in our analysis the number of vessels likely to leave the fleet regardless of the status of cargo preference. The fleet of privately owned, ocean-going vessels has declined 16 percent (by DWTs) since 1988. Additionally, nearly one-quarter of the 165 vessels engaged in international trade have already exceeded their statutory life expectancy, and another quarter will do so within 5 years. The statutory life of a vessel is 25 years, except for tankers, whose expectancy is 20 years. Four general types of vessels—general cargo ships, bulk carriers, tankers, and intermodal ships—would be affected if cargo preference laws and policy were eliminated. General cargo ships are traditional multipurpose freighters that carry nonuniform items packaged as single parcels or assembled together on pallet boards. Cargo is typically lifted on or off the general cargo vessels using wire or rope slings and a crane. Bulk carriers are ships that carry homogenous, unpacked cargo, usually in shipload lots. If they are designed to carry dry bulk commodities such as grain and ore, they are classified as bulk carriers. 
If they are designed to carry liquid commodities such as oil and petroleum products, they are classified as tankers. Some tankers are specially designed to carry liquefied natural gas (LNG) and are called LNG tankers. Intermodal ships include container ships and roll-on/roll-off ships known as RO/ROs. Container ships are designed to carry cargo in standard-size preloaded containers that permit rapid loading and unloading and efficient transportation of cargo to and from the port area. RO/ROs are designed to permit trucks, trailers, and other vehicles carrying cargo to drive on and off. MARAD reported that 18 general cargo vessels with a total of 282,000 DWTs are employed in international trade. We believe that about 81 percent of these vessels, by tonnage, would leave the U.S. fleet if cargo preference laws and policy were eliminated; most would be scrapped. The vessels that would leave are steam-powered and unable to compete effectively with the more efficiently configured intermodal carriers. Additionally, many of these vessels rely on preference cargo for a substantial portion of their business. The vessels that would remain have specialized uses and/or are of a more modern design. MARAD reported that 17 bulk carriers with a total of 842,000 DWTs are employed in international trade. We believe that between 90 and 96 percent of these vessels, by tonnage, would leave the U.S. active fleet if cargo preference laws and policy were eliminated; many would remain U.S.-owned but foreign-flagged. Most of these vessels are ineligible to enter domestic trade because they were built in foreign shipyards or built with construction-differential subsidies. Many are relatively new (built in the mid-1980s) diesel-powered vessels that could be competitive in international trade if they reduced their operating costs by reflagging. MARAD reported 45 tankers employed in international trade with a total of 3,384,000 DWTs. 
We believe between 38 and 45 percent of these vessels, by tonnage, would leave the U.S. active fleet if cargo preference laws and policy were eliminated. Generally, steam-powered tankers would likely be scrapped because they are not competitive in international trade and are either ineligible to enter the domestic trade or would not find sufficient business in the domestic trade to remain in operation. However, there are several notable exceptions to potential scrapping. We believe the LNG tankers would remain U.S.-flagged because they do not receive ODS subsidies and do not carry preference cargo. Also, some of the double-bottom tankers may be competitive in the domestic trade because the Oil Pollution Act of 1990 (P.L. 101-380) phases out these tankers at a slower rate than tankers with a single bottom. However, the double-bottom tankers will likely displace tankers of similar size that are already operating in the domestic trade. Additionally, we believe it likely that the diesel-powered tankers that operate without ODS subsidies and generally do not carry preference cargo would be unaffected by changes to cargo preference laws and would remain U.S.-flagged. Also, several tankers would continue operating for international political reasons having to do with the Persian Gulf War. Finally, several diesel-powered tankers are or will soon be eligible to enter the domestic trade but could be competitive internationally; consequently, we are unsure of what would happen with them. MARAD reported that 85 intermodal vessels with a total of 2,804,000 DWTs are employed in international trade. If cargo preference laws and policy were eliminated, we believe about 77 to 86 percent of the vessels, by tonnage, would leave the U.S. active fleet, many remaining U.S.-owned but foreign-flagged. 
We believe that many of the steam-powered intermodal vessels not already engaged in domestic trade would be scrapped because they would not be competitive in international trade and the domestic trade has no room for substantial additional tonnage, although we cannot be certain that none would enter the domestic trade. Most of the diesel-powered intermodal vessels are foreign built and would be competitive in the international trade. We believe that many of these vessels would reflag and that most of those remaining U.S.-flagged would do so because of international political considerations. If cargo preference laws and policy were eliminated, we estimate that up to about 6,000 U.S. mariners would lose their jobs aboard U.S.-flag ships. This is approximately 71 percent of the 8,500 mariners employed on the 165 U.S.-flag vessels that MARAD reported are engaged in international trade. Our estimate of the impact on the maritime industry resulting from the elimination of cargo preference laws and policy stems from our analysis of the number of vessels we believe would have valid reasons to either reflag or leave service entirely if cargo preference laws were eliminated. On the basis of the size of the crews on the vessels we believe would leave the U.S. fleet, we estimated the number of seafaring jobs that would be lost. On the basis of the information provided to us by MARAD, the vessels associated with the 4.4 million to 5 million DWTs we believe might leave the fleet if cargo preference laws were eliminated support 2,600 to 3,000 billets (crew positions aboard a vessel). Since most mariners work aboard ship for 6 months of the year, and taking into account sick leave and other reasons for their not working full time, we estimate that 2.1 mariners are employed for every billet. We do not anticipate that the elimination of cargo preference laws and policy will significantly affect the number of vessels built in U.S. shipyards. The workload at U.S. 
shipyards is dominated by federal contracts. Fourteen privately owned U.S. shipyards are engaged in or seeking contracts for the construction of ocean-going or Great Lakes vessels of over 1,000 gross tons. Since 1983, 90 percent of the production workers employed by these shipyards, on average, were engaged in Navy or Coast Guard ship construction or repair. Additionally, the number and deadweight tonnage of private ocean-going merchant vessels built in U.S. shipyards have declined dramatically over the last 20 years. (See fig. 3.6.) U.S. shipyards delivered only one privately owned ocean-going merchant vessel of 1,000 gross tons or larger in fiscal years 1988-93. We did not evaluate the effect of eliminating cargo preference laws and policy on the amount of maintenance and repair performed at U.S. shipyards. However, to the extent that U.S.-flag vessels reflag or are scrapped, less maintenance and repair work will be done at U.S. shipyards because foreign-flag vessels have less incentive to use U.S. shipyards. The effect of cargo preference laws on the U.S. merchant marine industry is mixed. Cargo preference laws appear to have had little impact on maintaining the share of U.S. oceanborne cargo carried aboard U.S.-flag vessels, since most internationally shipped cargo is owned by private citizens, not subject to cargo preference laws, and thus shipped on less expensive foreign-flag vessels. Nevertheless, the U.S. fleet is dependent on preference cargo for a significant portion of the international cargo it carries. While we cannot estimate with precision the effects that eliminating cargo preference laws would have on the merchant marine industry, we believe the equivalent of up to two-thirds of the U.S.-flag vessels engaged in international trade, by tonnage, would leave the U.S. fleet. This would likely result in the elimination of about 6,000 U.S. shipboard jobs but would have a minimal impact on the U.S. shipbuilding industry. 
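The job-loss estimate above follows directly from the billet figures and the employment ratio stated in the analysis. As a rough arithmetic sketch (the function name is ours, for illustration only; the figures are the report's estimates):

```python
# Rough check of the shipboard job-loss estimate. The figures (2,600-3,000
# billets; 2.1 mariners employed per billet; 8,500 mariners in international
# trade) come from the report; the function name and rounding are illustrative.
def mariners_employed(billets, mariners_per_billet=2.1):
    """Convert billets (crew positions aboard vessels) to employed mariners."""
    return billets * mariners_per_billet

low = mariners_employed(2_600)   # about 5,460 mariners
high = mariners_employed(3_000)  # about 6,300 mariners
print(f"Estimated shipboard job losses: {low:,.0f} to {high:,.0f}")

# 6,000 of the 8,500 mariners in international trade is about 71 percent,
# matching the report's stated share.
print(f"Share of mariners affected: {6_000 / 8_500:.0%}")
```

This range brackets the "up to about 6,000" jobs cited in the findings.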
We discussed the contents of this report with the Chief, Transportation Division of the Office of Procurement, AID; cognizant officials of the Office of the Under Secretary of Defense for Acquisition and Technology, DOD; the Director, Operations and Readiness Division, Strategic Petroleum Reserve, DOE; and the Deputy Administrator for Commodity Operations, USDA. These agency officials generally agreed with the facts, respective to their agencies, contained in the report and provided only minor clarifications where appropriate. Also, we met with the Deputy Administrator for Inland Waterways and Great Lakes, MARAD, and other MARAD officials, who generally agreed with the facts respective to their agency but believed that DOD does not have the data necessary to accurately estimate its cargo preference costs. However, DOD’s cargo preference cost estimates are the official figures that DOD reported to the Office of Management and Budget or that were published in the federal budget and are the best estimates available. (See ch. 2 for how the estimates were derived.) We clarified this and other points raised by these officials, where appropriate. As requested, we did not obtain written agency comments on a draft of this report.
Pursuant to a congressional request, GAO provided information on cargo preference laws, focusing on their effect on: (1) federal transportation costs; and (2) the U.S. merchant marine industry. GAO found that: (1) cargo preference laws have increased federal agencies' transportation costs by an average of $578 million per year; (2) cargo preference laws increase agencies' transportation costs because U.S.-flag vessels generally charge more than foreign vessels to carry cargo; (3) although some agencies paid an estimated $3.5 billion in additional transportation costs to ship cargo on U.S.-flag vessels, DOD estimated that $659 million of those costs were related to the Persian Gulf War; (4) the effect of cargo preference laws on the U.S. merchant marine industry has been mixed; (5) the share of oceanborne cargo carried aboard U.S.-flag vessels has declined because most internationally shipped cargo is exempt from cargo preference laws; (6) in 1992, foreign-flag vessels carried about 96 percent of international cargo; and (7) although eliminating cargo preference laws could cause two-thirds of the U.S.-flag vessels to leave the U.S. fleet and result in the elimination of about 6,000 U.S. shipboard jobs, it would have a minimal impact on the U.S. shipbuilding industry.
As we have reported previously, EPA estimates that across the federal government 10,000 computers are disposed of each week. Once these used electronics reach the end of their original useful lives, federal agencies have several options for disposing of them. Agencies generally are to donate their used electronics to schools or other nonprofit educational institutions; exchange them with other federal, state, or local agencies; sometimes trade them with vendors to offset the costs of new equipment; sell them—generally through GSA’s surplus property program, which sells surplus federal government equipment, including used federal electronics, at public auctions; or give them to a recycler. Federal agencies, however, are not required to track the ultimate destination of their donated or recycled used electronic products. Instead, agency officials generally consider this to be the recipient organization’s responsibility. Consequently, federal agencies often have little assurance that their used electronics are ultimately disposed of in an environmentally responsible manner. In our prior work, we found that little information exists, for example, on whether obsolete electronic products are reused, stored, or disposed of in landfills. If these products are discarded domestically with common trash, a number of adverse environmental impacts may result, including the potential for harmful substances such as cadmium, lead, and mercury to enter the environment. If donated or recycled, these products may eventually be irresponsibly exported to countries without modern landfills and with waste management systems that are less protective of human health and the environment than those in the United States. For example, in our prior work we found that some U.S. electronics recyclers—including ones that publicly tout their exemplary environmental practices—were apparently willing to circumvent U.S. 
hazardous waste export laws and export certain regulated used electronic products to developing countries. The federal government’s approach to ensuring environmentally responsible management of used electronics has relied heavily on EPA’s FEC initiative, which, among other things, encourages federal facilities and agencies to manage used electronics in an environmentally safe way. In addition, executive orders were issued to strengthen federal agencies’ overall environmental management practices, including environmentally sound management of federal electronic products. The Office of Management and Budget (OMB), the White House Council on Environmental Quality (CEQ), and the Office of the Federal Environmental Executive (OFEE) each play important roles in providing leadership, oversight, and guidance to assist federal agencies with implementing the requirements of these executive orders. More recently, an interagency task force issued the July 2011 National Strategy for Electronics Stewardship, which is intended to lay the groundwork for enhancing the federal government’s management of used electronics. Over the past decade, the executive branch has undertaken several initiatives to improve federal agencies’ management of used electronics. Specifically, (1) EPA has led or coordinated several improvement initiatives and issued guidance aimed at improving the management of used federal electronic products, (2) GSA has issued personal property disposal guidance and instituted new requirements for electronics recyclers it has contracted with to dispose of federal electronic products, (3) the President has issued executive orders that established goals for improving the management of used federal electronics, and (4) an interagency task force issued the July 2011 National Strategy for Electronics Stewardship, which is intended to lay the groundwork for enhancing the federal government’s management of used electronics. 
EPA has led or coordinated several key improvement initiatives to assist agencies with the management of used federal electronics, including the FEC, the Federal Electronics Stewardship Working Group, and the establishment of electronics recycler standards for use in certification programs. Federal Electronics Challenge. In 2003, EPA, along with several other agencies, piloted the FEC. The FEC is a voluntary partnership program that encourages federal facilities and agencies to purchase environmentally friendly electronic products, reduce the impacts of these products during their use, and manage used electronics in an environmentally safe way. To participate, executive branch agencies or their facilities must register and sign an agency pledge to become an agency or facility FEC partner, or both. In general, agency partners are responsible for supporting their facilities’ efforts but do not have specific reporting requirements. Facility partners are required to submit a baseline survey of their electronics stewardship activities when they join the program. The survey is to include, among other things, a description of (1) what the entity does with electronic products that are no longer used; (2) which electronics recycling services it uses; and (3) what, if any, measures the entity has taken to ensure that the electronic products were recycled in an environmentally sound manner. Facility partners are also expected to report progress annually and apply for recognition through FEC awards. 
FEC guidance directs participants to provide recipients of donated electronics with instructions on how to have the electronics recycled responsibly and how to verify that responsible recycling occurs—procedures known as “downstream auditing.” When donating used electronics, FEC instructs agencies and facilities to ensure that recipients contact local or state environmental or solid waste agencies to obtain a database of vendors who recycle used electronics once the equipment is no longer useful to the recipient organization. FEC also recommends that participating agencies and facilities instruct recipients to avoid arrangements with recyclers that are unable or unwilling to share references and cannot explain the final destination of the used electronics they collect. When recycling electronics, participants are to determine how much electronic equipment the recyclers actually recycle compared with the amount they sell to other parties. In addition, FEC instructs participants to physically inspect a potential recycler’s facilities. Used electronics in trash containers, for example, may indicate that the facility is not recycling them, and the presence of shipping containers may indicate that the facility exports them. To assist FEC partners, “FEC champions” are available to help regional federal facilities with their electronics management programs. FEC champions are EPA representatives who are selected based on geographic representation. Champions help federal facilities become FEC facility partners; access resources for managing electronic products, including FEC program information, fact sheets, and limited technical assistance; and receive recognition for improving electronics management programs. The Federal Electronics Stewardship Working Group. This working group coordinates interagency efforts to promote federal electronics stewardship. It also acts as an advisory board for the FEC program. 
During the working group’s monthly meetings, federal agencies have the opportunity to discuss best practices for implementing the FEC and other electronics stewardship initiatives within their respective agencies. The FEC Program Manager told us the working group meetings serve as a primary mechanism to facilitate communication with agency management regarding the FEC program. Most executive agencies have at least one representative serving with the working group. Standards for certification of recyclers. EPA has worked with the recycling industry and other entities to promote partnership programs that address the environmentally sound management of used electronic products. As we reported in July 2010, EPA convened electronics manufacturers, recyclers, and other stakeholders and provided funding to develop Responsible Recycling (R2) practices, so that electronics recyclers could obtain certification to show that they are voluntarily adhering to the adopted set of best practices for environmental protection, worker health and safety, and security practices. Certification for R2 practices became available in late 2009. The R2 practices identify “focus materials” in used electronic products, such as cathode-ray tubes or items containing mercury, that warrant greater care owing to their toxicity and associated risk if managed without the appropriate safeguards. Specifically, the practices require that recyclers and each vendor in the recycling chain (1) export products and components containing certain materials only to countries that can legally accept them, (2) document the legality of such exports, and (3) ensure that the material is being safely handled throughout the recycling chain. R2 practices also establish a “reuse, recover, dispose” hierarchy along the chain of custody for material handling. These practices require recyclers to test electronics diverted for reuse, and confirm that key functions of the unit are working before it may be exported. 
Without such testing and confirmation, these used electronics must be treated as though they are going to recycling and may not be exported unless the R2 exporting provisions for recycling are satisfied. Recognizing that some clients would not want their used electronics remarketed or reused, R2 practices also require recyclers to have systems in place to ensure that all such electronics processed can be recycled, rather than recovered for reuse. EPA encourages electronics recyclers to obtain certification to either R2 practices or e-Stewards, a separate voluntary certification program. e-Stewards was initiated by the Basel Action Network in 2008, and the first e-Stewards-certified facilities were announced in early 2010. The length and cost of the e-Stewards certification process depends on a facility’s size and whether it has a documented environmental management system in place. (The authority for federal agencies to transfer research equipment, including computers, to educational institutions and nonprofit organizations was established in law in 1992. See 15 U.S.C. § 3710(i) (2011). The Computers for Learning program facilitates the transfer of excess federal computer equipment to schools and educational nonprofit organizations. The program implements Executive Order 12999, Educational Technology: Ensuring Opportunity for All Children in the Next Century, 61 Fed. Reg. 17,227 (Apr. 19, 1996).) …of the property for sale would be greater than the expected sales proceeds. More recently, GSA has instituted new requirements for electronics recyclers listed on the GSA Schedule. In February 2011, GSA began requiring proof of certification under either R2 or e-Stewards for new vendors seeking to provide recycling or disposal services for used electronic products under GSA’s environmental services schedule. 
According to GSA officials, they also identified 5 vendors, out of the 58 vendors on the schedule at that time, that were performing recycling or disposal services for used electronic products and provided these vendors with modified contract terms—making R2 or e-Stewards certification within 6 months a condition for performing these services under the GSA schedule. In January 2007, Executive Order 13423 established goals for federal agencies to improve the management of their used electronic products. Among other things, the executive order required that agency heads (1) establish and implement policies to extend the useful life of agencies’ electronic equipment and (2) ensure the agency uses environmentally sound practices with respect to the disposition of the agency’s electronic equipment that has reached the end of its useful life. Furthermore, the instructions for implementing the executive order, issued on March 28, 2007, called for each agency to develop and submit to OFEE by May 1, 2007, an electronics stewardship plan to implement electronics stewardship practices for all eligible owned or leased electronic products. Among other things, the plans were to address how agencies will ensure that all electronic products no longer needed by an agency are reused, donated, sold, or recycled using environmentally sound management practices at end of life. The implementing instructions called for agencies’ plans to comply with GSA procedures for the transfer, donation, sale, and recycling of electronic products (discussed above), as well as any applicable federal, state, and local laws and regulations; and use national standards, best management practices, or a national certification program for electronics recyclers. 
The implementing instructions for Executive Order 13423 also directed each agency and its facilities to participate in the FEC or to implement an equivalent electronics stewardship program that addresses the purchase, operation and maintenance, and end-of-life management strategies for electronic products consistent with the FEC’s recommended practices and guidelines. In October 2009, Executive Order 13514 built on the previous executive order but included slightly different goals for electronics stewardship. Executive Order 13514 calls for agencies to develop, implement, and annually update strategic sustainability performance plans to specify how they intend to achieve the goals of the order. Agencies were required to submit fiscal year 2010 plans to CEQ and OMB by June 2010. Executive Order 13514, however, did not supersede or revoke the earlier executive order, and that order’s goals and requirements remain in effect. In July 2011, an interagency task force, co-chaired by CEQ, EPA, and GSA, issued the National Strategy for Electronics Stewardship, which describes goals, action items, and projects that are intended to lay the groundwork for enhancing the federal government’s management of used electronic products, among other things. The strategy assigns primary responsibility for overseeing or carrying out most of the projects to either EPA or GSA. Most of the projects are scheduled for completion from summer 2011 through spring 2013. 
Among other things, the strategy directs GSA to issue, through interagency collaboration and with public input, a comprehensive and governmentwide policy on used federal electronic products that maximizes reuse, clears data and information stored on used equipment, and ensures that all federal electronic products are processed by certified recyclers; and revised reporting guidance to improve federal agencies’ tracking of used federal electronic products throughout their life cycle and to post comprehensive data on Data.gov and other publicly accessible websites. The strategy also recommends that the federal government require and enable recipients of used federal equipment that has been sold, transferred, or donated for reuse to use certified recyclers and follow other environmentally sound practices to the greatest extent possible; and encourage electronics manufacturers to expand their product take-back programs, and use certified recyclers as a minimum standard in those programs by expanding the use of manufacturer take-back agreements in federal electronics purchase, rental, and service contracts. According to our review of agency documents and discussions with agency officials, federal agencies have made some progress to improve their management of used electronic products, as measured by greater participation in the FEC and an increase in certified electronics recyclers, but opportunities exist to expand their efforts. In addition, challenges remain that may impede agencies’ progress toward further improving their management of used federal electronics, including in the tracking and reporting of data on the disposition of used federal electronics, in clarifying agencies’ responsibility for used electronics sold through auctions, and in clarifying definitions for key terms and reconciling differences between the executive orders. 
Since we first reported on the FEC in November 2005, participation has grown from 12 agencies and 61 individual facilities to 19 agencies and 253 individual facilities, as of September 2011. However, participation still represents only about one-third of the federal workforce and, in some cases, participation means that an agency has identified its current practices for managing electronic products and set goals to improve them but has not reported on progress toward achieving these goals as required. Specifically, only a little more than half of the agencies and facilities that were registered as FEC partners submitted an annual accomplishment report in 2010 to demonstrate the agency or facility’s progress in electronics stewardship; these reports are a key component of actively participating as a partner. Because FEC participation is voluntary, EPA officials said EPA has no authority to require agencies to report on their progress. As a result, the extent to which agencies that do not report progress are reaching their goals is unknown. However, the FEC program manager told us that with a recent change in policy, FEC facility partners that do not submit their fiscal year 2011 annual reporting form by January 31, 2012, will be considered inactive. An FEC official stated that despite increased efforts to market the program, some agencies find the FEC’s reporting requirements to be time-consuming. For the five agencies we reviewed, participation in FEC varied. Specifically: DOD participates in the FEC as an agency partner, but the majority of its installations or facilities do not participate. According to EPA data, 16 of DOD’s approximately 5,000 installations participate in the FEC. DOD officials told us that they are conducting outreach to encourage installations to participate but that some installations may not participate because officials believe that the registration process is too rigorous and burdensome. 
NASA centers are allowed to participate in the FEC, but they are not required to do so because other agency initiatives accomplish the same goals, according to agency officials. Three of NASA’s 10 centers participate in the FEC. HUD does not participate in the FEC. We found that agency officials did not understand the FEC participation requirements. HUD’s electronics stewardship plan states that HUD participates in the FEC, but an EPA official, who is responsible for the FEC program, told us that HUD never registered to become a partner—which involves submitting a baseline survey of the agency’s electronics stewardship activities. In our discussions with HUD officials, we found that they were not aware of the FEC registration or reporting requirements and continued to believe that the agency was participating. DOE officials promote FEC participation, submit annual accomplishment reports, and actively participate in the FEC awards program. According to agency officials, over a 6-year period, 23 DOE facilities have won FEC awards, with many winning multiple times. All but two DOE facilities participate. Education participates in the FEC as an agency and facility partner. However, because it centrally manages the purchasing and disposition of electronics, Education submits annual accomplishment reports for the agency as a whole. For those agencies or facilities that actively participate in the FEC, participation can provide federal officials with the information and resources needed to provide greater assurance that their used electronics are disposed of in an environmentally responsible manner, according to EPA documents. For the five agencies we reviewed, officials at agencies or facilities that actively participated in the FEC said that the FEC provided invaluable support. 
For example, according to DOD officials at one installation, the information sharing that is facilitated through the FEC is one of the biggest benefits of participation—when faced with a problem, the FEC can provide information from other agencies that have faced comparable problems. Similarly, Education officials told us that membership in Federal Electronics Stewardship Working Group was very helpful. In addition, DOE officials said that they have had much success with the FEC program and that the FEC awards program has motivated many DOE facilities to participate in electronics recycling. Since the R2 and e-Stewards certification processes were made available in 2009 and 2010, respectively, the number of certified recyclers in the United States has grown greatly. From September 2010 to September 2011, the number of electronics recycling facilities certified to the R2 standard increased from 15 to 122 and the number of facilities certified to the e-Stewards standard grew from 6 to 40. Figure 1 shows the locations of the electronics recycling facilities in the United States that have obtained third-party certification as of September 30, 2011. The increased number of certified recyclers should make it easier for agencies to locate recyclers that will, among other things, ensure that any harmful materials are being safely handled throughout the recycling chain. For the five agencies we reviewed, almost no certified recyclers were used, and in most cases agency officials either misidentified a recycler’s certification status or indicated that they did not know the recycler’s certification status. 
According to our analysis of the disposition information these agencies provided, of the 25 electronics recycling companies that the five agencies reported using in fiscal year 2010, only one was certified by either R2 or e-Stewards for all locations where the agency used it as of September 30, 2010, and agencies were correct in identifying whether or not their recyclers were certified in only four cases. The confusion regarding electronics recyclers’ certification status could stem in part from the absence of clear guidance. The implementing instructions for Executive Order 13423 direct agencies to use national standards, best management practices, or a national certification program for recyclers. To date, however, none of the oversight agencies—OMB, CEQ, and OFEE—have provided agencies with clear guidance specifying whether R2 or e-Stewards, the two existing certification programs, qualify as “national certification programs for recyclers” under the implementing instructions. In an effort to address this issue, according to the National Strategy for Electronics Stewardship, EPA and GSA are to take steps to address the need for well-defined requirements for those certification programs that federal agencies will rely upon. Specifically, EPA, in consultation with GSA and other relevant agencies, is to develop a baseline set of electronics recycling criteria to ensure, among other things, that all downstream handlers of used electronics manage these materials in a way that protects the environment, public health, and worker safety. EPA is also to initiate a study of the current electronics certification programs to evaluate the strength of their audits of downstream facilities. 
According to the national strategy, as part of its effort to establish a comprehensive and governmentwide policy on used federal electronic products, GSA will consider the baseline set of criteria, the results of the study of current certification programs, and other requirements and considerations in determining which certification programs satisfy the governmentwide requirement to use certified recyclers. Although the strategy calls for GSA to, with public input, issue a revised policy and propose changes to the FMR, it is unclear whether GSA is on track to do this by February 2012, given that it has not issued a public draft or conducted a public comment or other public input process. Similarly, it is unclear when, if, or how GSA's revised policy component regarding certified recyclers will be incorporated into the FMR. Moreover, it is unclear what mechanism GSA will use to issue the revised policy prior to its inclusion in the FMR, as the policy may not be in conformance with the current FMR. In addition, the national strategy does not specify if or how EPA and GSA will routinely update other federal agencies on the status of their efforts to implement the national strategy's recommendations. Currently, due to challenges associated with the tracking and reporting of used federal electronics, the ultimate disposition of these electronics is unknown—making it difficult to measure the effectiveness of Executive Orders 13423 and 13514, which were aimed at improving the management of used federal electronics and ensuring the proper disposal of electronics that have reached the end of their useful life. The National Strategy for Electronics Stewardship acknowledges the challenges associated with tracking and reporting the disposition of used federal electronics and proposes some solutions for improving the data that agencies report to GSA. 
Under the national strategy, GSA is to streamline and standardize reporting through the annual Report of Non-Federal Recipients to gather data on the type, quantity, and intended use of electronic products leaving federal ownership, and the recipients of these products. It is unclear, however, what electronics the new reporting requirements will cover. The national strategy suggests that the annual Report of Non-Federal Recipients will be expanded to include the reporting of the disposition of electronic products to all recipients. Currently, the report includes only property donated to such nonfederal recipients as schools and state and local governments, and therefore does not include the disposition of significant quantities of electronics. If GSA intends to use this report to capture agencies' data, it is unclear how the report will improve the quality of the limited data GSA currently receives. GSA officials told us that while the agency currently collects disposition data from agencies through its GSAXcess database, GSAXcess is not an accountable property system; therefore, data validation is limited. According to a GSA bulletin, a number of executive agencies have not submitted reports to GSA on exchange/sale transactions and property furnished to nonfederal recipients, as currently required, or have not included all of the required information—thus presenting data challenges as GSA seeks to carry out its oversight and management responsibilities. The data challenges are further complicated by the fact that individual agency procedures for tracking electronics are not consistent. Agencies typically record the acquisition of electronics as individual units, such as desktop or laptop computers, and continue to track these electronics as individual units while in use at the agency. However, when agencies dispose of these same electronics, they may use a different method for tracking them. 
For example, rather than tracking the disposition of used electronic products as individual units, agencies may aggregate a number of similar items into a single line item or they may report them by weight. In addition, a single agency may use different metrics for different types of disposition. For example, DLA, a DOD acquisition and disposition agency, tracks electronic products sent to recyclers in pounds and electronic products disposed of through other means—such as donated to schools or transferred to other agencies—by individual unit. Because some electronics are tracked and reported as line items and some are recorded in pounds, it is not possible to compare the extent to which the agency relies on one disposition method over another. For the five agencies we reviewed, data provided to us on the disposition of electronic products were similarly inconsistent, which hampered our efforts to accurately assess the extent to which electronic products procured by these federal agencies were disposed of in an environmentally sound manner. GSA’s personal property disposition procedures do not clarify agency responsibilities for tracking or placing contract conditions on the ultimate disposition of used electronics if they are sold through auctions. As we reported in August 2008, some electronics recyclers in the United States—including those that have purchased government electronics sold through auction—appeared willing to export regulated electronics illegally. We identified two auction disposal methods—those used by GSA and by DOD—that could result in used federal electronics being handled in an environmentally risky manner. Specifically, under the GSA auction process, registered participants can bid electronically on items within specific time frames. To participate, potential buyers register with GSA by providing information about themselves, such as name, address, and payment information, before they can bid on items, according to GSA officials. 
However, GSA officials told us that they do not evaluate the information obtained from buyers to determine whether they are brokers or resellers who might potentially export these used products to other countries where they may not be handled in an environmentally sound or safe manner. Moreover, GSA officials stated that the agency does not have enforcement authority after these items are sold to the general public. They told us that if GSA is made aware of any inappropriate activity or violations of the terms of the sale, it will refer the information to the GSA Inspector General for further investigation. According to agency documentation, GSA's online auction procedures include standard sales terms and conditions, special security notifications, and export control clauses. However, none of the terms, conditions, or clauses included in GSA's auction procedures are aimed at ensuring that (1) electronics containing certain materials are exported only to countries that can legally accept them, (2) recyclers document the legality of such exports, and (3) the material is being safely handled throughout the recycling chain. Unlike GSA, DOD is not directly involved in the auction process but instead sells its used electronics to a private company, which then resells the used electronics through its web-based auction process. According to DOD officials, DOD's responsibility for tracking its used electronics ends once the electronics pass to the contractor—Government Liquidation. DOD officials said that Government Liquidation has its own terms and conditions that bidders must adhere to once they purchase the used electronics. As with GSA auctions, the terms and conditions included in the Government Liquidation auctions are not aimed at ensuring that used federal electronics are exported only to countries that can legally accept them. 
In our review of these auction websites, we found that the overwhelming majority of used electronic products are sold in bulk, which would indicate that they are being sold to brokers or resellers, not individual consumers. The National Strategy for Electronics Stewardship seeks to address the problems associated with used federal electronics sold through auction. According to the strategy, the electronics stewardship policy that GSA is to establish will prohibit the sale of nonfunctional electronics through public auction except to third-party certified recyclers and refurbishers. Functional electronics are to be directed through the existing hierarchy of transfer, donation, and sale. It is unclear, however, how this policy will work in practice. Currently, agencies sell electronics in mixed lots of potentially functional and nonfunctional equipment. For example, officials at one agency said that it was not cost effective to test items to ensure that they are functional; therefore, items are sold through GSA “as is” with no implied warranty. These agency officials said that they combine items in sales lots that will bring the most return to the federal government. In addition, we found that electronics listed on the Government Liquidation and GSA auction websites are frequently marketed as “tested to power-up only,” or with disclaimers such as “condition of the property is not warranted.” Under the national strategy, it is unclear whether electronics characterized in this way would qualify as “functional.” In addition, the national strategy does not provide clear and detailed criteria to assist federal agencies in bundling functional and nonfunctional electronics for sale exclusively to certified recyclers or refurbishers, distinguishing between functional and nonfunctional electronics by conducting specific tests, and labeling electronic products. 
Moreover, if federal agencies sell used functional electronic products through auctions, neither the agency nor the auction entities are required to impose conditions or to perform due diligence by conducting audits to determine whether all downstream reusers of such products follow environmentally sound end-of-life practices. In contrast, the European Union has detailed guidance for determining the functionality of electrical and electronic equipment, as part of distinguishing whether the equipment is considered waste in the context of import-export rules. The guidance states that the tests required to determine functionality depend on the type of electronics, but generally, completion of a visual inspection without testing functionality is unlikely to be sufficient for most types of electronics; it also states that a functionality test of the key functions is sufficient. The guidance also identifies defects that materially affect functionality and would therefore cause an item to be considered "waste" if, for example, the equipment did not turn on, perform internal set-up routines, or conduct self-checks. As discussed previously, R2 practices establish a similar "reuse, recover, dispose" hierarchy along the chain of custody for material handling and require recyclers to test electronics diverted for reuse and confirm that key functions of the unit are working before it may be exported. We found that key terms concerning electronics have not been defined and that differences between the executive orders have not been clarified. In particular: Key terms not defined. Key terms such as "electronic product" and "environmentally sound practices" are not explicitly defined in the executive orders, the guidance provided to agencies for implementing the executive orders, or the National Strategy for Electronics Stewardship. 
Consequently, each of the agencies we reviewed used its own definition of electronic products to report progress in implementing policies for electronics stewardship. For example, DOE defines electronic products as printers, desktop computers, notebook computers, and monitors; DOD, Education, HUD, and NASA use broader definitions that include servers, routers, and switches; cell phones and musical instruments; and refrigerators. Moreover, without a clear definition of what constitutes an environmentally sound practice, agencies are free to dispose of their used electronics through online auctions or other means that provide little assurance that (1) these electronics are exported only to countries that can legally accept them, (2) recyclers document the legality of such exports, and (3) the material is being safely handled throughout the recycling chain. Differences between the executive orders have not been clarified. CEQ has not issued implementing instructions regarding electronics stewardship for Executive Order 13514, which was signed in 2009, and CEQ, OMB, and OFEE have not harmonized the electronics stewardship requirements contained in executive orders 13423 and 13514. For example, under Executive Order 13423, the requirement to use environmentally sound practices applies to electronic equipment that has “reached the end of its useful life,” whereas Executive Order 13514 includes “all agency excess or surplus electronic products,” and the difference between these terms has not been clarified. In addition, the implementing instructions for Executive Order 13423 direct agencies to ensure that contracts for leased electronic equipment incorporate language that requires that at the end of the lease period, the equipment is reused, donated, sold, or recycled using environmentally sound management practices. 
This directive is not included in Executive Order 13514 nor in the guidance provided to agencies for preparing their strategic sustainability performance plan that is to be used under Executive Order 13514. Officials from these oversight agencies told us that they have informed federal agencies that electronics stewardship plans under Executive Order 13423 can be incorporated by reference into their strategic sustainability performance plans to satisfy certain requirements for Executive Order 13514. Or alternatively, strategic sustainability performance plans may be used in lieu of separate electronics stewardship plans. However, CEQ, OMB, and OFEE have not addressed differences or updated the implementing instructions for Executive Order 13423. Federal initiatives to improve the management of agencies’ used electronics—including the FEC, certification for recyclers, personal property disposal guidance, the executive orders, and the National Strategy for Electronics Stewardship—have sought to assist federal agencies in the handling of used electronic products. And progress has been made. More agencies and facilities are participating in the FEC, and a growing number of recyclers have received third-party certification. However, opportunities exist to increase the breadth and depth of agencies’ participation in the FEC and to expand the use of certified electronics recyclers. Federal agencies also face challenges that may impede their progress toward improving their management of used federal electronics. Specifically, 2 years have elapsed since Executive Order 13514 required CEQ to issue implementing instructions. In the absence of such instructions, agencies do not have definitions for key terms such as “electronic products” and “environmentally sound practices,” and the guidance for implementing the executive orders provides inconsistent information on what procedures an agency should follow when implementing environmentally sound practices. 
In addition, inconsistencies between Executive Orders 13514 and 13423 have yet to be addressed; without doing so, CEQ lacks assurance that agencies are meeting electronics stewardship requirements of both orders, given that CEQ and OMB permit agencies to comply using either an electronics stewardship plan under Executive Order 13423 or a strategic sustainability performance plan under Executive Order 13514. Furthermore, without consistent tracking and reporting of the disposition of used federal electronics, there is no mechanism to measure the effectiveness of federal policies aimed at ensuring the proper disposal of electronics that have reached the end of their useful life. The recently issued National Strategy for Electronics Stewardship seeks to advance federal agencies’ efforts to manage used electronics. However, it is unclear whether it will fully address challenges that impede environmentally sound management of used federal electronics. Furthermore, it is doubtful whether the strategy will be effective without a mechanism for routinely keeping agencies and the public apprised of its progress toward establishing a governmentwide policy on used federal electronics—particularly with respect to use of third-party national certification for electronics recyclers—so that agencies have a clear understanding of their responsibilities and other interested parties are apprised of agencies’ progress toward completing actions identified in the strategy. Currently, the strategy does not state how agencies will be kept informed of implementation efforts. 
In addition, the strategy lays out an approach for ensuring that federal agencies dispose of nonfunctional electronics in a sound manner, but it does not provide clear and detailed criteria to assist federal agencies in bundling functional and nonfunctional equipment for sale exclusively to certified recyclers and refurbishers, distinguishing between functional and nonfunctional electronics by conducting specific tests, and labeling electronic products. Finally, if federal agencies sell used functional electronic products through auctions, neither the agency nor the auction entities are required to perform due diligence by conducting audits to determine whether all downstream reusers of such products follow environmentally sound end-of-life practices. To improve federal electronics stewardship, we are making the following four recommendations. To support federal agencies' efforts to improve electronics stewardship, we recommend that the Director of the White House Council on Environmental Quality, the Director of the Office of Management and Budget, and the Administrator of the General Services Administration collaborate on developing and issuing implementing instructions for Executive Order 13514 that define key terms such as "electronic products" and "environmentally sound practices"; address inconsistencies between this executive order and Executive Order 13423; provide clear direction, as appropriate, on required agency actions under the national strategy; and require consistent information tracking and reporting on the disposition of used electronics among agencies. To provide transparency on progress toward completing the actions identified in the National Strategy for Electronics Stewardship, we recommend that the Director of the White House Council on Environmental Quality, the Administrator of EPA, and the Administrator of GSA provide quarterly status updates on a publicly accessible website. 
To ensure that electronic products procured by federal agencies are appropriately managed, we recommend that GSA include measures in its policy to ensure that all electronics sold through auction are appropriately managed once they reach the end of their useful lives. Such measures could include bundling functional and nonfunctional equipment for sale exclusively to certified recyclers, who would be responsible for determining the best use of the equipment under the “reuse, recover, dispose” hierarchy of management; or if agencies or GSA are to be responsible for screening electronics for auction and distinguishing between functional and nonfunctional equipment, providing clear and detailed criteria for doing so, such as specific testing and labeling; and ensuring that purchasers or recipients of functional electronic products sold through government auctions use certified recyclers or perform due diligence and conduct downstream auditing. We provided a draft of this report to OMB, CEQ, GSA, and EPA for review and comment. In addition, we provided DOD, DOE, Education, HUD, and NASA with excerpts of the draft report that pertained to each agency and incorporated technical comments received as appropriate. In written comments, which are reproduced in appendix II, EPA generally concurred with our recommendations. OMB, CEQ, and GSA did not provide written comments to include in our report. Instead, in e-mails received on February 1, January 19, and January 17, 2012, from the agencies’ respective liaisons, OMB, CEQ, and GSA generally concurred with our recommendations. Even with their general concurrences, in some instances, the agencies proposed alternative approaches for executing the recommendations. In the e-mail from its liaison, OMB concurred with the comments in the e-mail from CEQ’s liaison but did not provide additional comments of its own. 
In response to our recommendation that CEQ, in collaboration with OMB and GSA, issue implementing instructions for Executive Order 13514 that define key terms; require consistent information tracking and reporting; and provide clear direction on required agency actions under the national strategy, CEQ stated that it would reserve its decision regarding our recommendation until after GSA issues its comprehensive governmentwide policy on electronic stewardship. Specifically, CEQ stated that GSA’s policy would address the issues we identified with regard to unclear definitions and inconsistent tracking and reporting of electronics but was silent on how it would provide clear direction on required agency actions under the national strategy. GAO believes it is imperative for CEQ to issue implementing instructions along with GSA’s issuance of its policy. Without such instructions, agencies will lack clarity on required agency actions under the national strategy and whether adhering to the GSA policy is necessary and/or sufficient for implementing the executive order. Moreover, it remains unclear what mechanism GSA will use to issue its revised policy prior to its inclusion in the FMR, to the extent the current FMR does not conform with the new policy. Concerning this issue, GSA stated that it will publish guidance documents concurrent with proposing changes to the FMR. However, as GSA intends to issue guidance documents, which are not legally binding on agencies, as well as regulations, which are, it will be important for CEQ to issue implementing instructions that indicate which actions in the guidance documents, as well as any other actions beyond those in the FMR, are necessary to comply with the executive order. In addition, as we recommended, CEQ, EPA, and GSA agreed that they would update a publicly accessible website on the status of progress toward completing the actions identified in the National Strategy for Electronics Stewardship. 
CEQ stated that progress reporting would be accomplished by GSA and GSA agreed to provide status updates at least quarterly. However, in its written comments, EPA requested that, instead of quarterly status updates, we revise our recommendation to require status updates as significant progress is made or key milestones are met. EPA stated that due to the nature of some of the work the agencies have committed to as part of the national strategy, it may not be appropriate to report to the general public on a routine basis. We did not revise the recommendation and are not recommending such disclosure. Instead, we are recommending that the agencies provide a quarterly status update that characterizes the progress made toward achieving each action item or project. For example, one action item in the national strategy directed the Federal Electronics Stewardship Working Group to recommend to CEQ by November 18, 2011, metrics and other reporting tools to measure agencies’ progress in implementing the revised Federal Electronics Stewardship Policy. It would be helpful to have updated information on whether the working group has made its recommendation to CEQ and when CEQ will announce the new metrics and reporting tools. Currently, such information is not publicly available. In fact, as of February 8, 2012, more than 6 months after the policy and benchmarks were issued, no updates have been provided on publicly accessible websites. With regard to our recommendation that GSA include measures in its electronic stewardship policy to ensure that all electronics sold through auction are appropriately managed once they reach the end of their useful lives, in the e-mail received from its liaison, GSA noted that the agency is working toward this goal. 
Specifically, GSA stated that it is working toward including measures to (1) bundle all equipment for sale to certified recyclers, who then determine proper reuse or recycling, or (2) provide agencies with clear, detailed criteria to distinguish between functional and nonfunctional electronics and ensure that purchasers or recipients of federal electronics use certified recyclers or perform downstream auditing, while also noting that GSA has limited authority to require recipients of used federal electronics to recycle them once ownership has transferred to those recipients. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution for 30 days from the report date. At that time, we will send copies to the Secretaries of Defense, Education, Energy, and Housing and Urban Development; the Administrators of EPA, GSA, and NASA; the Director of OMB; the Chair of the White House CEQ; the Federal Environmental Executive; appropriate congressional committees; and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix III. The objectives for this report were to examine (1) key initiatives aimed at improving the management of used federal electronics and (2) improvements resulting from these initiatives and challenges that impede progress toward improving the management of used federal electronics, if any. 
To identify initiatives aimed at improving the management of used federal electronics, we reviewed guidance and other documents describing the Environmental Protection Agency (EPA) initiatives related to the Federal Electronics Challenge (FEC), the Federal Electronics Stewardship Working Group, and Responsible Recycling (R2) practices. We analyzed the requirements for electronic products contained in the applicable executive orders and implementing instructions that make up the federal policy framework; the Federal Acquisition Regulation, which governs the process through which the federal government acquires goods and services; the Federal Management Regulation (FMR), which, among other things, regulates the disposal of federal personal property, including electronics; the General Services Administration's (GSA) Personal Property Disposal Guide, which serves as an index and quick-reference guide to the personal property management provisions in the FMR; and other relevant electronics stewardship guidance. We also reviewed the July 2011 National Strategy for Electronics Stewardship. To identify improvements resulting from federal initiatives to improve management of used federal electronics and challenges that impede progress, we selected a nonprobability sample of five federal agencies—the departments of Defense (DOD), Energy (DOE), Education (Education), and Housing and Urban Development (HUD); and the National Aeronautics and Space Administration (NASA)—to examine how the federal policy framework is carried out in those agencies. We selected DOD, DOE, and NASA because they each participated to some extent in the FEC program and purchased large amounts of electronic products—ranking first, eighth, and tenth, respectively, in terms of overall federal agency information technology spending in fiscal year 2010. 
We selected Education because, according to the FEC program manager, the agency actively participates in the FEC and centrally manages its electronics procurement and disposal functions. We selected HUD because the agency was not participating in the FEC. We used FEC participation as a selection criterion because we hoped to include agencies with a range of experience with managing used electronics in an environmentally safe way. Because the selection of agencies was based on a nonprobability sample, the information we obtained is not generalizable to all federal agencies. However, because the nonprobability sample consists of a cross-section of agencies of different sizes and levels of participation in the FEC, the evaluation of these agencies provides relevant examples of different procurement and disposition methods for electronics. For these five agencies we also collected and reviewed fiscal year 2010 strategic sustainability performance plans. We also conducted semistructured interviews with officials from the Office of Management and Budget (OMB), the White House Council on Environmental Quality (CEQ), the Office of the Federal Environmental Executive (OFEE), and EPA to discuss their respective roles in assessing agency performance and managing the FEC and other federal initiatives for electronics stewardship. In some cases, we followed up the interviews with additional questions, and on two occasions, CEQ provided us with written responses to some of our questions on the roles of OMB, CEQ, and OFEE and other issues on federal electronics stewardship, such as how OMB and CEQ decide on whether an agency’s program is equivalent to the FEC. In addition, at GSA, we conducted semistructured interviews with officials on the agency’s policies and procedures for the transfer, donation, sale, and recycling of electronic products. 
To determine the extent to which agencies used various disposition methods (i.e., reuse, donation, and sale), we analyzed governmentwide GSA data from GSAXcess, Exchange Sale, and Non-Federal Recipients reports for fiscal year 2010. We designed and implemented a data collection instrument to collect agency-specific disposition data for fiscal years 2009 and 2010 from the five agencies selected for our nonprobability sample. We encountered a number of limitations in obtaining reliable data. For example, GSA officials acknowledged that GSA does not verify the data that it collects from other agencies. The five selected agencies that we collected data from also did not have consistent definitions of electronics and sometimes reported inconsistent information or used inconsistent methods of tracking the disposition of used electronics. For example, DOD tracks some items by weight and other items by line item. We attempted to resolve inconsistencies in the data through follow-up discussions with the five agencies, in which we discussed how they attempted to collect the data we requested and related challenges and limitations. Based on these conversations, we determined that the data were not sufficiently reliable for the purposes of reporting on amounts of electronics disposed of by the five agencies, and we did not use information collected in the data collection instrument on the extent to which agencies used various disposition methods. We also visited the Kennedy Space Center, in Cape Canaveral, Florida, and Defense Logistics Agency (DLA) Aviation in Richmond, Virginia, to discuss the procurement and disposition of electronic products. We selected Kennedy Space Center because it is designated as NASA's Principal Center for Recycling and Sustainable Acquisition. We selected DLA Aviation in Richmond, Virginia, because of its role in disposing of excess property received from the military services through DLA Disposition. 
We also visited a UNICOR recycling facility located in Lewisburg, Pennsylvania, as well as two private electronics recycling facilities located in Tampa, Florida. We selected these facilities because of their role in electronics recycling at federal agencies. At these facilities, we interviewed officials about the procedures involved in recycling used federal electronic products and observed the electronics recycling process to learn how electronics are safely disassembled and, in some cases, processed for reuse. To assess the extent to which the July 2011 National Strategy for Electronics Stewardship addresses any challenges that may impede participation in electronics stewardship initiatives, we examined key provisions of the strategy, such as dividing functional and nonfunctional electronics, and compared these provisions with existing policies for electronics stewardship. In response to our request for information on electronics stewardship, the FEC program's manager, officials within each of the five agencies, and seven champions for the FEC program provided information on the challenges that may affect agency participation in electronics stewardship initiatives. In addition, we interviewed officials with the R2 and e-Stewards recycler certification programs, the Electronics TakeBack Coalition, and an electronics recycler to determine the extent to which recyclers in the United States have obtained certification and to discuss their views about the capacity of certified electronics recyclers located in the United States. We conducted this performance audit from October 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Diane LoFaro, Assistant Director; Elizabeth Beardsley; Pamela Davidson; Stephanie Gaines; Deborah Ortega; Ilga Semeiks; Carol Herrnstadt Shulman; and Vasiliki Theodoropoulos contributed to this report.

Green Information Technology: Agencies Have Taken Steps to Implement Requirements, but Additional Guidance on Measuring Performance Needed. GAO-11-638. Washington, D.C.: July 28, 2011.

Data Center Consolidation: Agencies Need to Complete Inventories and Plans to Achieve Expected Savings. GAO-11-565. Washington, D.C.: July 19, 2011.

Electronic Waste: Considerations for Promoting Environmentally Sound Reuse and Recycling. GAO-10-626. Washington, D.C.: July 12, 2010.

Federal Electronics Management: Federal Agencies Could Improve Participation in EPA’s Initiatives for Environmentally Preferable Electronic Products. GAO-10-196T. Washington, D.C.: October 27, 2009.

Electronic Waste: EPA Needs to Better Control Harmful U.S. Exports through Stronger Enforcement and More Comprehensive Regulation. GAO-08-1044. Washington, D.C.: August 28, 2008.

Electronic Waste: Strengthening the Role of the Federal Government in Encouraging Recycling and Reuse. GAO-06-47. Washington, D.C.: November 10, 2005.

Electronic Waste: Observations on the Role of the Federal Government in Encouraging Recycling and Reuse. GAO-05-937T. Washington, D.C.: July 26, 2005.
The Environmental Protection Agency (EPA) estimates that across the federal government 10,000 computers are discarded each week. Once these used electronics reach the end of their original useful lives, federal agencies have several options for disposing of them. Agencies generally can donate their reusable electronics to schools; give them to a recycler; exchange them with other federal, state, or local agencies; or sell them through selected public auctions, including auctions sponsored by the General Services Administration (GSA). As the world’s largest purchaser of information technology, the U.S. government, through its disposition practices, has substantial leverage to influence domestic recycling and disposal practices. GAO was asked to examine (1) key initiatives aimed at improving the management of used federal electronics and (2) improvements resulting from these initiatives and challenges that impede progress, if any. To do this, GAO evaluated federal guidance and policy, as well as guidance and initiatives at five selected agencies. GAO selected agencies based on, among other things, the amount of electronics purchased. Over the past decade, the executive branch has taken steps to improve the management of used federal electronics. Notably, in 2003, EPA helped to pilot the Federal Electronics Challenge (FEC)—a voluntary partnership program that encourages federal facilities and agencies to purchase environmentally friendly electronic products, reduce the impacts of these products during their use, and manage used electronics in an environmentally safe way. EPA also led an effort and provided initial funding to develop third-party certification so that electronics recyclers could show that they are voluntarily adhering to an adopted set of best practices for environmental protection, worker health and safety, and security practices. 
In 2006, GSA issued its Personal Property Disposal Guide to assist agencies in understanding the hierarchy for disposing of excess personal property, including used electronic products: reutilization, donation, sale, and abandonment or destruction. In 2007 and 2009, executive orders were issued that, among other things, established improvement goals and directed agencies to develop and implement improvement plans for the management of used electronics. The Office of Management and Budget, the Council on Environmental Quality, and the Office of the Federal Environmental Executive each play important roles in providing leadership, oversight, and guidance to assist federal agencies with implementing the requirements of these executive orders. To lay the groundwork for enhancing the federal government’s management of used electronic products, an interagency task force issued the July 2011 National Strategy for Electronics Stewardship. The strategy, which describes goals, action items, and projects, assigns primary responsibility for overseeing or carrying out most of the projects to either EPA or GSA. Federal agencies have made some progress to improve their management of used electronic products, as measured by greater participation in the FEC and an increase in certified electronics recyclers, but opportunities exist to expand their efforts. For instance, agency participation in the FEC represents only about one-third of the federal workforce. GAO identified challenges with the tracking and reporting on the disposition of federal electronic equipment. For the five agencies GAO reviewed (Departments of Defense, Energy, Education, and Housing and Urban Development and the National Aeronautics and Space Administration), data provided on the disposition of electronic products were inconsistent, which hampered GAO’s efforts to accurately assess the extent to which electronic products procured by federal agencies are disposed of in an environmentally sound manner. 
Challenges associated with clarifying agencies’ responsibility for used electronics sold through auctions also remain. Currently, neither the agencies nor the auction entities are required to determine whether purchasers follow environmentally sound end-of-life practices. Not having controls over the ultimate disposition of electronics sold through these auctions creates opportunities for buyers to purchase federal electronics and export them to countries with less stringent environmental and health standards. Other challenges that may impede progress toward improving federal agencies’ management of used electronics include defining key terms such as “electronic product” and “environmentally sound practices,” as each agency uses its own definition of electronic products to report progress in implementing policies for electronics stewardship. GAO recommends, among other things, that the White House Council on Environmental Quality, the Office of Management and Budget, and GSA take actions to require consistent tracking and reporting of used electronics and ensure appropriate management of electronics sold at auction. Each agency concurred with GAO’s recommendations but, in some instances, proposed alternatives for executing the recommendations.
USAID and its partners implement a variety of conditional food aid activities through development and emergency projects, including maternal and child health care and nutrition, food-for-training, and food-for-assets activities, among others. Such activities are intended to achieve a variety of objectives. For example, maternal and child health care and nutrition activities associated with conditional food aid seek to address major health risks faced by mothers and children by providing special rations in exchange for their attendance at health-related sessions focusing on topics such as infant development. Food-for-training activities provide food in exchange for participation in, for example, agricultural training sessions intended to help recipients learn the skills necessary to increase food productivity. Food-for-assets activities provide food in exchange for participation in activities focused on constructing community assets, such as roads or irrigation systems. Table 1 lists and describes the types of conditional food aid activities implemented through Title II projects. Food for assets was one of the most commonly used types of conditional food aid in Title II development and emergency projects in fiscal years 2013 and 2014. According to WFP, implementing partners, and subject matter experts in the field of international food aid, food-for-assets activities have both advantages and disadvantages. For example, according to some experts, a major advantage of these activities is that, by design, the individuals who can benefit the most are those most likely to participate, for instance, because they may lack other employment opportunities—that is, those who most need the food are generally the most willing to perform the required work, while those who do not need the food are less motivated. 
According to implementing partners, including WFP, food-for-assets activities also create community infrastructure, such as rural roads and irrigation canals, that provides benefits to the wider community. For instance, irrigation canals can help increase farm productivity, and rural roads can provide access to markets where farmers can sell produced goods. According to implementing partners, including WFP, beneficiaries participating in such activities can learn building and maintenance skills that can also be used to help their communities become more resilient when food shortages occur. At the same time, some experts have expressed concern that food-for-assets activities can benefit those who are not among the neediest or can fail to include the neediest, such as the elderly and those who are not able-bodied. In addition, a critique by WFP questions whether the dual goal of providing food to help meet beneficiaries’ nutritional needs in the short term, while also building assets to help communities increase their resilience in the longer term, could make it difficult to accomplish either goal. According to experts and WFP officials, conditional food aid activities come with additional costs, such as the cost of purchasing concrete and other materials to build irrigation canals. These costs can reduce the partner’s ability to supply food aid. Finally, experts, implementing partners, and WFP stakeholders expressed concern that the assets created through these activities are not easily sustained over the long term. For example, in a 2014 synthesis of evaluations of food-for-assets activities implemented from 2002 through 2011, WFP reported that ongoing operations and maintenance are required to ensure that assets remain functional and useful. Additionally, the WFP evaluators found that assets might not be properly constructed or maintained if they required technical expertise or specialized equipment beyond the community’s capacity. 
USAID does not track the use of conditional food aid in Title II projects, although our comprehensive review of USAID data found that most Title II projects included conditional food aid in fiscal years 2013 and 2014. Despite the prevalence of conditional food aid activities, USAID does not systematically collect or use data on conditional food aid provided through Title II projects and, as a result, could not readily provide data on the use of these activities in USAID’s projects. Our review of available USAID data for fiscal years 2013 and 2014 found that 111 of 119 Title II development and emergency projects included conditional, as well as unconditional, food aid activities and that funding for these projects totaled $2.1 billion—87 percent of all USAID funding for Title II projects during this period. USAID and its implementing partners implemented various conditional food aid activities through these projects, including food for assets, food for training, and maternal and child health care and nutrition. However, without the ability to identify all conditional food aid activities, USAID cannot reliably oversee the projects that include them. USAID does not systematically collect data specific to conditional food aid activities in Title II development and emergency projects. As a result, it took USAID several months to identify, and provide information about, the projects that included conditional food aid activities. For example, USAID could not readily identify the types of activities that the projects included and could not provide data on the resources used for these activities. USAID lacks data specific to these activities because it does not require development project partners to report them and does not track information about these activities that WFP submits. Development projects. USAID does not require implementing partners to report on activities, beneficiaries, or financial resources applied to conditional food aid activities. 
Instead, partners are required to report data based on program elements—common categories used throughout foreign assistance projects to aggregate information for reporting purposes—such as civic participation, maternal and child health, natural resources and biodiversity, and agricultural activity. According to USAID officials, these program elements often include multiple conditional food aid activities in addition to general food distribution, training, and other activities unrelated to conditional transfers. Emergency projects. USAID does not systematically track data about the types of conditional food aid activities that WFP implements through USAID-funded Title II emergency projects, although WFP’s annual standard project reports contain this information. However, the WFP reports do not provide, and USAID does not have access to, data specific to WFP’s conditional food aid activities supported by U.S. contributions. Since the United States may be one of multiple donors for WFP’s emergency projects, USAID cannot determine the percentages of its contributions that support particular aspects of these projects. Because information about conditional food aid in Title II projects was not readily available, USAID officials spent several months gathering and revising the data we requested to determine (1) which Title II development and emergency projects contained conditional food aid activities in fiscal years 2013 and 2014, (2) how much money USAID contributed to these projects, (3) how many beneficiaries participated in each project, and (4) what quantities of commodities USAID provided for these projects. Despite these limitations, we were able to estimate the beneficiaries and metric tonnage associated with Title II development awards that included conditional food aid. We gathered project-level data on beneficiaries since USAID lacked data on the beneficiaries of U.S. conditional food aid activities. 
In addition, we collected data on food used for general emergency food distribution, as USAID did not have data about the number of metric tons of food donated by the United States that was distributed specifically through conditional food aid activities. Finally, we gathered data on food that was shipped from the United States, purchased locally, or otherwise purchased, since USAID lacked information about the metric tons of food distributed by emergency programs for conditional food aid. According to chapter 203 of USAID’s Automated Directives System (ADS), USAID operating units must strive to continuously learn and improve their approach to achieving results in order to meet development goals. The ADS states that evaluation is the systematic collection and analysis of information as a basis for judgments to improve programs’ effectiveness, to inform decisions about current and future programming, or both. The ADS also states that the purpose of strong evaluation and performance monitoring practices is to apply learning gained from evidence and analysis. Without tracking the use of conditional food aid, USAID cannot identify the scope of conditional food aid activities implemented under Title II. Moreover, USAID cannot readily identify Title II projects that include conditional food aid activities or report the dollars awarded for these activities, the number of beneficiaries served, or the metric tons of commodities used. Additionally, without the ability to collect information about the resources being used to implement conditional food aid activities, USAID cannot reliably monitor or evaluate these activities to learn systematically from their use. 
Although USAID was unable to provide data about the amounts of Title II funding that were used for conditional food aid activities, our comprehensive review of available USAID data found that in fiscal years 2013 and 2014, 98 percent of USAID-funded Title II development projects and 88 percent of Title II emergency projects included these activities. USAID awarded a total of $2.4 billion in Title II funds, including $2.1 billion for projects that included conditional food aid activities. Table 2 shows the countries where USAID-funded development and emergency projects included conditional food aid activities in fiscal years 2013 and 2014. Our analysis showed that the conditional food aid activities implemented in fiscal years 2013 and 2014 included six types of activities—food for assets, maternal and child health care and nutrition, school feeding, food for training, take-home rations, and food for education. Of these activity types, food for assets was the most prevalent for development and emergency projects in aggregate, implemented in 87 of 119 projects (73.1 percent) (see fig. 1). In development projects, food for assets and maternal and child health care and nutrition activities were equally prevalent, followed by food for training. In emergency projects, food-for-assets activities were most prevalent, followed by school feeding and food for training, respectively. Moreover, partners implemented some food-for-assets activities in conjunction with other conditional food aid activity types, such as maternal and child health care and nutrition activities, to improve a community’s food security. For example, during our fieldwork in Guatemala, we observed the implementation of a Preventing Malnutrition in Children under 2 Years of Age activity that provided fortified rations to participants and assisted the community in developing gardens and learning animal husbandry techniques to promote egg production. 
The implementing partner also provided cooking demonstrations to teach mothers how to prepare food for their young children using the fortified rations, vegetables from the garden, and eggs. In the same community, another partner was implementing a food-for-assets activity that provided food in exchange for beneficiaries’ participation in community councils and other community-building activities. Implementing partners used food-for-assets activities to construct a variety of communal assets. During our fieldwork in Ethiopia and Djibouti, we observed examples of such assets, including small-scale dams and irrigation canals, rural access roads, and a school facility, constructed through food-for-assets activities (see fig. 2). For more information about award amounts, beneficiaries, and metric tonnage, see app. III. Implementing partners of Title II development projects reported considering a number of factors, as well as experiencing challenges, in designing food-for-assets activities. For example, partners reported considering stakeholder input and the availability of technical expertise in designing their food-for-assets activities. Partners also identified a number of challenges to designing these activities, such as an inability to serve all of the most food-insecure people in a region and determining a plan for community maintenance and use of the assets after the project has ended. Implementing partners reported considering multiple factors when designing food-for-assets activities for Title II development projects. To identify these factors, we asked 10 partners that implemented 14 projects with food-for-assets activities in fiscal year 2014 to respond to a checklist of potential factors; we also asked the partners to identify during interviews the factors they considered most important (see fig. 3). All of the implementing partners indicated that they had considered some form of stakeholder input. 
As shown in figure 3, all of the partners also identified the availability of technical expertise as a factor that they considered when designing food-for-assets activities for the 14 projects we reviewed. Two of these partners explained that the availability of expertise in the local market and in their organizations to oversee the technical design and implementation of assets is among the most important factors that they consider when designing food-for-assets activities. Specifically, in Ethiopia, a partner and its subawardee told us that they had developed a construction plan to secure cement, sand, and stone for a dam and irrigation canal to be constructed through a food-for-assets activity. The partner spent 4 months training beneficiaries in construction, irrigation maintenance, and water management and employed a full-time foreman at the construction site to oversee construction. As a result, according to the partner, an engineer estimated that the structure would last 15 to 25 years. In contrast, a partner implementing a project in Zimbabwe told us that it had tried to recruit skilled laborers for food-for-assets activities by providing double food rations but, when this effort proved unsuccessful, had to adjust its budget and project design to reflect skilled labor as an additional cost. In 12 of the 14 development projects we reviewed, partners reported working with the local community by incorporating beneficiary and community leader input when designing food-for-assets activities. While the type of stakeholder input varied across the projects we reviewed, 7 partners noted that community buy-in is one of the most important factors in the success of food-for-assets activities; some also noted that communities selected the communal assets that they viewed as high priority. 
For example, partner officials implementing a project in Ethiopia stated that community needs are one of the factors that they consider most important when selecting food-for-assets activities. According to partner officials, after their project was approved, they began working directly with villages to identify potential food-for-assets activities. Officials from another implementing partner explained that seeking community input when designing food-for-assets activities is important, because community members are more likely to maintain assets that the community sees as priorities. Implementing partners reported that various challenges affected the design of food-for-assets activities in their Title II development projects. We asked 10 partners that implemented 14 development projects to respond to a checklist of potential challenges, as well as to identify the challenges they considered most important during interviews. Figure 4 shows the challenges that partners identified as affecting food-for-assets activity design. The challenge that the partners most frequently cited as affecting the design of food-for-assets activities was the inability to serve all of the most food-insecure people in a region because of a lack of capacity to operate in the region, government restrictions, or insecurity. Partners citing this challenge reported varying effects on their projects. For example, according to a partner implementing a project in the Democratic Republic of the Congo, ongoing armed conflict affected the design of food-for-assets activities in that, because of security concerns, beneficiaries could not travel away from their homes or at night to work on assets. Officials of this implementing partner cited this as one of the most challenging factors they experienced in designing food-for-assets activities. 
According to officials implementing a program in Ethiopia, the inability to serve all of the region’s most food-insecure population because of government restrictions was one of the most challenging factors they experienced. These officials noted that the Ethiopian government had determined the number of beneficiaries in each district almost 10 years ago, resulting in the exclusion of many people who are newly eligible to participate and also limiting ration size, because there was no mechanism to increase rations when children were born and family size increased. Ensuring the quality of the assets created through food-for-assets activities, including determining a plan for community maintenance and use, was cited as a challenge affecting design for 7 of the projects we reviewed. According to implementing partner officials in Zimbabwe, community preference and capacity to manage the maintenance of the asset are essential to achieving the goals of their activities, and the community must identify and prioritize the assets if they are to be maintained. Additionally, according to USAID officials, if the community is engaged in the design process, it is more likely to maintain assets after the implementing partners’ projects end and the partners leave the area. One partner also noted that a lack of host country involvement was a barrier to determining a plan for community maintenance and use of roads constructed with food-for-assets labor after the project was over. This partner reported that there were no entities to fund the maintenance of these roads in the Democratic Republic of the Congo, even though the partner was seeking to transfer the roads’ maintenance to the local government. USAID cannot systematically measure the performance of food-for-assets activities across all Title II development projects and therefore cannot determine the effectiveness of food-for-assets activities in achieving short-term or longer-term development goals. 
While USAID uses indicators to assess the overall effectiveness of these development projects, the agency cannot use these indicators to systematically assess the specific effectiveness of food-for-assets activities across all Title II development projects. During our interviews with 10 implementing partners that implemented 14 projects, partners identified several benefits specific to food-for-assets activities, such as developing needed infrastructure, teaching skills to beneficiaries, and achieving short-term increases in food security. They also cited challenges in implementing these activities, such as difficulty in ensuring the sustainability of the assets created as well as weak technical capacity and inadequate resources in host governments and communities. USAID requires implementing partners to report indicators about food-for-assets activities as part of their monitoring process, but USAID cannot systematically use this information to assess the effectiveness of food-for-assets activities separately from that of other activities across Title II development projects. USAID requires partners to monitor project performance and track progress in achieving project results through its standard performance indicators, such as the number of beneficiaries who have participated in a project, as well as project-specific custom performance indicators, such as the number of hectares of land a farmer was able to irrigate as a result of a food-for-assets activity. USAID requires partners to share this information by submitting annual results and other reports. As part of this monitoring, USAID requires partners to collect data through standard indicators, which provide project-wide results and are common across multiple projects. Partners implementing food-for-assets activities report annually, through a standard indicator, on the number of project-wide beneficiaries who have participated in such activities. 
However, this indicator and USAID’s other standard indicators do not measure the performance of food-for-assets activities, or the effect of these activities on the community, separately from other project activities in a way that allows USAID to compare results for and across projects. For instance, the standard indicators do not address immediate outcomes, such as whether targets for assets constructed were met or the extent to which food-for-assets activities have improved assets in the communities served. According to USAID officials, USAID also requires partners to collect data through custom indicators, which measure results of specific activities within projects. USAID officials stated that USAID works with each implementing partner to identify appropriate custom indicators to measure the effects of specific activities, including food-for-assets activities, on achieving project goals. USAID officials noted that implementing partners’ activity-level reporting on custom indicators, as well as partners’ narrative reports and implementation plans, provide information that allows for oversight of individual projects but make compilation of some data across the Title II portfolio challenging. Since these indicators, narratives, and plans vary among projects, USAID cannot use them to systematically assess the effectiveness of food-for-assets activities across its Title II projects. In contrast to the standard indicators used for food-for-assets activities, standard indicators specific to other types of conditional food aid activities are used to measure the performance of these activities. For example, for interventions to promote maternal and child health and nutrition, USAID uses a set of standard indicators to assess the extent to which various interventions, such as increasing access to improved drinking water and providing antenatal care, are effective in achieving project goals. 
In addition, WFP uses a community asset score, at the beginning and end of a project, to measure the number of functioning assets created in a community through a food-for-assets activity. Moreover, documents for 10 of 13 WFP projects we reviewed noted performance indicators specific to food-for-assets activities, such as the number of assets completed. According to USAID’s operational policy documented in chapter 203 of the ADS, performance monitoring should be an ongoing process that indicates whether desired results are occurring and whether development objectives and project outcomes are on track. Additionally, chapter 203 of the ADS states that to ensure accountability, metrics should be matched to meaningful outputs and outcomes that are under the control of the agency. USAID officials told us that a lack of data demonstrating the effectiveness of food-for-assets activities in improving long-term food security represents a significant challenge in development projects involving food for assets. Because USAID has not developed standard performance indicators specific to food for assets, and cannot use its custom indicators to aggregate performance data for food-for-assets activities across projects, the agency cannot systematically assess the results of these activities for all Title II projects that include them. Lacking this information, USAID is unable to determine whether food-for-assets activities are an effective mechanism for decreasing dependence on food aid and increasing food security. 
During our interviews with the 10 partners that implemented these 14 projects, partners most frequently cited building infrastructure, teaching skills to beneficiaries, and improving social cohesion among community members as benefits of food-for-assets activities (see fig. 5). As figure 5 shows, implementing partners generally reported that food-for-assets activities led to the creation of infrastructure or physical assets that benefited target communities. During fieldwork in Ethiopia, we also observed benefits of infrastructure created with food for assets. Of the 6 partners that cited increased self-sufficiency of beneficiaries for more than a year as a benefit of food for assets, all reported that their projects also developed needed infrastructure, which may contribute to greater food security. For example, according to a partner implementing a project in Bangladesh, roads constructed through food-for-assets activities help people reach markets to buy and sell food but also allow for increased access to health clinics.

Benefits of Infrastructure Constructed through Food-for-Assets Activities in Ethiopia

During fieldwork in Ethiopia, we observed small-scale farms that were irrigated with water supplied by dams and irrigation canals constructed through food-for-assets activities. Implementing partner officials highlighted the importance of sequencing the projects to ensure that assets constructed early in the project help support assets planned for the future. For example, in 2005, under a previous U.S. Agency for International Development (USAID) project, this partner began a food-for-assets activity that terraced the upper slopes of the watershed to reduce runoff and recharge the water table. In 2013 and 2014, the partner constructed small-scale dams and irrigation canals to irrigate farmland and increase the variety and production of crops. 
According to implementing partner officials, when the original project began in 2005, all 2,500 people living in the community were dependent on food aid; as of December 2014, partner officials stated that 75 percent of the community members had graduated out of the program and were no longer dependent on food aid. Teaching beneficiaries skills was commonly cited as a benefit of food-for-assets activities. For example, according to a partner implementing a project in the Democratic Republic of the Congo, working on food-for-assets activities taught beneficiaries the skills needed to maintain the newly constructed rural access roads after the partner's project ends. Specifically, the beneficiaries learned how to develop a plan to maintain the roads as well as community-organizing skills needed to keep the community engaged in communal projects. While implementing partners identified benefits of food-for-assets activities, they also noted challenges to implementing these activities. These challenges include weather or other unforeseen events interrupting activities, as well as difficulties in ensuring that assets are maintained and used after projects end. USAID officials noted that achieving long-term benefits of food-for-assets activities often requires maintenance to ensure that the assets remain functional and useful. Ensuring quality control of assets and determining a plan for maintenance were cited as design challenges for 7 projects we reviewed; these challenges may affect implementing partners' ability to ensure that the assets will function as planned after the food-for-assets activities end. For example, officials implementing a project in the Democratic Republic of the Congo noted that, although the project is using food-for-assets activities to construct feeder roads to improve market access, no local authorities or other entities are available to take responsibility for maintaining the roads after the project ends. 
As figure 6 shows, partners most frequently cited interruption of food-for-assets activities by weather or other unforeseen events, such as civil conflict, as negatively affecting implementation. For example, because inclement weather can delay or interrupt the construction of assets, partners must take into consideration the seasonal timing of food-for-assets activities. As one implementing partner official explained, conducting such activities in the dry season mitigates the challenge of inclement weather; however, beneficiaries may not need as much food assistance during this season. In areas with armed conflict, partners reported experiencing disruptions because of security concerns. For example, a partner implementing a project in the Democratic Republic of the Congo stated that it had to stop working in certain areas because of the presence of rebel forces. Conditional food aid activities confer benefits, such as creating communal infrastructure, that serve the wider community, and they have the potential to make significant contributions to meeting long-term food security goals. Given that we found most Title II development and emergency projects include conditional food aid activities, an understanding of whether and under what circumstances the use of conditional food aid activities has been effective and appropriate is essential to USAID's oversight of Title II projects. However, without the ability to identify, and systematically collect information about, the conditional food aid activities being implemented in its Title II program—particularly food-for-assets activities, which our analysis found to be most prevalent—USAID is unable to make effective management decisions about conditional food aid. 
For example, USAID is not able to determine whether conditional food aid's effect on food insecurity warrants the additional costs of, for instance, providing building materials for asset construction projects, nor is it able to effectively assess the benefits of these activities separately from other project activities. Moreover, without the ability to systematically assess the effectiveness of these activities across Title II projects, USAID is unable to benefit from lessons learned to improve these activities in the future and to further reduce dependence on food aid and increase food security. To strengthen USAID's ability to monitor Title II conditional food aid and evaluate food-for-assets activities' impact on reducing food insecurity, we recommend that the USAID Administrator take the following two actions: (1) establish a mechanism to readily identify all Title II projects that include conditional food aid activities and systematically collect information about the type of conditional activity included in each project, and (2) systematically assess the effectiveness of food-for-assets activities in development projects in achieving project goals and objectives. We provided a draft of this report to USAID and WFP for their review. Both provided written comments, which we have reprinted in appendixes IV and V, respectively. USAID also provided technical comments, which we incorporated as appropriate throughout our report. In its written comments, USAID concurred with our recommendations. USAID signaled its intention to establish a mechanism to readily identify all Title II projects that include conditional food aid activities and to collect information about the type of conditional activity in each project. USAID stated that it is already collecting such information for another food assistance program. In addition, USAID agreed that it should assess the effectiveness of food-for-assets activities in development projects in achieving project goals and objectives. 
USAID added that it has undertaken relevant reviews of the effectiveness and sustainability of Title II development projects and that it is considering expanding evaluations of completed Title II development projects to assess sustainability of results over time. USAID disagreed with statements in our draft report that, because it has not collected data on conditional food aid activities systematically, the agency has limited ability to reliably oversee or monitor programs that use these activities and is not following operational policy that calls for systematic collection of data for monitoring and evaluating program performance. USAID noted that its operational policy also states that collecting more information increases the management burden and cost to collect and analyze this information. Chapter 203 of USAID's Automated Directives System lists efficiency as a key principle for effective performance monitoring and does not prescribe a specific level of data collection. We revised our draft accordingly. However, our observations and analysis do not support USAID's position that it is able to reliably oversee or monitor conditional food aid programs. For example, USAID was unable to provide data on the numbers of beneficiaries, funds, or commodities associated with conditional food aid activities. Moreover, by agreeing to systematically collect data about, and assess the effectiveness of, conditional food aid activities in Title II development projects, USAID acknowledges the importance of this information as well as the feasibility of the recommended actions. In its written comments, WFP noted, among other things, that it found encouraging our findings regarding its capacity to design and implement food for assets, monitor and report results, and achieve both short- and longer-term goals. 
WFP also commented that food-for-assets activities serve distinct purposes in the two types of emergency operations where WFP uses these activities; we added language to our report to address this comment. WFP did not comment on our recommendations, since they were not directed to WFP. We are sending copies of this report to the appropriate congressional committees; the Secretary of State; the Administrator of USAID; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

The U.S. Agency for International Development (USAID) makes most of its Title II emergency awards to the World Food Program (WFP) and bases these awards on WFP funding appeal documents as well as USAID's analysis of current and emerging crises worldwide. When designing its emergency projects, WFP considers projects addressing long-term crises, chronic poverty, or recurring national disasters to be well suited for food-for-assets activities, according to WFP officials. If food-for-assets activities are to be part of the project, WFP also meets with stakeholders at the village level to identify the assets that are most needed in the community as well as limitations to constructing and maintaining these assets. At the end of this process, WFP country offices develop project proposals—either a protracted relief and recovery operation or an emergency operations project document—outlining the action that is required and also serving as a funding appeal. After WFP releases an appeal, donors, including the United States, determine whether they will provide funding, in-kind commodities, or other resources for the project. 
According to WFP officials, once WFP has commitments from the donors, it further refines the design of the project to reflect the resources that the donors committed to provide and begins implementation. WFP considers several factors in the design and implementation of food-for-assets activities for emergency projects. WFP officials cited the importance of considering stakeholder and community input and the availability and level of technical expertise, and reported factoring gender considerations into the design and implementation of their food-for-assets activities. In addition, WFP considers input from a variety of stakeholders at the national, regional, and local levels to help it assess food security and appropriately plan and implement food-for-assets activities. Our review of documents for 13 WFP emergency operations and protracted relief and recovery operations projects with food-for-assets activities found that documents for 10 of the projects noted partnerships with host governments. Documents for 7 of the projects noted partnerships with other implementing partners, such as the Food and Agriculture Organization of the United Nations. During our fieldwork in Djibouti, we visited a newly constructed water catchment where WFP worked with an international development agency that provided technical expertise and machinery and where WFP food-for-assets beneficiaries collected the rocks that were used to build the dam (see fig. 7). In addition, WFP beneficiaries later planted a garden close to the catchment to make use of the collected water, with the Food and Agriculture Organization providing seeds and WFP providing tools and food rations. In addition to considering stakeholder input, WFP considers the availability and level of technical expertise and capacity when designing and implementing food-for-assets activities. 
In 2014, WFP evaluators found that assets might not be properly constructed or maintained if the needed technical expertise and specialized equipment for the asset exceeded the technical capacity of the community. According to WFP officials, when neither the host government nor the community has the technical expertise or resources to maintain high-technology assets, WFP will either recommend against building the assets or recommend a focus on low-technology assets. Further, WFP integrates gender considerations throughout the planning process for food-for-assets activities, according to WFP officials. According to WFP, this includes acknowledging the different roles, community status, and hardships that men and women have experienced and assessing the potential for exacerbating or addressing these differences through food-for-assets activities. In WFP's evaluation of projects from 2002 to 2011 in six countries, evaluators found that strategic targeting of assets to women's needs, creation of gender-sensitive worksites, and consideration for women's competing demands all affected women's participation in, and the benefits they derived from, food-for-assets activities. Our review of documents for 13 WFP emergency operations projects from fiscal years 2013 and 2014 found that 11 of these projects included targets for women's participation and that 6 of the 11 projects had targets giving special consideration to gender issues, such as targets for women in leadership roles. WFP identified many benefits to its food-for-assets activities implemented in emergency projects, including benefits similar to those observed by development implementing partners. Additionally, WFP reported a number of risks affecting the design of projects containing food for assets, such as a lack of adequate and timely funding and insecure and unpredictable environments. 
WFP also reported challenges in implementing food-for-assets activities, including challenges similar to those facing development implementing partners, such as limited technical capacity within communities. WFP found that its food-for-assets activities had helped to develop infrastructure and had built useful assets with both short- and long-term benefits, which in turn improved the beneficiaries' food security. In its 2014 synthesis of evaluations, WFP evaluators noted that its projects had created assets that helped protect communities from floods and also provided longer-term benefits. For example, in Bangladesh, dikes were built that provided protection from floods and also increased the productivity of the land. In addition, WFP evaluators found that in the medium term, assets built in Bangladesh, Ethiopia, Nepal, Senegal, and Uganda had increased land productivity and agricultural production, which in turn enhanced communities' ability to generate income. Additionally, WFP reported that food-for-assets activities had had a long-term positive impact in creating cohesion among varying populations in Bangladesh, Guatemala, Nepal, and Uganda, some of which had experienced prolonged conflict. WFP reported in its operational documents and Impact Evaluation Synthesis that a number of risks could affect projects containing food for assets, such as a lack of adequate and timely funding, insecure and unpredictable environments, and limited technical expertise. WFP reported for all but 1 of the 13 projects we reviewed that reduced, inadequate, and delayed funding was a key risk to designing and implementing the projects' activities. For its projects in the Democratic Republic of the Congo, Somalia, and Sudan, WFP noted that life-saving emergency assistance would be prioritized over food for assets when funding was insufficient. 
In addition, WFP officials in Djibouti told us that in 2014 only 15 percent of planned food-for-assets activities were completed because of a lack of funding. WFP also identified numerous challenges when implementing its food-for-assets activities in emergency projects. Some of these challenges were similar to those identified by implementing partners, such as finding humanitarian workers with appropriate technical skills, maintaining assets in the long term, and determining appropriate target populations. WFP evaluators reported on the importance of community and government technical capacity for the proper maintenance of assets, and WFP cited a lack of institutional capacity among host country governments, communities, and other institutions as a risk for 8 of the projects we reviewed. Additionally, WFP evaluators found that limited technical capacity can affect whether an asset functions as intended, because assets are more likely to be maintained when communities and governments have the capacity to do so. WFP evaluators noted that achieving long-term benefits of food-for-assets activities often requires ongoing operations and maintenance to ensure that the asset remains functional and useful. WFP's 2014 synthesis of evaluations of food-for-assets activities in 2002 through 2011 reported that there was confusion about who would be responsible for maintaining the assets and that plans for maintaining the assets were in place for only a few of the activities. WFP reported that without clarity about maintenance responsibilities, there is a risk that assets will fall into disrepair.

Our objectives were to examine (1) the U.S. 
Agency for International Development’s (USAID) use of conditional food aid through Title II development and emergency projects in fiscal years 2013 and 2014, (2) the factors that implementing partners considered and the challenges they faced when designing food-for-assets activities in development projects, and (3) the extent to which USAID assessed the effectiveness of food-for-assets activities in development projects. To address all three of our objectives, we reviewed Title II project documents and information from fiscal years 2013 and 2014. We focused our review of conditional food aid in Title II emergency projects on the World Food Program (WFP), because it is the largest recipient of USAID’s emergency Title II funding. We met with officials of USAID’s Food for Peace program in Washington, D.C.; officials at the WFP headquarters in Italy and via teleconference; officials at U.S.-based implementing partners’ headquarters in Washington, D.C., or via teleconference; and WFP officials in Chad, Sudan, and Pakistan via teleconference. In addition, we conducted fieldwork in Djibouti, Guatemala, and Ethiopia, meeting with USAID and WFP officials, implementing partner country program staff, and host country government officials, among others. In selecting countries for fieldwork, we considered various factors, including the range of project sizes and types of project (i.e., development or emergency) implemented in the country, the nature of food-for-assets activities in the country, and coverage of multiple implementing partners. For background and context, we obtained information on the advantages and disadvantages of food for assets. 
We obtained this information by conducting interviews with three subject matter experts in the field of international food aid, selected based on their extensive field research and firsthand knowledge of the topic, as well as a literature review of academic articles related to the design and implementation of food for assets that we selected based on recommendations from the experts we interviewed, searches for articles covering food-for-assets design and implementation, and searches of the bibliographies for those articles we reviewed. In addition, to examine USAID’s use of conditional food aid through Title II development and emergency projects in fiscal years 2013 and 2014—our first objective—we took the following steps. For development projects, we reviewed data from USAID’s Food for Peace Management and Information System (FFPMIS)—USAID’s official program, proposal, and financial management system—from implementing partners’ annual results reports for the 2 fiscal years. We used these data to determine the number of beneficiaries and metric tons of commodities associated with Title II development projects with conditional food aid activities. To assess the reliability of these data, we interviewed Food for Peace and contractor officials who are responsible for maintaining and using the FFPMIS system. To identify any obvious inconsistencies or gaps in the data, we performed basic checks of the data’s reasonableness, checking the FFPMIS data against data provided by agency officials. When we found discrepancies or missing data fields, we brought them to the attention of relevant agency officials and worked with the officials to correct the discrepancies and missing fields. In conducting our reliability assessment, we found two limitations associated with the annual results reports data. The reports do not contain beneficiary or metric tonnage data specific to conditional food aid activities; the most specific data available are by program element. 
For example, the data we reviewed did not include information about food-for-assets activities but included data for activities that were completed under the agricultural sector capacity program element. USAID officials could not provide data specific to food-for-assets activities through other means. USAID officials do not thoroughly check all of the data reported by implementing partners to ensure accuracy, although they conduct a quality check to assess whether the data are reasonable. These limitations affected our ability to identify the award amounts, beneficiaries, and metric tonnage associated with conditional food aid activities implemented within Title II projects. Instead of gathering beneficiary and metric tonnage information specific to conditional food aid activities, we gathered higher-level data for program elements. On the basis of our interviews with relevant Food for Peace and contractor officials, our review of FFPMIS documentation, and our review and testing of the annual results report data that we received, we determined that the beneficiary and metric tonnage data at the program element level were sufficiently reliable for the purposes of our review. For emergency projects, we used WFP’s standard project report (SPR) data for each Title II emergency project that contained conditional food aid activities in fiscal years 2013 and 2014. These data showed (1) total numbers of beneficiaries for each project, (2) numbers of beneficiaries for each type of conditional food aid activities (i.e., food for assets, school feeding, food for training, and take-home rations), (3) metric tons of commodities and quantities donated in-kind and purchased by WFP with cash donations, and (4) metric tons of U.S. in-kind donations shipped or purchased. We used these data to determine the numbers of beneficiaries and metric tons of U.S. commodities associated with Title II emergency projects with conditional food aid activities. 
To assess the reliability of the SPR data, we interviewed the WFP officials who gathered the award data for us as well as WFP officials who oversee country program offices’ programmatic and financial reporting. To identify any obvious inconsistencies and gaps in the Title II award data and SPR data, we also performed basic checks of the data’s reasonableness, checking the Title II award data against data provided by USAID officials. When we found discrepancies or missing data fields, we brought them to the attention of relevant agency officials and worked with the officials to correct them. In conducting our reliability assessment, we found three limitations with the SPR data. The SPRs do not contain beneficiary data specific to U.S. donations. For example, the data we reviewed show total numbers of beneficiaries served by WFP—which obtains donations from multiple countries and other entities—rather than by individual country donations. Neither WFP nor USAID officials could provide data specific to WFP’s conditional food aid activities through other means. Additionally, we cannot determine how much of this funding went to the conditional food aid activities as opposed to unconditional food distribution, supplemental distributions, or food or support for the elderly, disabled, or seriously ill. While SPRs contain in-kind metric tonnage data provided by the United States, these data are not specific to conditional food aid activities; they also include general food distribution. Similarly, the project totals for commodities shipped or purchased include general food distribution, locally procured food, and food obtained with cash from the United States and other donors by other means. Additionally, WFP data on U.S. donations of commodities may include commodities for conditional or unconditional assistance. 
Accordingly, it is not possible to distinguish, on the basis of these data, the metric tonnage of commodities that were distributed strictly for conditional food aid activities. Because WFP beneficiary data may be collected both at the individual level and through estimates based on household rations, the SPR data on beneficiaries may not have been collected consistently. Despite these limitations, we were able to estimate the beneficiaries and metric tonnage associated with Title II emergency projects that included conditional food aid. Lacking data about beneficiaries of U.S. conditional food aid activities, we gathered project-level data. In addition, lacking data about the number of metric tons of food donated by the United States specific to conditional food aid activities, we collected data on food used for general emergency food distribution. Finally, lacking information about the metric tons of food distributed by emergency projects for conditional food aid, we gathered data on food that was shipped from the United States, purchased locally, or otherwise purchased. On the basis of our interviews with relevant Food for Peace and WFP officials, and our review and testing of the award and SPR report data that we received, we determined that the beneficiary and metric tonnage data were sufficiently reliable for the purposes of this report. To examine the factors that partners considered and the challenges they faced when designing food-for-assets activities in Title II development projects, and to determine the extent to which USAID assessed the effectiveness of these food-for-assets activities—our second and third objectives, respectively—we focused on food-for-assets activities as the most prevalent type of conditional food aid activity for both development and emergency projects. 
For our analysis of development projects, we analyzed USAID data for the 22 Title II projects that were active between fiscal years 2013 and 2014, and that included conditional food aid activities. We analyzed these projects to select the 14 that fit the following criteria: (1) contained food-for-assets activities, (2) were active in fiscal year 2014, and (3) were at least in their second year of implementation. We selected a subset of these 22 projects in the following manner: (1) 2 projects for each of the 4 partners that had multiple active projects, and (2) 1 project each for the remaining 6 partners that had only 1 active project. We conducted semistructured interviews with officials of the 10 partners that implemented these 14 projects (see table 3). For partners implementing multiple projects captured in our analysis, we conducted separate interviews with implementing partner staff to discuss each project. The information we obtained through these interviews is not generalizable to all Title II development projects or all USAID development awards. To encourage open and honest discussion, we offered these implementing partners confidentiality and therefore are not naming the partners whose staff we interviewed. During our semistructured interviews covering these 14 projects, we asked the officials from each partner a similar set of questions that focused on the design, implementation, and evaluation of each project. We also provided each partner with four checklists to facilitate collection of uniform information about, respectively, (1) the factors they considered when designing food-for-assets activities in their Title II projects, (2) the challenges they experienced in designing these activities, (3) the benefits of implementing food-for-assets activities as opposed to unconditional food aid, and (4) the challenges they faced in implementing food-for-assets activities in their Title II projects. 
We asked the partners to complete these checklists prior to being interviewed. In interviews with partner officials, we discussed their responses to the checklists and elicited information about the benefits, factors, and challenges they considered most important to their projects. We analyzed the implementing partners’ responses to both the checklists and the semistructured interviews to determine the prevalence of various factors in designing food-for-assets activities as well as the benefits and challenges that the partners experienced in designing and implementing these activities. We then conducted a content analysis of the semistructured interview responses to determine which factors, challenges, and benefits the partners considered most valuable or important. In addition, we conducted interviews with officials of USAID’s Office of Food for Peace and reviewed USAID documents, including project design and implementation guidance; requests for applications; and partner award documentation, such as annual reports, monitoring indicators, and correspondence with USAID. We compared these data and documents with criteria for data collection and monitoring from USAID’s operational policy, to assess the extent to which USAID can report on the benefits of its food-for-assets activities. To examine the factors that WFP considered when designing and implementing Title II emergency activities, as well as the reported benefits of such activities (see app. I), we reviewed WFP’s emergency operations and protracted relief and recovery operations documents and interviewed WFP country program officials. We selected a judgmental sample of 13 of 60 emergency projects on the basis of the fiscal year of implementation, the presence of a food-for-assets activity, the existence of a reported dollar amount, the availability of project documentation, the project type, and variety in the projects’ geographical location. 
Table 4 shows the countries and source documents for the 13 Title II emergency projects that we selected for our review. In addition to analyzing operational documents for WFP protracted relief and recovery operations and emergency operations, we conducted telephone interviews with project officials in four WFP field offices: (1) Chad, (2) Djibouti, (3) Pakistan, and (4) Sudan. We selected these projects on the basis of size, the availability of WFP in-country officials, whether active food-for-assets projects were being implemented, and whether we had conducted fieldwork in the country, among other factors. To further analyze what is known about the results of food-for-assets activities, we reviewed WFP's May 2014 Impact Evaluation Synthesis—a synthesis report of six individual impact evaluations of food-for-assets activities implemented in Bangladesh, Ethiopia, Guatemala, Nepal, Senegal, and Uganda from 2002 to 2011—which we determined was reliable for the purposes of our review. We considered the research design, scope, and methodology of these evaluations and determined that they were reasonable for the purposes of these studies. For example, we considered whether the high-level findings in the summary report represented a fair summary of the individual studies and determined that they did. In addition, we found that key challenges and problems with the programs were reported in the evaluation synthesis. We also found that the benefits in the studies were not overstated in the final evaluation synthesis. However, we noted that a table on the functionality of assets did not appear reliable on the basis of the individual evaluations, and we therefore did not report on that table. We defined benefits as the positive outcomes resulting from food-for-assets activities, such as improved agricultural production. 
We defined challenges as difficulties or deficiencies—within or outside WFP’s control—that hindered optimum project implementation and food-for-assets outcomes. We conducted this performance audit from March 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For this report, we focused on the amounts awarded, numbers of beneficiaries served, and amounts of food aid commodities provided for development and emergency projects that included conditional food aid activities rather than for the conditional food aid activities themselves. Almost all development projects, and most emergency projects, that the U.S. Agency for International Development (USAID) funded under Title II of the Food for Peace Act in fiscal years 2013 and 2014 included conditional food aid activities. Of the 60 Title II development projects that USAID funded and implemented through its partners in fiscal years 2013 and 2014, 59 projects included conditional food aid activities. Food for assets was the most prevalent activity in 2013, and maternal and child health care and nutrition was the most prevalent activity in 2014. Figure 8 shows the types and prevalence of conditional food aid activities implemented through Title II development projects during these 2 years. USAID awarded $609.3 million to its implementing partners under Title II in fiscal years 2013 and 2014, most of which supported development projects with conditional food aid activities (see table 5). Awards per project ranged from $60,500 (Niger) to $40.4 million (Ethiopia) in fiscal year 2013 and from $2.0 million (Malawi) to $36.5 million (Ethiopia) in fiscal year 2014. 
Because most Title II development projects in fiscal years 2013 and 2014 included conditional food aid, the amounts awarded, beneficiaries served, and commodities provided through projects with conditional food aid activities were generally very similar to those for all Title II development projects. According to USAID officials, in fiscal year 2013, implementing partners monetized, or sold, food aid commodities in developing countries to fund 5 development projects: 3 in Bangladesh, 1 in Madagascar, and 1 in Malawi. Of the 59 Title II emergency projects that USAID funded and implemented through the World Food Program (WFP) in fiscal years 2013 and 2014, 52 projects included conditional food aid activities. Food for assets was the most prevalent type of conditional food aid activity in emergency projects, followed by school feeding and food for training. Figure 9 shows the types and prevalence of conditional food aid activities implemented through emergency projects during these 2 years. Of the 52 emergency projects that included conditional food aid activities, 40 were protracted relief and recovery operations—emergency projects that include long-term relief efforts—and 12 were emergency operations—emergency projects that focus on short-term recovery efforts (see table 6). In fiscal years 2013 and 2014, USAID awarded $1.8 billion to WFP emergency projects under Title II, including $1.5 billion to WFP emergency projects with conditional food aid activities (see table 7). Awards per emergency project ranged from $2.5 million (Philippines) to $92.8 million (Ethiopia) in fiscal year 2013 and from $428,700 (Liberia) to $209.8 million (South Sudan) in fiscal year 2014. 
Of the 2.3 million metric tons of commodities that WFP received directly from donors for general food distribution and conditional food aid activities in its emergency projects during this time frame, including in-kind donations and WFP purchases made with cash donations, USAID provided 1.2 million metric tons (50.3 percent). WFP emergency projects, including those with conditional food aid activities, served the majority of beneficiaries through general food distribution—that is, unconditional food aid that is traditionally provided in emergency projects. As table 8 shows, WFP served 40 percent of beneficiaries in fiscal year 2013 and almost 60 percent of beneficiaries in fiscal year 2014 through general food distribution in these projects. WFP served a smaller percentage of beneficiaries through conditional food aid activities, primarily through school feeding projects, although food for assets was the most frequently used conditional food aid activity. WFP served more beneficiaries through school feeding in Afghanistan, the Democratic Republic of the Congo, and Sudan than in any other countries where it implemented this activity in fiscal year 2013, and in Pakistan, the Democratic Republic of the Congo, and Sudan in fiscal year 2014. After school feeding, WFP served the most beneficiaries through food-for-assets activities. WFP served more beneficiaries through food-for-assets activities in Ethiopia, Kenya, and the Philippines in fiscal year 2013, and in Ethiopia, Kenya, and Burkina Faso in fiscal year 2014, than in any other countries where it implemented this activity. In addition, some beneficiaries participated in multiple conditional and unconditional activities and may be counted in more than one category. For this reason, the sum of the percentages shown in table 8 is greater than 100. 1. Page numbers cited in USAID’s letter refer to a draft version of our report and may not correspond to page numbers in the published report. 2. 
USAID notes that data on conditional food aid activities currently collected through implementing partners’ narrative reporting, from implementation plans, and for custom indicators allow for robust oversight of individual projects. USAID also notes that its operational policy states that “more information is not necessarily better because it markedly increases the management burden and cost to collect and analyze.” It further notes that the manual compilation of conditional activities across all food assistance programming does not equate to a lack of monitoring, assessment or understanding of conditional food transfers. USAID’s Automated Directives System (ADS) 203.3.2.2 lists efficiency as a key principle for effective performance monitoring and does not prescribe a specific level of data collection. We have revised our draft to ensure that we do not state that the agency has failed to adhere to its operational policy. However, our observations and analysis do not support USAID’s position that its current data collection practices allow for robust oversight of conditional food aid activities. In particular, we found a lack of systematic data that could be used to oversee and learn about these projects across Title II programs. First, although we were ultimately able to determine that almost all of USAID’s Title II projects implemented conditional food aid activities, USAID could not readily identify these projects or the types of activities they included and could not provide data on the resources used for these activities. As a result, USAID officials spent several months manually gathering and revising the data we requested and did not provide finalized data until 8 weeks before our report’s publication. 
Second, our initial analysis of these data, when they became available, showed them to be incomplete and flawed (for example, including projects that did not have conditional food aid and excluding projects that did) and therefore not useful for systematically monitoring conditional food aid activities in Title II development projects. We were eventually able to estimate these data for projects that included conditional food aid activities. However, USAID was not able to provide us with any data on the numbers of beneficiaries, funds, or commodities associated with conditional food aid activities. Finally, in its letter, USAID concurs with—and indicates its intent to implement—our recommendation to establish a mechanism to readily identify all Title II development projects that include conditional food aid activities and to collect information about the types of conditional activity included. In addition, USAID notes in its response to this recommendation that it already systematically collects data on conditional activities in food assistance projects funded through the Emergency Food Security Program, suggesting that the agency considers this information important and that taking these actions does not substantially increase management burden or cost. By agreeing to systematically collect data on, and assess the effectiveness of, conditional food aid activities in Title II development projects, USAID acknowledges both the importance and the feasibility of taking these actions to enhance its monitoring and oversight of conditional food aid in its Title II programs. We have added information to clarify USAID’s position on project oversight, such as information that is available in implementing partners’ narrative reporting. 3. We agree that the $2.1 billion in Title II awards in fiscal years 2013 and 2014 funded both conditional and unconditional food aid activities. 
However, we were not able to identify the amount of funding that went toward conditional activities, because USAID lacks data that would allow us to distinguish these activities from unconditional activities. We agree that the number of beneficiaries served through U.S.-funded Title II emergency projects, including food-for-assets activities, represents a small percentage of these projects’ total beneficiaries. However, this percentage represents emergency projects and does not reflect beneficiary numbers for development projects. We were unable to report similar data on the beneficiaries served through conditional food aid activities in Title II development projects, because USAID did not provide these data. Therefore we reported, as the closest reliable proxy, that 87 percent of USAID Title II funding went toward projects that included conditional food aid activities and that 111 of 119 USAID-funded Title II development and emergency projects included these activities. 4. We acknowledge that general food distributions are often provided to those not able to work in communities and have modified our report accordingly. However, to make effective management decisions about food-for-assets activities, including targeting the appropriate beneficiaries, it is necessary to systematically track these activities’ use and assess their effectiveness across Title II projects. 1. The focus of our performance audit was USAID’s oversight of conditional food aid, and our highlights page (i.e., executive summary) reflects our findings in this regard. Nevertheless, we found both benefits and challenges associated with conditional food aid activities, which we note in our report. 2. To encourage open and honest discussions, we offered to treat as confidential the responses of USAID implementing partner representatives for Title II development projects to our interview questions, and our report therefore does not name these partners. 
Appendix II lists the criteria we used to select these implementing partners as well as the countries in which the projects we discuss were implemented. 3. We determined that WFP’s 2014 Synthesis of the Evaluation of the Impact of Food for Assets 2002-2011, Lessons for Building Livelihoods Resilience, was sufficiently reliable for our purpose—that is, to analyze benefits and challenges of food-for-assets activities that the document cites. Additionally, throughout our report, we discuss the role of community participation in the design and implementation of food-for-assets activities. 4. We have modified our report to clarify the distinction between the respective roles of food-for-assets activities in WFP’s protracted relief and recovery operations and in its emergency operations. 5. We have added a note to the table to clarify WFP’s definition of school feeding. In addition to the contact named above, Valérie Nowak (Assistant Director), Jaime Allentuck (Analyst-in-Charge), Ming Chen, Teresa Abruzzo Heger, Nicholas Jepson, Kalinda Glenn-Haley, Martin de Alteriis, Mark Dowling, Kirsten Lauber, Reid Lowe, Katya Rodriguez, Rachel Dunsmoor, and Tina Cheng made key contributions to this report. International Cash-Based Food Assistance: USAID Has Processes for Initial Project Approval but Needs to Strengthen Award Modification and Financial Oversight. GAO-15-760T. Washington, D.C.: July 9, 2015. USAID Farmer-to-Farmer Program: Volunteers Provide Technical Assistance, but Actions Needed to Improve Screening and Monitoring. GAO-15-478. Washington, D.C.: April 30, 2015. International Cash-Based Food Assistance: USAID Has Developed Processes for Initial Project Approval but Should Strengthen Financial Oversight. GAO-15-328. Washington, D.C.: March 26, 2015. International Food Aid: Better Agency Collaboration Needed to Assess and Improve Emergency Food Aid Procurement System. GAO-14-22. Washington, D.C.: March 26, 2014. 
International Food Aid: Prepositioning Speeds Delivery of Emergency Aid, but Additional Monitoring of Time Frames and Costs Is Needed. GAO-14-277. Washington, D.C.: March 5, 2014. Global Food Security: USAID Is Improving Coordination but Needs to Require Systematic Assessments of Country-Level Risks. GAO-13-809. Washington, D.C.: September 17, 2013. E-supplement GAO-13-815SP. International Food Assistance: Improved Targeting Would Help Enable USAID to Reach Vulnerable Groups. GAO-12-862. Washington, D.C.: September 24, 2012. World Food Program: Stronger Controls Needed in High-Risk Areas. GAO-12-790. Washington, D.C.: September 13, 2012. Farm Bill: Issues to Consider for Reauthorization. GAO-12-338SP. Washington, D.C.: April 24, 2012. International Food Assistance: Funding Development Projects through the Purchase, Shipment, and Sale of U.S. Commodities Is Inefficient and Can Cause Adverse Market Impacts. GAO-11-636. Washington, D.C.: June 23, 2011. International School Feeding: USDA’s Oversight of the McGovern-Dole Food for Education Program Needs Improvement. GAO-11-544. Washington, D.C.: May 19, 2011. International Food Assistance: Better Nutrition and Quality Control Can Further Improve U.S. Food Aid. GAO-11-491. Washington, D.C.: May 12, 2011. International Food Assistance: A U.S. Governmentwide Strategy Could Accelerate Progress toward Global Food Security. GAO-10-212T. Washington, D.C.: October 29, 2009. International Food Assistance: Key Issues for Congressional Oversight. GAO-09-977SP. Washington, D.C.: September 30, 2009. International Food Assistance: USAID Is Taking Actions to Improve Monitoring and Evaluation of Nonemergency Food Aid, but Weaknesses in Planning Could Impede Efforts. GAO-09-980. Washington, D.C.: September 28, 2009. International Food Assistance: Local and Regional Procurement Provides Opportunities to Enhance U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-757T. Washington, D.C.: June 4, 2009. 
International Food Assistance: Local and Regional Procurement Can Enhance the Efficiency of U.S. Food Aid, but Challenges May Constrain Its Implementation. GAO-09-570. Washington, D.C.: May 29, 2009. International Food Security: Insufficient Efforts by Host Governments and Donors Threaten Progress to Halve Hunger in Sub-Saharan Africa by 2015. GAO-08-680. Washington, D.C.: May 29, 2008. Somalia: Several Challenges Limit U.S. International Stabilization, Humanitarian, and Development Efforts. GAO-08-351. Washington, D.C.: February 19, 2008. Foreign Assistance: Various Challenges Limit the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-905T. Washington, D.C.: May 24, 2007. Foreign Assistance: Various Challenges Impede the Efficiency and Effectiveness of U.S. Food Aid. GAO-07-560. Washington, D.C.: April 13, 2007.
In fiscal year 2014, USAID awarded about $1.3 billion for emergency and development food aid under Title II of the Food for Peace Act. USAID's implementing partners may provide what is known as conditional food aid—that is, food in exchange for beneficiaries' participation in activities intended to support development. For example, food-for-assets activities are intended to address beneficiaries' immediate food needs while building assets to improve longer-term food security. Questions have arisen about whether the dual goals of addressing both immediate and long-term needs may compromise the ability to achieve either goal, underscoring the need to understand conditional food aid. This report examines, among other things, (1) USAID's use of conditional food aid through Title II development and emergency awards in fiscal years 2013 and 2014 and (2) the extent to which USAID has assessed the effectiveness of food-for-assets activities in development projects. GAO analyzed agency and partner documents and interviewed agency and partner officials in Washington, D.C., and in three countries selected on the basis of project type and representing a variety of partners. The U.S. Agency for International Development (USAID) does not track the use of conditional food aid in projects funded under Title II of the Food for Peace Act. However, GAO's comprehensive review of USAID data found that most Title II projects included conditional food aid in fiscal years 2013 and 2014. Despite the prevalence of conditional food aid activities, USAID does not regularly collect data on conditional food aid provided through Title II projects and, as a result, could not readily provide data on the use of these activities in USAID's projects. Without the ability to identify all conditional food aid activities, USAID cannot systematically oversee the projects that include them. 
According to USAID's operational policy, USAID operating units must strive to continuously learn and improve their approach to achieving results in order to meet development goals. GAO's review of available USAID data for fiscal years 2013 and 2014 found that 111 of 119 Title II development and emergency projects included conditional food aid activities and that funding for these projects totaled $2.1 billion—87 percent of all USAID funding for Title II projects during this period. USAID and its implementing partners implemented various conditional food aid activities, most commonly a type known as food for assets, through these projects (see fig.). Beneficiaries of food-for-assets activities typically must work at constructing community assets, such as roads or irrigation systems, in exchange for food. USAID cannot systematically measure the performance of food-for-assets activities across all Title II development projects and therefore cannot determine the effectiveness of food-for-assets activities in achieving short-term or longer-term development goals. According to USAID's operational policy, measures of program effectiveness should be matched to meaningful outputs under the agency's control. While USAID uses indicators to assess the effectiveness of Title II projects, USAID cannot use these indicators to systematically assess the specific effectiveness of food for assets across its Title II projects. However, during GAO's interviews with 10 implementing partners that implemented 14 projects, partners identified several benefits specific to food-for-assets activities, such as developing needed infrastructure, teaching skills to beneficiaries, and achieving short-term increases in food security. Partners also cited challenges in implementing these activities, such as difficulty in ensuring the sustainability of created assets as well as interruptions resulting from weather and civil conflict. 
GAO recommends that USAID (1) establish a mechanism to readily identify all Title II programs that include conditional food aid activities and (2) systematically assess the effectiveness of food-for-assets activities in development projects. USAID concurred with the recommendations but disagreed with some aspects of GAO's findings. GAO continues to believe its findings are valid, as discussed in the report.
While the U.S. government’s spending to address international environmental issues and concerns has increased significantly in recent decades, it is not a simple matter to precisely quantify this spending or to identify that portion of the spending attributable to or in some way related to the international agreements to which the United States has become a party. In large part, this is due to the fact that the United States has long been a leader in identifying and attempting to deal with environmental problems that have national, regional, and global significance. Often the United States has taken action in the absence of international accords, while at the same time seeking to mobilize other members of the world community to address such problems in a concerted manner. As a result, many federal agencies established and administer environmental programs under legislative mandates and presidential directives that predate or have no direct connection with particular international agreements. Because spending on these programs would likely occur even in the absence of international environmental accords, agency officials and others tend to view these programs as being related to and supporting these international agreements only indirectly and incidentally. One result of this view is that there is generally no mechanism within these agencies or elsewhere in the federal government for systematically tracking agencies’ spending—other than voluntary or assessed contributions—that relates to and supports the objectives of particular international environmental agreements. Consequently, there is no body of readily available statistical information concerning such spending, either on an agency-by-agency or on a governmentwide basis. 
Such statistics must be generated, instead, on an ad hoc basis, relying on historical program spending data and the judgment of officials with knowledge of the ways in which their agencies’ program activities relate to and support the objectives of particular international environmental accords. The total spending, exclusive of salaries and overhead, for the five agencies and 12 international environmental agreements covered by our survey amounted to $975.2 million during fiscal years 1993-95. The largest share of the funding support for the 12 agreements covered was related directly or indirectly to the objectives and concerns of the United Nations (U.N.) Framework Convention on Climate Change (the Framework Convention). This share accounted for approximately 71 percent of the total spending. The Framework Convention was followed by the Convention on Biological Diversity, which represented approximately 20 percent of the total spending, and by the International Tropical Timber Agreement, which represented approximately 5 percent of the total. The remaining nine agreements together accounted for only about 4 percent of the total spending. (App. III shows the spending on the 12 agreements covered by our review and describes the general problem addressed by each agreement.) The great majority of the five agencies’ spending in connection with these agreements was devoted to the agencies’ specific programs and projects, which accounted for about 98 percent of the spending. These include scientific research programs and projects that only indirectly and incidentally support one or more of the agreements. Information exchanges and training, the second largest purpose category, accounted for about 2 percent of the total spending. 
All other purpose categories combined—including sponsorship of conferences, travel to attend meetings and conferences, environmental research not included in specific programs and projects, and other nonspecified uses—accounted for less than 1 percent of the total spending. (App. IV gives a complete breakdown of the total spending by purpose.) The five federal agencies covered by our review exhibited significant differences in spending, both in the total amounts and in the purposes for which the money was spent. They also differed with regard to the agreements the spending supported or to which it related in some way. In large measure, these differences are explained by the differing roles, missions, and activities of the agencies, including whether their spending was primarily for activities carried out within the agency itself or in the form of grants and contracts for activities largely performed outside the agency. (App. V compares the agencies’ spending by international agreement, while app. VI compares the agencies’ spending by major purpose.) The State Department is responsible for coordinating and overseeing the U.S. government’s activities in the international environmental arena. During fiscal years 1993-95, the State Department reported that it spent a total of $886,024 in connection with the international environmental agreements covered by our survey. The largest share of the Department’s funding, about 47 percent, related most closely to the objectives and concerns of the Framework Convention. The State Department expended these funds principally for specific projects and programs, which accounted for about 45 percent of the reported expenditures. USAID is the principal foreign development assistance agency of the U.S. government. In fiscal years 1993-95, USAID obligated a total of $593.54 million in support of the objectives of the agreements covered by our review. 
The largest portion of this amount, about 56 percent, directly or indirectly supported the objectives of the Framework Convention. USAID reported that the predominant share of its spending, about 99 percent, went for specific projects and programs. DOE is a major participant in the multiagency U.S. Global Change Research Program, the objective of which is the improved prediction of global change, including climate change, as a basis for sustainable development. DOE reported that it obligated a total of $300.12 million in fiscal years 1993-95 in connection with the agreements covered by our review. Nearly all of this amount, approximately 98 percent, was for scientific research programs and projects that relate only indirectly to the concerns and objectives addressed by the Framework Convention. DOE’s documents show that most of these research activities were carried out by the universities, research institutes, and national laboratories that are the primary recipients of DOE’s grants and contracts. EPA is the federal government’s chief technical and regulatory agency for environmental matters. Its expertise gives it an important role in international, as well as domestic, environmental activities and programs. In the 3-year period covered by our review, EPA reported that it spent a total of $77.7 million in direct or indirect support of the objectives and concerns of the environmental agreements that were our focus. The largest portion of this spending, 80 percent, was most closely related to the concerns and objectives of the Framework Convention. With respect to the general purposes for which EPA spent these funds, the largest single share, about 85 percent, went for specific projects and programs. The Department of Commerce, largely through its National Oceanic and Atmospheric Administration, conducts a variety of research and data-gathering activities aimed at providing policymakers with the environmental information needed to make decisions. 
Conserving and managing the nation’s coastal and marine resources is also part of the Department’s mission. In fiscal years 1993-95, Commerce’s agencies spent a total of $3.03 million in direct or indirect support of the environmental agreements covered by our survey. Commerce devoted the greatest shares of its spending to direct or indirect support of the Protocol of 1978 Relating to the International Convention for the Prevention of Pollution From Ships (MARPOL Convention) and the Whaling Convention—about 63 percent and 27 percent, respectively. The single largest portion of these expenditures, 40 percent, was devoted to environmental research. The U.S. government’s spending to address transboundary environmental concerns is not limited to direct spending by federal executive branch agencies. The U.S. government’s financial contributions to UNEP and U.S. funding of the World Bank, regional development banks, and other multilateral financial institutions also support an increasingly significant worldwide investment in programs, projects, and other activities having largely environmental objectives or exhibiting important environmental components. UNEP was established in 1972 by the U.N. General Assembly, following the recommendations of the 1972 U.N. Conference on the Human Environment, to provide a mechanism for international cooperation in matters relating to the environment and to serve as a catalyst, coordinator, and stimulator of environmental action. The broad objectives of UNEP are to maintain a constant watch on the changing “state of the environment” and to promote action plans or projects leading to environmentally sound development. 
UNEP’s specific environmental priorities currently include (1) the sustainable management and use of natural resources—atmosphere (climate change, ozone depletion, transboundary air pollution), water (freshwater and coastal and marine waters), biodiversity and land (agriculture, deforestation, and desertification); (2) sustainable production and consumption patterns (cleaner production processes, energy efficiency, environmentally sound technologies); (3) a better environment for human health and well-being (toxic chemical and hazardous waste management, urban environment, environmental emergencies); and (4) global trends and the environment (impact of trade; environmental economics; and environmental law, assessment, and information). An Environment Fund, which receives voluntary contributions from the United States and other U.N. member states, is used to finance or partially finance the initiatives of UNEP and cooperative projects with other U.N. bodies, other international organizations, national governments, and nongovernmental organizations. In addition, a number of trust funds established for specific purposes (several of them as a result of treaties or conventions negotiated under UNEP’s auspices) also receive contributions from U.N. member nations. During 1992-95, the United States contributed a total of $74.61 million to UNEP’s Environment Fund to help support a variety of environmental programs and activities ranging from performing environmental assessments to enhancing environmental awareness, assisting in the development of environmental law, building institutional capacities, and fostering technical and regional cooperation. This amount constituted approximately 23 percent of all nations’ support for UNEP’s environmental programs in this period. (See app. VII for a more complete accounting of U.S. participation in UNEP’s environmental programs.) 
During 1992-95, the United States contributed a total of $7.09 million to the special purpose trust funds administered by UNEP. This amount was slightly more than 11 percent of all nations’ contributions to these trust funds. (See app. VIII for a list of these trust funds and U.S. contributions.) Financing of environmental projects, particularly in developing countries, is also made possible by loans, grants, and other assistance provided by multilateral development banks and affiliated international financial institutions supported by the United States and other governments. In fiscal years 1993-95, the U.S. government provided the World Bank, regional development banks—such as the Inter-American Development Bank—and a variety of other international financial institutions with over $4.7 billion to finance an array of development projects around the world. The World Bank received approximately 70 percent of this amount. (See app. IX for a list of the recipients of this U.S. financial support.) While only a minority of the projects funded by these entities could be classified as primarily environmental in nature, a significant proportion of their projects—for example, those pertaining to agriculture and rural development, infrastructure and urban development, and public health—often exhibit an important environmental component or dimension. The World Bank, in a recent “green accounting” of its lending portfolio in the 3 years following the Rio Earth Summit, reported that almost 10 percent—some $6.5 billion—of its cumulative portfolio was devoted to projects with environmental objectives. These have included emergency assistance to Russia to contain and clean up a massive oil spill near the Arctic Circle, support for pollution control and abatement in India, protection of the Baltic Sea from ecological degradation in Latvia, and improvement of solid waste collection and disposal in Lebanon. 
The funding of environmental activities and projects in the developing world, in particular as they relate to the global environment, is also the mission of the Global Environment Facility (GEF), another international financial institution. Established in 1991 as a pilot program and restructured and refinanced in 1994 through a 4-year replenishment of just over $2 billion of its core trust fund, GEF provides developing countries with grants and low-interest loans for projects and activities that aim to protect the global environment and thereby promote environmentally sound and sustainable development. GEF grants and other forms of funding are available for projects and other activities that address climate change, biological diversity, international waters, and depletion of the ozone layer. Activities addressing land degradation—primarily desertification and deforestation—as they relate to these four focal areas are also eligible for GEF funding. In fiscal years 1993-95, the U.S. government provided a total of $120 million toward its agreed-upon share of GEF’s trust fund replenishment (set at $430 million) to support the institution’s activities (see app. IX). We provided the Departments of State, Commerce, Energy, and the Treasury; the Environmental Protection Agency; the United States Agency for International Development; and the United Nations Environment Program with a draft of this report for their review and comment. All of these agencies and organizations offered minor technical or editorial corrections, which we have incorporated in the report as appropriate. USAID and DOE commented that only a small fraction of their reported spending had a reasonably close and direct relationship to the agreements covered by our review. The bulk of their spending, they stressed, had only an indirect and incidental relationship to these agreements. 
USAID noted that its reported spending included $15 million obligated to the Montreal Protocol as a result of an earmark on the agency’s appropriation and several thousand dollars to train officials of newly independent states on the compliance procedures of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). The remainder of USAID’s spending was reported primarily because the agency considers the environment to be a key element of economic development. USAID noted that it began including the environment as an integral part of its development programs 20 years ago, long before many of the accords covered by our review were conceived, and would be funding these environment-related programs as a basic part of its mandate even if the accords did not exist. DOE commented, similarly, that only about $2.2 million of its reported spending had a reasonably direct relationship with the U.N. Framework Convention on Climate Change. This amount represents the funds that DOE’s grantees and contractors spent on activities related to assessments of the Intergovernmental Panel on Climate Change, which provides technical advice to the Framework Convention. The remainder of the $300.1 million, DOE noted, represents funding for scientific research that relates to the Framework Convention and other international environmental agreements only indirectly and incidentally. We have added language throughout the report, as appropriate, to make it clear that the spending reported by the agencies that were part of our review includes both expenditures that relate directly to the concerns and objectives of the 12 agreements covered and expenditures that have a much more indirect and incidental relationship to these agreements. We also note that much of the agencies’ spending is legislatively or presidentially mandated and would take place even in the absence of international environmental agreements. 
These qualifications and caveats, in our estimation, minimize the potential for misunderstandings and misinterpretations of the data contained in our report. (See apps. X and XI for State’s and Commerce’s general concerns about the data contained in this report.) We conducted our review from November 1995 through September 1996 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time we will send copies to the Secretaries of State, Commerce, Energy, and the Treasury; the Administrators of USAID and EPA; and the Director, Office of Management and Budget. Copies will be made available to others on request. Please call me at (202) 512-6111 if you or your staff have any questions. Major contributors to this report are listed in appendix XII. The Chairman, Senate Committee on Foreign Relations, requested that we determine or identify the overall level of federal funding for international environmental activities, including specific programs, treaty negotiations, information exchanges, conferences, and research. Specifically, he asked us to identify the (1) funding of international environmental programs and activities by federal agencies and (2) federal financial support for the environmental programs and activities of specialized agencies of the United Nations and multilateral financial institutions such as the World Bank, regional development banks, and the Global Environment Facility. In subsequent discussions with the requester’s office, it was agreed that the scope of the work should be narrowed from a governmentwide survey of spending related to more than 170 international agreements to a more feasible review of spending by a smaller group of agencies in connection with a select number of agreements. 
It was agreed that we would determine the spending—exclusive of assessed and voluntary contributions, which were the subject of a separate GAO review requested by the Chairman—of a sample group of five federal agencies in connection with 12 selected international environmental agreements and that we would obtain expenditure data for the 3 most recent fiscal years for which such data were available (fiscal years 1993 through 1995). It was agreed, furthermore, that the agencies would be requested to provide us with spending data whether or not their spending directly resulted from the United States’ having become a party to a particular international environmental agreement, as long as the spending was generally supportive of the purposes and objectives of the agreement in question. Finally, it was agreed that we would obtain limited data on contributions to the United Nations Environment Program, the World Bank and other multilateral development banks, and the Global Environment Facility. After consultations and discussions with representatives of a number of executive branch agencies, we judgmentally selected five agencies from which to gather environmental spending data. These agencies—the Department of State, the United States Agency for International Development (USAID), the Environmental Protection Agency (EPA), the Department of Energy (DOE), and the Department of Commerce—were selected on the basis of such considerations as the Committee’s interest and jurisdiction (the Department of State and USAID), the nature of the agencies’ roles, missions and activities, the magnitude and importance of their environmental programs, and our assessment of the agencies’ ability to respond in a timely manner to our requests for data. 
Similarly, on the basis of discussions with knowledgeable agency officials, consultations with experts on international environmental matters, and prior GAO work dealing with international environmental agreements, we judgmentally selected 12 agreements in connection with which agencies’ spending data would be sought. (See app. III for a list and description of these agreements.) We used a data collection instrument, in combination with interviews of agency officials and reviews of agencies’ documents, to collect the financial and programmatic information for this report. While we asked the agencies to provide us with data on actual expenditures as opposed to obligations, two agencies—DOE and USAID—said that they would have difficulty doing this in the time frame stipulated and provided us with spending data in the form of obligations instead. Significant amounts of funds were appropriated to one department or agency but spent by another. To avoid double counting, we asked agencies surveyed not to report obligation and expenditure information for funds transferred to other agencies. In instances in which funds were transferred between agencies, the figures in our report show obligations and expenditures as reported by the agency that received the funds. We did not verify the accuracy and completeness of the information provided to us. By federal statute, the Department of State has been assigned the role of coordinating and overseeing the U.S. government’s activities in the international environmental arena. Under section 504 of the Foreign Relations Act of Fiscal Year 1979 (P.L. 95-426), as amended, the Secretary of State is given primary coordination and oversight responsibility for all major science or science and technology agreements and activities between the United States and foreign countries, international organizations, or commissions of which the United States and one or more countries are members. 
The Department’s role includes recognition, support, assessment, and continuing review of international environment, science, and technology matters to maximize the benefits and minimize the adverse consequences of such matters in the conduct of the nation’s foreign relations. While cooperative international environment, science, and technology activities principally originate in and are implemented by other executive branch departments and agencies of the U.S. government (many of which make significantly greater expenditures for such purposes than does the State Department), they are subject to the Department’s oversight and coordination to ensure consistency with overall U.S. foreign policy. This oversight and coordination responsibility rests primarily with the Department’s Bureau of Oceans and International Environmental and Scientific Affairs, which ensures that the U.S. government’s international environment, science, and technology interests and activities are integrated into U.S. foreign policy and that they receive appropriate consideration, focus, and emphasis in foreign policy deliberations and conclusions. The Bureau of Oceans and International Environmental and Scientific Affairs, assisted by other bureaus and offices, manages the interagency process for authorization to negotiate and conclude cooperation agreements. Under the auspices of the National Security Council, the Bureau chairs a series of Interagency Working Groups to oversee policy deliberations in the fields of oceans, the environment, and science and technology. It also oversees the interagency U.S. Climate Change Country Studies Program, which provides developing countries and countries with economies in transition with financial and technical assistance to address the threat of global climate change. 
Recently, the Secretary of State elevated the priority of environmental considerations to the highest level within the Department, instructing top officials to integrate environmental and natural resource issues into their planning and daily activities both within the United States and in operations abroad. Asserting that “America’s national interests are inextricably linked with the quality of the earth’s environment” and that “worldwide environmental decay threatens U.S. national prosperity,” the Secretary pledged that the administration would seek further reductions in greenhouse gases (emissions that contribute to global warming) and push for Senate ratification of the U.N. Convention on Biological Diversity and an international agreement known as the Law of the Sea Treaty. As the nation’s principal foreign development assistance agency, USAID provides less-developed nations with substantial funding, primarily in the form of grants and contracts, to assist them in achieving economic growth and at the same time addressing environmental and other practices that impede development. The recipients of USAID’s funding may include other U.S. government agencies, foreign government ministries, international multilateral organizations and programs, nongovernmental organizations, private corporations, expert consultants, universities, and private voluntary groups, among others. Since USAID is primarily a conduit of funding to others, most of the programs and activities it supports are actually implemented outside the agency by nonagency personnel. In recent years, USAID has fundamentally redefined its mission and long-term objectives. Created in 1961 to respond to the threat of communism and to help poorer nations develop and progress, USAID has approached the challenge of development more directly since the end of the Cold War, unconstrained by considerations of superpower competition. 
In so doing, it has articulated a strategic vision that embraces the concept of sustainable development as a defining principle of its mission. This concept, which was endorsed by the world community at the 1992 United Nations Conference on Environment and Development in Rio (the “Earth Summit”), has been defined, in the simplest terms, as development that meets the needs of the present without compromising the ability of future generations to meet their own needs. In 1994, USAID published a document, Strategies For Sustainable Development, that articulated the agency’s long-term objectives, specified their relevance to U.S. interests, described how the objectives would be pursued, and identified implementing mechanisms as well as standards for measuring success. Recognizing the threats that pollution, environmental degradation, resource depletion, and unsustainable population growth posed for international peace, stability, and the economic and political interests of Americans and others, the document placed considerable emphasis on strategies for linking development assistance and protection of the environment. In this major area of emphasis, USAID announced that it would pursue two strategic goals: (1) reducing long-term threats to the global environment, particularly the loss of biodiversity and climate change, and (2) promoting sustainable economic growth locally, nationally, and regionally by addressing environmental, economic, and developmental practices that impede development and are unsustainable. 
In pursuing an integrated approach to environmental issues, USAID said that it would focus on programs that involve, among other things, energy efficiency improvements, expanded use of renewable energy technologies, and limiting deforestation and the burning of forests and agricultural lands; promoting innovative approaches to the conservation and sustainable use of the planet’s biological diversity at the genetic, species, and ecosystem levels; improving agricultural, industrial, and natural resource management practices that play a central role in environmental degradation; strengthening public policies and institutional capacities to protect the environment; and supporting, as resources permit, applied research on key environmental issues, technology transfer, scientific exchanges, the development of human resources, and public education on issues affecting the environment.

DOE is a major participant in the U.S. Global Change Research Program, which has as its objective the improved prediction of global change, including climate change, as a basis for sustainable development. Mandated since 1990 by the Global Change Research Act (P.L. 101-606), this multiagency research program focuses on the scientific study of the Earth system and its components, including the oceans, the continents, snow cover and sea ice, and the atmosphere. The program is under the overall direction of the National Science and Technology Council’s Committee on Environment and Natural Resources, which defines national goals for federal investments in environmental and natural resource research and development and provides leadership for the strategic planning for, the coordination of, and the ranking of environmental research and assessment objectives across all federal agencies. 
Global change research is aimed at improving capabilities for documenting and assessing potential short- and long-term changes in the Earth system and the implications of these changes on climate, surface ultraviolet radiation, land cover, the health of terrestrial and marine ecosystems, and the future availability of resources such as water and food. Global change research assists in the development of improved predictions of extreme events such as floods, droughts, and heat waves, thereby allowing actions to reduce the vulnerability of people and property to natural disasters. The research is organized around a framework of observing and documenting change, understanding processes and consequences, predicting future changes, and assessing options for dealing with change. The large quantities of data generated through these activities require the design and implementation of a sophisticated data- and information-management system to make global change data readily accessible to researchers worldwide. DOE’s Global Change Research Program supports policy needs for scientific information and analyses on greenhouse gases, climate change, and biological effects related to climate change. It also supports the Energy Policy Act of 1992 (P.L. 102-486) and the scientific contribution to international negotiations on climate and provides DOE with the scientific and basic economic tools to evaluate legislative proposals to combat global warming. DOE’s program addresses chiefly the impacts of energy production and use on the global Earth system, primarily through studies of climate response, and includes research in climate modeling, carbon sources and sinks, impacts on vegetation and ecosystems, critical data needs for global change research and early detection of climatic change, and funding for educating and training scientists and researchers in global change. 
DOE’s program also supports research on technologies and strategies to mitigate the increases in carbon dioxide and other energy-related greenhouse gases, and plays a major role in implementing the President’s Climate Change Action Plan on reducing greenhouse emissions through changes in energy supply and improvements in energy efficiency and conservation. In addition, DOE conducts research related to energy issues, including studies of chemical processes in the atmosphere related to energy production and use; atmospheric studies of the lower atmospheric boundary layer; solid Earth processes related to the formation of energy resources and possible changes in surface interactions; long-term solar interaction with the Earth; basic research in plant and microbial biology; technologies to reduce or replace carbon-based fuels for energy production; and international environmental policy studies. Consistent with its key role in the Global Change Research Program, virtually all of the more than $300 million spent by DOE in fiscal years 1993-95 in connection with the 12 agreements covered by our survey was related in some way to the concerns and objectives addressed by the U.N. Framework Convention on Climate Change. EPA is the nation’s chief technical and regulatory agency for environmental matters. As such, it plays a major role not only in domestic environmental protection activities but in international environmental programs and activities as well. For example, the agency is an important participant in international efforts to address such global environmental concerns as climate change, stratospheric ozone depletion, marine and coastal pollution, and loss of biological diversity. EPA’s international programs also serve important U.S. economic, foreign policy, and security interests. EPA’s environmental expertise qualifies the agency to support U.S. 
negotiations with foreign governments on international environmental agreements such as the Montreal Protocol, the Environmental Side Agreements to the North American Free Trade Agreement, and the recent agreement under the London Convention to ban the disposal of radioactive and industrial wastes at sea. EPA’s international cooperative programs allow the United States to benefit from scientific and technical breakthroughs and regulatory innovations achieved in other countries, while cooperation on the development of international environmental standards helps eliminate unnecessary barriers to trade. EPA’s capacity-building programs help other, less-advanced nations develop the institutional and human resources capability to deal with their own environmental protection needs while, at the same time, opening commercial opportunities for U.S. businesses. EPA’s Office of International Activities serves as the focal point and catalyst for the agency’s international agenda, providing leadership and coordination on behalf of EPA’s Administrator. This office works with other EPA headquarters program offices, with EPA’s regions and laboratories, and with other federal agencies, international organizations, and foreign governments to mobilize the scientific and technical expertise available throughout the agency in support of U.S. environmental objectives overseas. For example, EPA’s Office of Air and Radiation works closely with the State Department to implement U.S. responsibilities under the Montreal Protocol, including providing resources for the Montreal Protocol’s Multilateral Facilitation Fund, which financially assists developing countries to phase out ozone-depleting chemicals. EPA’s Office of Enforcement and Compliance Assurance provides nations that seek to build an institutional capacity with technical assistance to develop and implement environmental assessment, enforcement, and compliance techniques. 
The agency’s Office of Policy, Planning and Evaluation supports international efforts to address biodiversity concerns, contributing to work on the economic aspects of biodiversity and the economic incentives for conservation and sustainable use of biodiversity. The office also supports international efforts to assess and improve environmental performance and establish credible measures of environmental quality. The agency’s Office of Research and Development conducts collaborative research with the Peoples’ Republic of China to quantify the effects of air pollutants on children’s lung function, thereby strengthening epidemiological information on the relationship between air pollution and respiratory health. The Department of Commerce is an umbrella organization housing a diverse assortment of agencies, including the Patent and Trademark Office, the International Trade Administration, the Economics and Statistics Administration, the National Telecommunications and Information Administration, the National Oceanic and Atmospheric Administration, and the National Institute for Standards and Technology. The Department’s broad mission is to serve and promote the nation’s international trade, economic growth, and technological advancement. It does this through programs that, among other things, offer assistance and information to increase the international economic competitiveness of American business, seek to prevent unfair foreign trade competition, provide business and government planners with social and economic statistics and analyses, improve the understanding of the Earth’s physical environment and oceanic resources, and provide research and support for the increased use of scientific, engineering, and technological development. 
The environmental activities and expenditures of the Department are largely carried out by its National Oceanic and Atmospheric Administration (NOAA), encompassing the National Weather Service, the National Environmental Satellite Data and Information Service, the National Ocean Service, the Office of Oceanic and Atmospheric Research, the National Marine Fisheries Service, the Climate and Global Change Program, the Coastal Ocean Program, and the Data and Information Program. NOAA’s activities implement two primary missions: (a) environmental assessment and prediction and (b) environmental stewardship. NOAA’s environmental assessment and prediction goals are to advance short-term weather warning and forecast services, implement seasonal to interannual climate forecasts, predict and assess decadal to centennial change, and promote safe navigation. NOAA’s environmental stewardship goals are to build sustainable fisheries, recover protected species, and sustain healthy coastal ecosystems. These goals are implemented through NOAA’s historic mission and activities to explore, map, and chart the global ocean and its living resources; protect and provide for rational use of living marine resources and their habitats, including protecting marine mammals and endangered species; conduct research and development aimed at providing alternatives to ocean dumping; provide leadership in promoting sound management of the nation’s coastal zone; describe, monitor, and predict conditions in the atmosphere, oceans, Sun, and space environments; issue warnings against impending destructive natural events; assess the consequences of inadvertent environmental modification over several scales of time; and manage and disseminate long-term environmental information. 
NOAA provides satellite observations of the environment by operating a national environmental satellite system and conducts an integrated program of research and services relating to the lower and upper atmosphere, space environment, and Earth to increase the understanding of the geophysical environment. In addition, it acquires, stores, and disseminates worldwide environmental data through a system of meteorological, oceanographic, geodetic, and seismological data centers. Embracing the concept of sustainable development that has gained currency since the 1992 United Nations Conference on Environment and Development (the “Earth Summit”), NOAA has articulated a strategic vision for the period from 1995 through 2005 in which societal and economic decisions are coupled strongly with a comprehensive understanding of the environment. In accordance with this vision, NOAA has interpreted its mission as describing and predicting changes in the Earth’s environment to provide the environmental information needed to inform policy decisions and conserving and managing the nation’s coastal and marine resources to ensure sustainable economic opportunities.

[Flattened table: the agreements covered by the review and the environmental issues they address, including the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES); a variety of environmental issues of concern to the parties (U.S., Canada, and Mexico); the International Convention for the Prevention of Pollution from Ships (MARPOL), addressing marine pollution caused by ships; air pollution and acid rain; and the generation, transportation, and disposal of hazardous wastes; together with agency spending figures, including total spending related to agreement objectives of $975,225,792 ($974,608,792 plus $617,000) and $593,540,000 ($592,923,000 plus $617,000) attributed to the Commerce Department column.]
USAID officials commented that the agency spent a total of $58,000 during fiscal years 1993-95 on travel directly related to the agreements covered by our review, primarily to ensure that the agency’s programs were not compromised and to leverage additional resources from other donors for USAID-initiated sustainable development programs. The balance of USAID’s reported travel expenditures ($559,000), according to these officials, was for the purpose of sending USAID project managers on site visits to design, oversee, and evaluate the specific projects and programs reported by the agency in connection with the 12 agreements covered by our review.

[Table fragment: regional spending including the West and Central African Region, $2,626,343 (25,000), and totals for fiscal years 1993-95.]

For fiscal year 1993, the Congress appropriated $30 million for the Global Environment Facility. At the end of the fiscal year, however, this amount was transferred, in accordance with the law, to the United States Agency for International Development (USAID) to support activities associated with GEF and the Global Warming Initiative.

The following are GAO’s comments on the Department of State’s letter dated September 12, 1996. 1. We believe that the wording of our report and the many caveats and qualifications contained in footnotes to the accompanying appendixes minimize the possibility that readers might draw erroneous conclusions about the agencies’ spending related to transboundary environmental concerns or about U.S. international environmental policy. Because the requester of our review specified that we should identify spending generally supportive of the purposes and objectives of the 12 treaties included in our review, we did not structure our methodology to distinguish among direct, indirect, and incidental categories of spending—a task that would have greatly increased the difficulty of our work and the resources and time required to perform it. 2. 
As noted above, we believe that the wording of our report, particularly the discussion in the Background section and the many caveats and qualifications found elsewhere in connection with the data reported by the agencies that responded to our survey, minimizes the possibilities for misunderstanding and misinterpretation. For example, we make it clear that the U.S. government’s identification of particular environmental problems and its decision to take action to confront such problems has led to the creation of many governmental programs completely independent of international environmental agreements. We also note that the U.S. government has often been in the forefront in urging other nations to join with it in taking concerted action to deal with transboundary environmental concerns—appeals which have led to the negotiation and conclusion of a large number of international environmental agreements to which the U.S. and other nations are parties. 3. We have added language to our report clarifying that the data reported relate only to direct expenditures, not to voluntary or assessed contributions to international organizations and programs. The latter were the subject of a separate GAO review requested by the Chairman, Senate Committee on Foreign Relations. The following are GAO’s comments on the letter from the National Oceanic and Atmospheric Administration, Department of Commerce, dated September 9, 1996. 1. The data discrepancies referred to have been corrected. 2. Our purpose in citing differences in the agencies’ spending and in noting possible reasons for these differences was to highlight what is evident from the data, not to sound a cautionary note regarding interagency comparisons. 
However, as NOAA correctly states, varying interpretations of our request for spending data by the five agencies and different decisions taken by these agencies regarding which spending to report provide yet another possible explanation for the differences in spending reported by agencies responding to our survey. Edward Kratzer, Assistant Director; Ralph Lowry, Evaluator-in-Charge; Denise Dias, Senior Evaluator.
Pursuant to a congressional request, GAO reviewed federal funding for international environmental activities, focusing on: (1) the level of funding related to 12 prominent international environmental agreements for fiscal years (FY) 1993 through 1995; (2) funding provided by the Departments of State and Commerce, the Department of Energy (DOE), the Agency for International Development (AID), and the Environmental Protection Agency (EPA); and (3) federal financial support for the environmental programs and activities of the United Nations (UN), the World Bank, and other multilateral financial institutions. GAO found that: (1) in FY 1993 through 1995, the Departments of State and Commerce, DOE, AID, and EPA spent a combined total of $975.2 million in support of programs and activities related to the 12 international environmental agreements that were covered by GAO's review; (2) the greatest share of this spending, about 71 percent of the total, was related to the objectives of the United Nations Framework Convention on Climate Change; (3) the next largest shares of the spending, about 20 percent and 5 percent, respectively, related to the Convention on Biological Diversity and the International Tropical Timber Agreement; (4) AID accounted for the largest single share, 61 percent of the total spending by the five federal agencies, followed by DOE, which contributed nearly 31 percent of the agencies' spending; (5) the spending by both agencies was primarily related to fulfilling the individual missions of those agencies, and was devoted principally to funding specific projects and programs; (6) in both cases, this spending related more closely to the objectives of the United Nations Framework Convention on Climate Change than to the other international environmental agreements covered by GAO's review; (7) the U.S. 
government's financial support for the international environmental programs and activities of nonfederal agencies consisted primarily of financial support for the UN Environment Program (UNEP) and for the activities of the World Bank and other multilateral financial institutions, including the Global Environment Facility; (8) from 1992 through 1995, the United States contributed a total of $74.61 million to UNEP's Environment Fund, which represented about 23 percent of all nations' contributions to the fund during that period; (9) from 1992 through 1995, the United States also contributed a total of $7.09 million to the special purpose trust funds administered by UNEP, which was approximately 11 percent of all nations' contributions to these funds in that period; (10) in FY 1993 through 1995, the United States provided a total of $4.73 billion to support the overall activities of multilateral development banks and other international financial institutions; (11) while it is not possible to determine precisely what percentage of this amount went for environmental projects, the World Bank, which received approximately 70 percent of this funding, recently reported that almost 10 percent of its investment portfolio was devoted to projects with primarily environmental objectives; and (12) another recipient of funds for environmental purposes was the Global Environment Facility, which in the same period received U.S. contributions totaling $120 million to provide developing countries with grants and loans, at favorable terms, for projects and activities designed to protect the global environment.
The Border Patrol is the mobile, uniformed, enforcement arm of INS. Its mission is to detect and prevent the smuggling and illegal entry of undocumented aliens into the United States and to apprehend persons found in the United States in violation of immigration laws. With the increase in drug smuggling operations, the Border Patrol has become the primary drug interdiction agency along United States land borders between ports-of-entry. Border Patrol agents perform their duties near and along about 8,000 miles of United States boundaries by land, sea, and air. The Border Patrol is divided into 21 sectors, 9 of which are along the southwest border. Sectors are further subdivided into stations. To stem the growing flow of illegal entry into the country, the Attorney General announced in 1994 a five-part strategy that included strengthening border enforcement. To support this strategy, the Illegal Immigration Reform and Immigrant Responsibility Act of 1996, among other things, required that the Attorney General increase the onboard strength of Border Patrol agents by not less than 1,000 each year for fiscal years 1997 through 2001. Deployment of new agents to particular sectors along the southwest border has generally corresponded with INS’ implementation of its border strategy. However, because the strategy was designed to allow for flexibility in responding to unexpected changes in the flow of illegal immigration, some sectors have received additional agents before the strategy was implemented in their sectors. With increased hiring, the Border Patrol has experienced dramatic growth in recent years. From the end of fiscal year 1994 to the end of fiscal year 1999, the size of the Border Patrol nearly doubled—from 4,226 to 8,351. 
INS uses a variety of approaches to attract applicants to the Border Patrol, including advertising in magazines and newspapers, on the Internet, in movie theaters, and on billboards; targeting key colleges and universities with degree programs in law enforcement, criminal justice, and police science; attending recruitment events; and visiting military bases to recruit departing military personnel. Although INS has recruited in different parts of the country, it is now focusing its efforts on locations near the southwest border. Those applying to be Border Patrol agents must initially complete a self-screening questionnaire for basic eligibility (i.e., age, education, and citizenship), after which they must successfully complete a multistep hiring process. This process comprises a written examination, which includes a Spanish test or an artificial language test designed to measure an applicant's ability to learn a foreign language (e.g., Spanish); a structured interview with a panel of Border Patrol agents; a medical examination; a drug screening; and a full background investigation. To determine if INS is on track in meeting its hiring goals, we analyzed hiring and attrition data from INS' Budget Office. We met with Human Resources officials to discuss INS' latest hiring shortfall projections. (Earlier phases of the border strategy covered, among other sectors, the Texas sectors of Del Rio, Laredo, and McAllen; under phase III, INS plans to deploy agents to El Centro, CA, Yuma, AZ, and Marfa, TX.) To help put INS' processes and experiences into perspective, we obtained recruiting and hiring information from seven other law enforcement agencies. To provide information on how levels of experience and supervision of Border Patrol agents changed during INS' hiring build-up, we analyzed INS budget data and compared fiscal year 1994 data (before the hiring build-up began) to fiscal year 1998 data (2 years after the start of the hiring mandate). 
To analyze experience, we used data on Border Patrol agents' years of service with INS because INS does not maintain data on agents' length of service with the Border Patrol. However, agency officials told us that most Border Patrol agents begin their INS careers with the Border Patrol, and it is unusual for other INS personnel to transfer into the Border Patrol. To provide information on supervision, we analyzed changes in the ratio of nonsupervisory agents (GS-5 through GS-11) to first-line supervisory agents (GS-12). Such an analysis provides an indication of how supervision may have changed as more agents have been hired, although it may not provide a complete picture of supervision. INS does not centrally maintain data that would enable us to determine the grade or experience of agents who are actually assigned to work with new agents. To provide information on whether the Border Patrol Academy has kept pace with increased hiring and has the capacity to meet the basic training needs associated with future growth, we visited the Border Patrol Academy and the Federal Law Enforcement Training Center (FLETC) in Glynco, Georgia, and the Border Patrol's temporary training facility in Charleston, South Carolina. We met with the Chief of the Border Patrol Academy, instructors, database managers, and FLETC officials. We analyzed Academy databases containing demographic profiles of newly hired agents, final grades, and instructor data. In addition, we reviewed Border Patrol training projections and renovation plans for the Charleston facility and FLETC. We discussed the Charleston facility plans with INS and Border Patrol officials, and we discussed FLETC plans with Treasury officials. To verify the consistency of Border Patrol Academy data, we performed reliability checks on the Academy's demographic profile, final grade, and instructor databases. We verified that the data entry was complete and that data had not been duplicated. 
Academy database managers told us that they verify the data entry of all grade data, and that demographic profile data are electronically scanned from trainee-completed answer sheets. We did not verify the accuracy of the grade or instructor data with Academy class records. We conducted our work at INS Headquarters; its training facilities in Glynco, Georgia, and Charleston, South Carolina; and two hiring sessions in San Diego, California, and El Paso, Texas, from September 1998 to September 1999 in accordance with generally accepted government auditing standards. The Department of Justice provided technical comments on a draft of this report, which we incorporated where appropriate. INS was able to increase the onboard strength of the Border Patrol by more than 1,000 agents in the first 2 years of its 5-year hiring goal, but in the third year (fiscal year 1999) it was only able to increase its onboard strength by 369 agents. This resulted in a net shortfall of 594 agents for the 3-year period ending September 30, 1999. Because of attrition, INS would have had to hire 1,757 agents in fiscal year 1999 to meet that year’s hiring goal. As shown in table 1, to account for attrition, INS has had to hire far more than 1,000 agents in each year to meet its hiring goal. During fiscal year 1997, the first year of its goal to increase the Border Patrol’s onboard strength by 1,000 agents, INS actually hired 1,726 agents, which resulted in a net increase of 1,002 agents. In fiscal year 1998, it hired 1,919 agents for a net increase of 1,035. In fiscal year 1999, INS hired 1,126 agents, but because 757 agents left the Border Patrol during the year, the size of the Border Patrol only increased by 369 agents. The Border Patrol’s 9-percent attrition rate for fiscal year 1999 was actually lower than the 13 percent INS originally anticipated. According to an INS official, during fiscal year 1999, some Border Patrol agents applied for, and were accepted to, other INS positions. 
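The relationship among hires, separations, and net growth described above reduces to simple arithmetic. The sketch below is only an illustration using the fiscal year 1999 figures from this report; the function name is ours, not INS's:

```python
def gross_hires_needed(target_net_increase, expected_separations):
    """Gross hires required to achieve a given net increase in
    onboard strength when some agents are expected to leave."""
    return target_net_increase + expected_separations

# FY 1999: a 1,000-agent net increase with 757 separations would
# have required 1,757 gross hires, as the report states.
assert gross_hires_needed(1000, 757) == 1757

# Actual FY 1999 outcome: 1,126 hires minus 757 separations
# yields a net increase of 369 agents.
assert 1126 - 757 == 369

# Approximate FY 1999 attrition rate: 757 separations against an
# end-of-year onboard strength of 8,351 agents is about 9 percent.
assert round(757 / 8351, 2) == 0.09
```

The same formula explains why the required gross hires grew each year as the attrition rate rose from 5 percent to 13 percent.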
However, in August 1999, an INS official told us that due to funding difficulties, INS would not be transferring these agents until fiscal year 2000. Had the agents transferred as planned, INS would have faced an even larger shortfall of about 900 Border Patrol agents in fiscal year 1999. The attrition rate among Border Patrol agents rose fairly steadily from fiscal year 1994 through fiscal year 1998, which increased the total number of agents INS needed to hire each year to meet its mandate. As shown in table 1, the annual attrition rate for Border Patrol agents was 5 percent in fiscal year 1994, but by 1998, the rate had risen to 13 percent. Although INS maintains data on categories of attrition, such as retirement and termination, it has limited information on why agents leave the Border Patrol. However, its data do show that in fiscal years 1994 through 1998, almost half of the agents who left the Border Patrol left within their first 10 months of service. Since fiscal year 1996, about one-third of the Border Patrol’s attrition occurred during the initial 19-week training period at the Border Patrol Academy. Appendix I contains additional hiring and attrition data, as well as demographic information on newly hired agents. A major goal of INS’ National Recruitment Program, which was established in 1996, has been to generate enough qualified applicants to meet INS’ hiring goal. The program’s efforts have included tracking advertising sources that generated the greatest applicant response and identifying key schools at which it had past success hiring Border Patrol agents. In the first 2 fiscal years of the program, INS met its hiring goal. However, by November 1998, INS foresaw difficulties in meeting its fiscal year 1999 goal and was projecting a hiring shortfall. Much of the problem was INS’ inability to attract sufficient numbers of eligible applicants and retain qualified recruits through the hiring process. 
INS has been initiating actions to improve both its recruiting efforts and hiring process. Difficulties finding eligible applicants and the high occurrence of applicants failing or dropping out of the hiring process resulted in INS not being able to meet its fiscal year 1999 hiring goal. Officials believe that the country’s strong economy and job market have contributed significantly to the agency’s hiring troubles. INS officials estimate that, historically, INS has hired about 4 percent of eligible applicants, but it hired only an estimated 2 percent in fiscal year 1999. Thus, officials estimated that INS would have needed to attract about 75,000 eligible applicants—far more than in the past—to meet the agency’s fiscal year 1999 goal. Being able to hire only a small percentage of applicants has clearly contributed to INS’ hiring difficulties, but based on our discussions with other law enforcement agencies, this situation is not unique to the Border Patrol. For example, the Los Angeles Police Department typically hires about 5 percent of its applicants, the Texas Department of Public Safety about 3 percent of its State Trooper applicants, and the U.S. Coast Guard about 1 percent of its applicants, according to officials of these organizations. The U.S. Customs Service only hired 1 percent of its applicants for inspector positions in fiscal year 1999, although 2 percent of the applicants who applied were qualified to be hired. A small percentage of Border Patrol applicants were hired because most failed the written or physical examination, the interview, or the background investigation, or they voluntarily dropped out of the hiring process. However, INS knows little about why some applicants chose to withdraw from the process. The size of the Border Patrol’s applicant pool declines with each stage of the hiring process, but losses are particularly heavy in its early stages. However, in fiscal year 1999, applicant losses were higher throughout the entire process. 
INS officials estimated that in fiscal year 1996, about half of those who were scheduled to take the written examination actually showed up for the test, and in fiscal years 1997 and 1998, about 60 percent of those scheduled did not report for testing. In contrast, INS estimated about 75 percent of applicants who were scheduled did not report for the written examination in fiscal year 1999. According to an OPM official, a 50-percent no-show rate for initial written testing has been considered typical among government agencies. INS officials do not know why INS' fiscal year 1999 no-show rate increased. Furthermore, many Border Patrol applicants failed a step of the hiring process in recent years, and this was also true in fiscal year 1999. INS estimated about 72 percent of those who took the written test in fiscal year 1999 failed it, and according to an INS official, failure rates were even higher in the last quarter of the year. In addition, a greater percentage of applicants failed the background investigation in fiscal year 1999. INS estimated that about 15 percent failed the investigation in fiscal year 1998. However, it estimated about 40 percent of applicants failed it in fiscal year 1999. According to an INS official, the more stringent security requirements instituted in May 1998 have increased the background investigation failure rate. INS instituted the tighter requirements to address security concerns. INS officials cite other aspects of the hiring process that may have also contributed to INS' hiring difficulties. However, their identification of these contributing factors is largely based on anecdotal information from their program staff, and not on any systematic data collection effort. Officials believe that the length of the standard hiring process—typically 6 months to 1 year—may be a factor in the agency's inability to hire a greater percentage of Border Patrol applicants. 
Although most of the other law enforcement agencies we contacted had hiring processes that fell within the range of 5 months to 1 year, recent recruiting literature points out that recruiters are shortening their hiring processes to avoid losing qualified applicants. Other aspects of the hiring process that INS officials believe may have contributed to hiring problems include the out-of-pocket costs applicants incur during the hiring process and in reporting for duty, and a lack of flexibility regarding location and start dates for newly hired agents. Appendix II contains additional information on these and other factors that may contribute to INS' problems attracting and hiring applicants. To improve its ability to identify and recruit applicants, INS has redirected $2.2 million to enhance its recruiting and hiring initiatives and said it is prepared to redirect additional funds, if needed. However, INS developed these initiatives without adequate data on why it had been unable to retain and hire more Border Patrol applicants. Rather, INS officials said that, in an effort to meet INS' fiscal year 1999 hiring goal, they based most of their initiatives on their review of the hiring process and past recruitment experiences. INS' recruiting initiatives include training more than 200 Border Patrol agents to serve as local recruiters and establishing a recruitment coordinator for each Border Patrol sector as part of INS' overall strategy to increase sector involvement in recruiting and attract more viable recruits. According to an INS official, these recruiting efforts have attracted more applicants, but a greater proportion of recent applicants has been failing the written examination. INS is also considering additional actions that may help recruitment, such as providing hiring bonuses for recruits, and the possibility of raising the full performance level for Border Patrol agents from GS-9 to GS-11. 
According to INS officials, about 30 percent of the nonsupervisory agents are at the GS-11 level. INS officials believe the current classification standard could support an across-the-board increase to the GS-11 level, but recognize that sufficient GS-11 work must exist and be organized and assigned in a manner that would support the GS-11 level. These changes are being considered as part of a broader effort to bring parity to all INS law enforcement positions, as well as achieve parity with law enforcement positions in other federal agencies. Agency officials hope that raising the full performance level will also make joining the Border Patrol more attractive. Many of INS’ hiring initiatives are geared toward reducing the time it takes to hire an agent, although INS does not have systematic data that confirm its lengthy process has contributed to its hiring difficulties. In addition, to better understand why so many applicants who sign up for the written examination never report for testing, INS plans to conduct telephone surveys of those applicants as part of its hiring initiatives. INS also plans to survey applicants who took the written examination to obtain feedback on the initial steps of its application process. Since April 1999, INS has been asking applicants their reasons for declining offers to join the Border Patrol. However, INS does not have plans to collect data on why it is losing applicants at other stages later in the hiring process. Losing applicants at the later stages is costly to INS because it has already committed Border Patrol agents’ time to conduct interviews, and it has spent about $500 on each medical examination and drug screening, and another $3,000 on each background investigation. (See app. II for additional information on INS’ recruiting and hiring initiatives.) As a result of the increased hiring of Border Patrol agents in recent years, the average years of experience among all Border Patrol agents has declined. 
This is true among agents assigned to all nine sectors of the southwest border. For example, between fiscal years 1994 and 1998, the percentage of agents stationed along the southwest border with 2 years of experience or less almost tripled, from 14 percent to 39 percent, and the percentage of agents with 3 years of experience or less more than doubled, from 26 percent to 54 percent. With increased hiring, the average number of nonsupervisory agents (GS-5 through GS-11) assigned to each GS-12 supervisory agent has increased in seven of the nine southwest border sectors. For example, in Arizona's Tucson sector, which experienced the greatest increase, the ratio of nonsupervisory agents to each supervisory agent rose from 8 to 1 in fiscal year 1994 to about 11 to 1 in fiscal year 1998. In Texas' Marfa sector, which had the lowest ratio of nonsupervisory agents to one supervisory agent, this ratio remained at about 6 to 1 over the same period. INS requires that supervisors in the field supervise at least eight subordinate Border Patrol agents. Agencywide, from fiscal year 1994 to fiscal year 1998, the ratio of nonsupervisory agents to one supervisory agent increased from 7 to 1 to 8 to 1. Comparing the ratio of nonsupervisory agents to one supervisory agent from fiscal year 1994 to fiscal year 1998 may provide an indication of how supervision may have changed with increased hiring. However, this analysis may not provide a complete picture of supervision within the Border Patrol. New agents may be assigned to work with GS-9 or GS-11 Field Training Officers who have received special training, or with other nonsupervisory agents. However, even though these agents provide guidance to new agents, they are not officially classified as supervisors. Furthermore, according to Border Patrol officials, new agents may be assigned to work with other nonsupervisory agents who are not Field Training Officers. 
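The supervision ratios cited above are simple quotients of nonsupervisory to first-line supervisory headcounts. A minimal sketch follows; the sector headcounts are hypothetical, chosen only to reproduce a Tucson-style ratio, and the function name is ours:

```python
def supervision_ratio(nonsupervisory_agents, supervisory_agents):
    """Average number of GS-5 through GS-11 agents assigned to
    each GS-12 first-line supervisory agent."""
    return nonsupervisory_agents / supervisory_agents

# Hypothetical sector with 880 nonsupervisory agents and 80
# first-line supervisors: about 11 agents per supervisor.
assert supervision_ratio(880, 80) == 11.0
```

Because Field Training Officers are nonsupervisory, a rising ratio by this measure does not by itself show that new agents receive less day-to-day guidance.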
Because of a lack of data regarding agents who are assigned to work with new agents, and because sectors differ in how they assign new agents, we were unable to measure the level of experience of agents who work with new agents or analyze changes over time. See appendix III for additional analyses comparing grade level and years of service of all Border Patrol agents and those assigned to southwest border sectors, for fiscal years 1994 and 1998. Appendix IV contains a map highlighting the Border Patrol's southwest border sectors. In anticipation of increased hiring, INS opened a temporary training facility in Charleston, South Carolina, to supplement the existing Border Patrol Training Academy, located at FLETC in Glynco, Georgia. Between these two facilities, the Border Patrol Academy has had the capacity to meet the basic training needs associated with its hiring goal. In fact, because INS was unable to maintain its hiring levels in fiscal year 1999, the Academy has had more than enough capacity. The Academy cancelled 10 training sessions in fiscal year 1999 because fewer agents were hired than planned. Furthermore, none of the 28 sessions it conducted were filled to capacity. As of October 1999, the Academy was planning to train about 1,900 new agents in fiscal year 2000, although it may revise this estimate as the year progresses depending on the number of agents INS is able to hire. According to a Border Patrol official, this training projection should allow the Academy to train new agents hired in fiscal year 2000, any additional agents who must be hired to replace those who leave the Border Patrol during that year, and about 600 agents who must be hired if INS is to make up for the fiscal year 1999 hiring shortfall. INS has renovated parts of the Charleston facility to make it usable for training, and more renovations are planned. 
Both INS and FLETC officials have reaffirmed their commitment that Charleston should serve as a temporary facility and that FLETC should provide all INS training as soon as it has the capacity to do so. Renovations and expansions at FLETC are also planned. However, the agencies have come to different conclusions about when the Charleston facility can be closed. FLETC’s position is premised on when it will have the capacity to absorb the Border Patrol training that is currently held at the Charleston facility. However, INS believes the facility cannot be closed until FLETC can accommodate all of INS’ training needs, including any that might arise in the future. Appendix V contains additional information on the capacity of the Border Patrol Academy, instructors, and trainees’ class grades. It also contains more information on the future of the Charleston facility. INS has initiatives under way and is considering taking additional actions to attract more Border Patrol applicants and improve its hiring process. The overall effectiveness of these measures cannot be assessed until INS has fully implemented them. However, even if INS is able to increase the number of applicants, shorten the hiring process, or upgrade the full performance level of agents, experience indicates that these actions alone may not ensure that INS can compensate for the hiring shortfall that has occurred and meet any future hiring goals that are established. Too many Border Patrol applicants may still be unable to pass the steps necessary to be hired, or may not maintain their initial interest in the Border Patrol throughout the hiring process. In the face of these challenges, INS is continuing to explore its options. When faced with an impending hiring shortfall for fiscal year 1999, INS officials expanded their recruiting and hiring efforts in an attempt to meet INS’ hiring goal. 
However, because INS had limited information on why applicants withdrew from the hiring process, it may or may not be addressing all the causes for the shortfall. INS plans to survey applicants who do and do not show up to take the written examination as one step toward helping the agency understand more about its recruiting and hiring problems. At that early written examination stage of the hiring process, INS has spent relatively few funds on any one applicant. As an applicant moves further along in the hiring process, INS invests more of its resources, including making Border Patrol agents available to interview the applicant, and spending $3,000 for a background investigation and almost $500 for a medical examination and drug screening. In addition to surveying those applicants who do not show up for the written test and collecting information from those who decline a job offer, INS could find it informative and cost-effective to learn why some applicants drop out at other stages later in the hiring process. For example, INS could survey applicants, or a sample of applicants, who voluntarily withdraw from the process after passing the interview or the background investigation. We recommend that the INS Commissioner broaden the agency’s plans to survey applicants who register for the written examination by also collecting data on why applicants are withdrawing at other key junctures later in the hiring process. On November 22, 1999, we met with representatives of the Department of Justice, including INS’ Assistant Commissioner for Human Resources and Development, to obtain comments on a draft of this report. They generally agreed with our report and provided technical comments, which we incorporated where appropriate. With respect to our recommendation, they agreed that obtaining additional information on why applicants are withdrawing at other key junctures later in the hiring process would be beneficial. 
They plan to evaluate the feasibility of implementing the recommendation. Copies of this report are being sent to Senator Orrin G. Hatch and Senator Patrick J. Leahy, Chairman and Ranking Minority Member of the Senate Committee on the Judiciary; Representative Henry J. Hyde and Representative John Conyers, Jr., Chairman and Ranking Minority Member of the House Committee on the Judiciary; and Representative Lamar S. Smith and Representative Sheila Jackson Lee, Chairman and Ranking Minority Member of the House Subcommittee on Immigration and Claims. We will also send copies of this report to the Honorable Janet Reno, the Attorney General; the Honorable Doris Meissner, Commissioner, Immigration and Naturalization Service; the Honorable Lawrence H. Summers, Secretary of the Treasury; and the Honorable Jacob J. Lew, Director, Office of Management and Budget. We will also make copies available to others upon request. The major contributors to this report are acknowledged in appendix VI. If you or your staff have any questions concerning this report, please contact me or James M. Blume, Assistant Director, on (202) 512-8777. This appendix provides an overview, by month, of Border Patrol hiring and attrition in fiscal year 1999; attrition information for fiscal years 1994 through 1998; and a demographic profile of new agents hired from fiscal years 1994 through 1998. The demographic information covers agents' age, sex, race, prior military and/or law enforcement training experience, and education level. The rate at which INS hired Border Patrol agents fluctuated throughout fiscal year 1999. Table I.1 provides a monthly accounting of hiring and attrition for the year. As the table shows, the number of agents leaving the agency was greater in some months than the number of agents hired. Table I.1: Border Patrol Hiring and Attrition Data, by Month, FY 1999 (monthly figures omitted). 
Border Patrol annual attrition rates increased from 6 percent in fiscal year 1990 to 9 percent in fiscal year 1999, with some fluctuation in the years between. In fiscal years 1996, 1997, and 1998, attrition rates reached 11 percent, 12 percent, and 13 percent, respectively. As shown in table I.2, close to half of the agents who left the Border Patrol between fiscal years 1994 and 1998 left by the end of their post-Academy training—the period that follows 19 weeks of basic training and concludes 10 months after being hired. Note 1: Academy and post-Academy data provided by the Border Patrol Academy. Total attrition data provided by INS' Budget Office. GAO calculated the number and percentage of the remaining ("All other") agents who separated from the Border Patrol. Fiscal year 1999 data were unavailable at the time of our review. Percentages are rounded to the nearest whole number. Note 2: Percentages may not total to 100 due to rounding. Post-Academy training takes place after agents are assigned to the field. Once a week, agents participate in Spanish and law classes that they must pass to stay with the Border Patrol. Demographic profiles of new Border Patrol agents have remained fairly constant during this period of increased hiring, as shown in table I.3. Among the changes that did occur from fiscal years 1994 through 1998 was a decline in the percentage of newly hired Hispanic agents. Table I.3 covers fiscal years 1994 (n=461) through 1998 (n=1,901) and reports new agents' average age and their percentage distribution by sex and by race (Asian/Pacific Islander, Black, Hispanic, Native American, White, and other). Note 1: Fiscal year 1999 data were unavailable at the time of our review. Percentages are rounded to the nearest whole number. Note 2: Percentages may not total to 100 due to rounding. As shown in table I.4, the percentages of new agents who had prior military and/or law enforcement training experience declined between fiscal years 1994 and 1995. 
However, since then, the percentages have remained fairly constant. Table I.4 covers fiscal years 1994 (n=461) through 1998 (n=1,901), in percent. Table I.5 shows the education level of new Border Patrol agents hired from fiscal years 1994 through 1998. One notable change in the education profile of new agents was an increase in the percentage of agents who had a bachelor's degree when hired. Note 1: The following numbers of records were missing in each year: one in fiscal years 1994 and 1996 (0.22 percent and 0.07 percent, respectively, of the totals); five in fiscal year 1997 (0.30 percent of the total); and three in fiscal year 1998 (0.16 percent of the total). Fiscal year 1999 data were unavailable at the time of our review. Percentages are rounded to the nearest whole number. Note 2: Percentages may not total to 100 due to rounding. This appendix provides an overview of INS' recruitment program, a summary of difficulties INS has faced in trying to meet its hiring goals, and a summary of new initiatives INS is implementing to improve its ability to recruit and hire agents. Since 1996, Border Patrol recruiting efforts have been centralized in INS' National Recruitment Program. One of the program's major goals is to generate enough qualified recruits to reach INS' hiring goals. INS' national recruitment program includes a variety of activities: advertising through a variety of media, including magazines, newspapers, the Internet, movie theaters, and billboards; targeting key colleges and universities that have substantial numbers of students graduating with degrees in law enforcement, criminal justice, and police science; attending recruiting events, such as job fairs and law enforcement officer conferences; and visiting military bases to recruit departing military personnel who have an interest in law enforcement. 
In addition, to increase the diversity of the Border Patrol’s workforce, INS’ national recruitment program and equal employment opportunity staff work with Border Patrol sectors. Headquarters staff and Border Patrol agents work with interest groups at the local level and participate in conferences, job fairs, and other career events in an effort to attract female and minority applicants. In the past, INS has had success recruiting Border Patrol agents from areas near the southwest border. In fiscal year 1998, INS focused its recruiting efforts on the central and eastern part of the country because it believed it might have exhausted the applicant pool in the southwest. However, recruiting in these other areas was not as successful as INS had hoped. As a result, in fiscal year 1999, INS once again focused its recruiting efforts on locations near the southwest border. INS officials believe a number of factors contribute to INS’ difficulties in recruiting and hiring Border Patrol agents. Although not all are unique to the Border Patrol, they nevertheless present recruiting and hiring challenges, such as difficulty attracting enough eligible applicants, high failure and withdrawal rates during the hiring process, a lengthy hiring process, the expenses applicants incur, and little flexibility in assigned location and start date. INS does not have data on the extent to which the last three factors affect its recruiting and hiring efforts. INS must attract far more Border Patrol applicants than it intends to hire because most applicants either do not pass all of the required hiring steps or drop out during the process. However, attracting enough eligible applicants has been difficult. INS officials have pointed to the country’s strong economy and job market as a major reason for INS’ hiring problems. They believe the Border Patrol is competing with private and public employers who can offer jobs in better locations and/or with better pay.
As shown in table II.1, the number of Border Patrol applicants increased each year through fiscal year 1999, although the number of agents INS hired increased only through fiscal year 1998. INS officials provided data on the number of eligible applicants they attracted each year and the number of agents they hired each year, but they did not have data on the number of each year’s applicant pool that was hired in that same year. However, using the data in table II.1, we estimated that, in fiscal year 1999, INS hired about 2 percent of its eligible applicants, compared to 4 to 5 percent in prior years. Although these percentages are estimates, they nevertheless provide an indication of INS’ need to attract an increasing number of applicants each year. According to an INS official, the agency would have needed to attract about 75,000 eligible applicants in fiscal year 1999 if it was to meet its goal to increase the Border Patrol’s onboard strength by 1,000 agents. The vast majority of applicants are not being hired as Border Patrol agents—they either fail one of the steps in the hiring process, or they choose to withdraw. Although this is not unique to the Border Patrol and other law enforcement agencies also hire few of their applicants, high dropout rates have made it difficult for INS to meet its hiring goals. To identify trends in the hiring process and to estimate the number of eligible applicants it would need to attract to increase the onboard strength by 1,000 agents each year, INS developed estimated dropout and failure rates for recent years. According to INS’ estimates: Seventy-five percent of eligible applicants did not show up for the written examination in fiscal year 1999. The percentage of applicants who did not report for testing increased most years since fiscal year 1996, when INS estimated that 54 percent of eligible applicants did not show up for the written examination. 
Thirty percent of applicants who passed the written examination in fiscal year 1999 did not return for their interview. In fiscal year 1998, 43 percent did not return for their interview; in fiscal years 1996 and 1997, about half the applicants did not return. Forty percent of applicants who passed the interview in fiscal year 1999 failed their background investigation. In fiscal year 1998, 15 percent of applicants failed the investigation. Sixteen percent of applicants who passed the background investigation in fiscal year 1999 failed or did not show up for the medical examination. In fiscal year 1998, 18 percent failed or did not show up for the examination. Six percent of those who received a final offer in fiscal year 1999 declined it. In fiscal year 1998, 10 percent declined a final offer. According to an INS hiring official, it has typically taken 6 months to 1 year to hire a Border Patrol agent under INS’ standard hiring process. Other law enforcement agencies have a similarly long hiring process, but because the Border Patrol’s full performance salary level is low compared to some agencies, INS officials believe its applicants may not be willing to wait 6 months to a year for a Border Patrol job offer. Under the standard hiring process, most steps or tests occur sequentially, with various amounts of time elapsing between each. According to an INS official, scheduling the interview and completing the background investigation when suitability issues arise are the main factors affecting the time it takes to hire an agent. Other factors that can increase the time it takes are health issues or a lack of sufficient information provided by the applicant. Prior to November 1998, INS’ Special Examining Unit oversaw the agency’s hiring functions. However, this unit did not closely monitor the time it took to move an applicant through each stage of the hiring process.
Without appropriate monitoring of the hiring process, INS was limited in its ability to identify potential inefficiencies and, thus, the process was longer than necessary. For example, INS officials told us that under INS’ contract with OPM to schedule and provide the written examination, OPM must offer the examination within 5 weeks of an applicant’s registration. However, according to an INS official, the Special Examining Unit was not monitoring this step, and OPM was taking 6 weeks or more to provide written testing. In addition, the Special Examining Unit would rely on INS’ three administrative centers to schedule applicant interviews, and the centers, in turn, would either schedule the interviews themselves, or turn the task over to the sectors. According to an INS official, this scheduling process was averaging 8 weeks or more. INS officials said that the lack of central oversight allowed for chronic delays that significantly added to the total time it took to hire an agent. INS also experienced delays in scheduling preemployment medical examinations for applicants. INS relies on an outside contractor for applicants’ medical examinations. However, according to one INS official, the contractor was slow in assigning applicants to clinics and did not have a tracking system in place to identify delays. In some cases, it was taking 90 days from the time applicants passed their interview to the time they received the results of their medical examination. According to an INS official, at INS’ insistence, the contractor has since established a self-monitoring system to avoid delays and identify situations requiring special attention. In an attempt to shorten the hiring process and attract a greater number of applicants, INS began conducting expedited hiring sessions in fiscal year 1996. These expedited sessions, which INS offered in addition to the standard hiring process, were scheduled periodically in higher-activity locations.
They allowed applicants to complete the written examination, interview, medical examination, drug screening, and fingerprinting over the course of 2 days. In fiscal year 1997, INS began arranging for media attention in the areas where expedited sessions would be held to heighten awareness of the Border Patrol and increase the number of potential applicants. Initially, this strategy was fairly successful both in expediting the hiring process—typically 2 to 3 months were saved—and increasing the number of agents hired. In fiscal year 1997, 24 percent of all agents hired were processed through expedited hiring sessions, and 4 percent of those who registered for the expedited sessions were hired. But subsequently, these sessions produced lower-than-expected turnouts and diminished results. In fiscal year 1998, only 10 percent of all agents hired resulted from the expedited process and 2 percent of those who registered for the expedited sessions were hired, according to INS estimates. According to an INS official, the expedited hiring sessions in fiscal year 1999 also produced disappointing turnouts and results. Because of poor results and the substantial costs associated with administering the expedited sessions, INS decided to discontinue them. INS officials did not know why the expedited hiring sessions held in fiscal years 1998 and 1999 yielded disappointing results. INS held its last such session in May 1999. Table II.2 shows the results, as of July 14, 1999, of the last three expedited hiring sessions INS held. As the expedited hiring process typically takes 3 to 9 months, additional agents may be hired from these sessions.

Table II.2: Results of the Last Three Expedited Hiring Sessions, as of July 14, 1999

                                         Tucson         New York       San Diego
                                         (Jan. 1999)    (Mar. 1999)    (May 1999)
Scheduled for expedited hiring sessions  2,900 (100%)   1,553 (100%)   1,430 (100%)
Took written examination                 497 (17%)      235 (15%)
Passed written examination               143 (5%)       63 (4%)
Passed interview                         136 (5%)       54 (3%)
Still being processed                    81 (3%)        43 (3%)
Security/medical issues                  64 (2%)        38 (2%)
Accepted final offer                     14 (< 1%)      4 (< 1%)
Hired                                    32 (1%)        7 (< 1%)

INS believes the expenses that applicants incur during the hiring process serve as a deterrent and, thus, have contributed to the agency’s hiring difficulties. According to INS, Border Patrol applicants can spend up to $1,500 of their own money traveling to the written examination site and the interview site, and reporting for duty. Recruits must get to their duty station at their own expense, and once there, typically incur the cost of several nights at a hotel before going to the Border Patrol Academy. INS officials believe that INS’ lack of flexibility in assigning location and start date may have contributed to some applicants turning down Border Patrol offers in the past. They explained that INS provided newly hired agents with little choice in the location to which they were assigned, and provided short notice for new agents to report for duty. Traditionally, INS offered newly hired Border Patrol agents little choice in their first duty station, in part, because the Border Patrol wanted new agents assigned to stations outside their home state. According to a 1989 INS study, new agents were not assigned to their home state out of concern that those agents might be more susceptible to bribery and corruption. However, neither INS nor the Border Patrol had data to support this conclusion, and the study strongly recommended that the practice be eliminated. According to a Border Patrol Academy official, as hiring problems developed and filling training classes became a problem, INS began giving newly hired agents relatively little time to report for duty and training.
Officials told us they believed that providing short notice might have been a factor in Border Patrol recruits turning down job offers. The Border Patrol Academy conducted a survey of 10 training classes that took place in fiscal year 1998 and found that new hires received an average of 14 days’ notice to report for duty. The average notice time for new hires in one of the 10 classes was 7 days, and 1 agent said he received as little as 1 day’s notice. Traditionally, INS had tried to give new hires 30 days’ notice to make necessary personal arrangements. Agency officials told us that 30 days’ notice seems appropriate, since agents must report for a 19-week training program in either Georgia or South Carolina within the first days of coming on duty, and training is typically followed by relocation. In the face of INS’ hiring difficulties, the INS Commissioner convened a working group in January 1999 to review INS’ recruiting plan and hiring process. The group made changes to both processes and has plans for further short- and long-term changes that it expects will improve INS’ ability to recruit and hire Border Patrol agents. The Commissioner has redirected $2.2 million to implementing these initiatives and is willing to redirect more funds if needed. The $2.2 million became available after INS canceled 10 fiscal year 1999 training classes due to insufficient numbers of new hires. The following new recruiting initiatives are intended to increase Border Patrol sectors’ involvement in the recruiting process and increase the number of people interested in the Border Patrol: training over 200 Border Patrol agents as recruiters, establishing recruitment coordinators in each sector, establishing a toll-free job information line, and considering future recruiting bonuses.
Most of the following hiring initiatives are intended to reduce the time of the entire hiring process, from the time the applicant signs up to take the written examination to the time INS makes the applicant a final job offer:
conducting written tests sooner,
scheduling interviews centrally,
monitoring the scheduling of medical examinations,
offering “compressed testing” at six locations,
surveying applicants who did and did not show up for the written test,
allowing more choice in job locations among the southwest border sectors, and
allowing more flexibility in start dates.
The working group developed a series of recruiting initiatives aimed at increasing local outreach and heightening local awareness of the Border Patrol. Even before INS developed these new initiatives, it had significantly increased the number of activities in which its National Recruitment Program was involved during fiscal year 1999. One of the major new initiatives involves using Border Patrol agents as recruiters. INS contracted with the same firm that trains U.S. Marine Corps recruiters to train Border Patrol agents as recruiters. In June and July 1999, the contractor provided such training to more than 200 Border Patrol agents. INS also established recruitment coordinators for each Border Patrol sector, who have developed local recruiting plans for the Border Patrol recruiters to implement. These local plans include universities, colleges, and community colleges; military bases and facilities; and local events. According to an INS official, these plans involve increased emphasis at the local level, including more recruiting at community colleges. In May 1999, INS established a toll-free job information line for potential Border Patrol applicants. The information line provides the caller with the following information: how to apply, answers to frequently asked questions, duties and qualifications, physical requirements, and an overview of the hiring process.
According to an October 1999 INS report, the toll-free line was averaging more than 2,000 calls per week. As part of its initiatives, INS officials are also considering providing recruiting bonuses. Such a bonus would take the form of a “signing bonus” for newly hired agents. INS officials have begun implementing a set of hiring initiatives aimed at retaining more applicants through the hiring process so that, in the end, they hire a greater percentage of applicants. Several of the initiatives are focused on reducing the time it takes for an applicant to move through the hiring process because officials believe the length of the process has hurt INS’ ability to hire more Border Patrol agents. INS’ transfer of Border Patrol hiring functions to its National Hiring Center in Twin Cities, Minnesota, in early fiscal year 1999, has improved monitoring of the hiring process. The hiring initiatives include a goal to reduce INS’ overall standard hiring process—from the point an applicant is scheduled for the written examination through the Telephone Application Processing System to the point an applicant receives a final job offer—by at least 1 to 2 months. Thus, an applicant could move through the hiring process in 4 to 5 months if no issues complicate the applicant’s medical examination or background investigation. One focus of INS’ initiatives has been to shorten the time from when an applicant is first scheduled for the written examination through the Telephone Application Processing System to the time the applicant takes the examination. INS’ National Hiring Center has been tracking OPM’s efforts and working with OPM to shorten this step by at least 1 week. INS also expects to reduce the hiring process by 1 to 4 weeks through the centralized scheduling of applicant interviews. Under the new initiatives, INS’ National Hiring Center is working directly with the sectors to schedule interviews, thus eliminating INS administrative centers from the process. 
The National Hiring Center has begun monitoring the time it takes sectors to schedule interviews and is producing internal reports that identify sectors that are lagging behind. The National Hiring Center is now also involved in the process of referring applicants to INS medical contractors for the required medical examination. With the center’s involvement, and its electronic tracking of this step, officials anticipate they can cut in half—from 90 to 45 days—the time between an applicant passing the interview and receiving the medical examination results. In addition to its standard hiring process, INS is now offering “compressed testing” to reduce the time it takes to hire an agent. INS is conducting compressed testing at six locations, five of which are near the southwest border, that collectively account for more than half of the past Border Patrol applicants. Compressed testing will allow the written examination and interview to take place, independent of each other, at these locations at 2-week intervals. Officials hope that compressed testing will reduce the entire hiring process to 3 to 4 months in cases where no issues complicate the applicant’s medical examination or background investigation. In a further effort to improve hiring, INS has contracted with a firm to conduct telephone surveys of applicants who take the written examination, as well as those who are scheduled to take the written examination, but do not report for testing. The survey of applicants who take the examination will obtain feedback on the initial part of the application process, such as the amount of time that passed between applying to take the written examination and taking the examination. The survey of applicants who do not report for testing will ask for the applicants’ reasons for not reporting. 
Officials hope these efforts will help them improve the hiring process and increase their understanding about why potential recruits seem to lose interest before the hiring process really begins. As of September 1999, the development of the two surveys was well under way. Hiring initiatives also include allowing recruits a choice of location among the southwest border sectors to which they can be assigned in the hope that more recruits will accept job offers. INS has taken the position that the Border Patrol needs to be more flexible on this matter if hiring is to improve, and it is asking recruits to identify two preferences out of four general geographic locations along the southwest border. Even before the new initiatives, the Border Patrol agreed to begin allowing more flexibility, and this has increased under the new initiatives. Although new agents are not assigned to their home station, they can now be assigned to their home state or home sector. As previously discussed, INS officials recognize that providing recruits with little notice to report for training may have contributed to job declinations or resignations during basic training. INS officials have the goal of providing recruits with 30 days’ notice to report for duty. According to a National Hiring Center official, this goal is not always achieved, but staff work directly with recruits to arrange as much notice as possible and find a mutually acceptable reporting date. This appendix provides information on how the general composition of the Border Patrol has changed as it has increased in size. As the relative number of agents within each grade level has changed, so too has the average level of experience among agents. The average years of service among agents has declined both agencywide and in the sectors along the southwest border. Also affected by the Border Patrol’s rapid growth has been the average number of nonsupervisory agents assigned to each GS-12 supervisory agent. 
Between fiscal years 1994 and 1998, the size of the Border Patrol increased dramatically, causing a considerable shift in agents’ average years of experience, both agencywide and along the southwest border. At the start of fiscal year 1999, 92 percent of all Border Patrol agents were assigned to the nine sectors along the southwest border. (See app. IV for a map showing the southwest border sectors.) Table III.1 provides data on how the number and percentage of agents at each grade level in the southwest border sectors changed from fiscal year 1994 to fiscal year 1998. Almost all of the nine sectors experienced notable increases in the number of agents onboard between these years, with one sector—Tucson—more than tripling the size of its workforce. More significantly, because all new agents are deployed to the southwest border after completing basic training, the relative number of GS-5 and GS-7 agents in these sectors increased dramatically. Agencywide, the percentage of relatively inexperienced Border Patrol agents increased significantly between fiscal year 1994 and fiscal year 1998. As shown in table III.2, the percentage of agents with 2 years or less experience almost tripled agencywide, from 12 percent to 35 percent. In contrast, the percentage of agents with 5 or more years of service declined, from 74 percent of all agents to 40 percent. Table III.3 shows changes in the level of experience of agents assigned to the southwest border. For example, between fiscal year 1994 and fiscal year 1998, the percentage of agents with 3 years of service or less more than doubled, from 26 percent to 54 percent. In contrast, the percentage of agents with 5 or more years of experience declined, from 70 percent in fiscal year 1994 to 36 percent in fiscal year 1998. 
As table III.4 demonstrates, between fiscal year 1994 and fiscal year 1998, all nine of the southwest border sectors saw increases in the percentage of relatively inexperienced agents, with some sectors experiencing dramatic increases. For example, in fiscal year 1994, 2 percent of the agents at the El Centro sector had 2 years of experience or less but, by fiscal year 1998, 59 percent of the agents had 2 years of experience or less. The McAllen sector also experienced dramatic increases—only 1 percent of its agents in fiscal year 1994 had 2 years of experience or less but, by fiscal year 1998, 54 percent of its agents had 2 years of experience or less. The percentage of agents in the Tucson sector with 3 years of experience or less increased from 18 percent in fiscal year 1994 to 64 percent by fiscal year 1998. As a result of the increased hiring of Border Patrol agents, the ratio of nonsupervisory agents (GS-5 through GS-11) to one GS-12 supervisory agent increased across the Border Patrol—from 7 to 1 in fiscal year 1994 to 8 to 1 in fiscal year 1998. The ratio of nonsupervisory agents assigned to one supervisory agent also increased among the southwest border sectors, from 8 to 1 to 9.2 to 1. Almost all of the nine southwest border sectors saw the span of supervision increase. As table III.5 illustrates, this increase varied among the sectors. At one extreme, in the Tucson sector, the ratio of nonsupervisory agents to one supervisory agent increased from 8 to 1 to 11.2 to 1. In contrast, in the El Paso sector, the ratio of nonsupervisory agents to one supervisory agent decreased between these years, from 9.5 to 1 to 8.4 to 1. New Border Patrol agents are sent to the Border Patrol Academy for a 19-week basic training program within days of reporting for duty at their assigned sectors.
The basic training program covers six subject areas: (1) Spanish, (2) law, (3) operations, (4) physical training, (5) firearms, and (6) driver training, and agents must pass all subjects to graduate. As shown in table V.1, the number of agents who received basic training has grown substantially since fiscal year 1994. [Table V.1: Border Patrol Agents Receiving Basic Training, FYs 1994 Through 1999, reporting numbers and percentages by fiscal year.] Fiscal year 1999 data reflect only classes that had graduated as of September 30, 1999. Table V.1 also shows the number and percentage of agents who did not graduate each year. Agents who do not graduate are those who (1) fail to receive a passing grade of 70 percent in any subject area and are, thus, terminated; (2) are injured during training and receive COP; or (3) resign. The Academy has developed a training projection for fiscal years 2001 through 2005 for planning purposes. Table V.2 highlights the Academy’s 5-year training projection, which calls for a gradually increasing number of new agents each fiscal year. The Academy relies on both permanent and detailed instructors to provide basic training. Detailed instructors are Border Patrol agents—GS-9 or above—who are recruited from the field to work as instructors on a temporary basis—usually for 1 or 2 of the 19-week sessions. Table V.3 shows the number of Border Patrol instructors assigned to the Academy for fiscal years 1994 through 1998. As the number of trainees has increased, the Academy has increasingly relied on detailed instructors. In fiscal year 1995, the Academy more than quadrupled the number of detailed instructors onboard. In fiscal year 1998, more than 75 percent of instructors who taught at the Academy were detailed from the field. Because the Academy could not provide us with data on all its detailed instructors, these percentages actually underrepresent the Academy’s reliance on detailed instructors.
Trainees’ overall grade averages have remained relatively constant since fiscal year 1994, as shown in table V.4, despite the large influx of trainees and detailed instructors. [Table V.4 reports overall grade averages, in percent, for fiscal years 1994 through 1998.] In fiscal year 1996, INS expanded its existing Border Patrol training capacity by opening a temporary, satellite training facility at a former naval station in Charleston, South Carolina. To make the facility suitable for training, INS spent more than $5 million constructing new firing and driving ranges and reconfiguring existing structures into classrooms and dormitories, as well as a fitness center. In fiscal years 1998 and 1999, INS received about $16 million for additional facility renovations, including the consolidation of management, instructor, and administrative offices into a single building, and the development of an “after-hours” study facility and an athletic center. INS and FLETC officials have different views on how long the Charleston facility will need to remain open to provide training. When INS began using the facility in fiscal year 1996, it anticipated closing the Charleston facility once FLETC had the capacity to accommodate all of INS’ training needs. At that time, both FLETC and INS expected the facility to operate for about 3 years. However, in April 1999, FLETC indicated that it would not be ready to assume the Charleston facility’s training load until fiscal year 2001. In October 1999, a FLETC official told us that FLETC had readjusted its April 1999 estimate to the end of fiscal year 2004, or earlier if Border Patrol hiring is less than expected or if funds are appropriated sooner. He explained that the agency’s estimate is based on its ability to reabsorb all Border Patrol training currently held at the Charleston facility. In October 1999, an INS official told us that INS expected the Charleston facility could be closed sometime between fiscal years 2004 and 2006.
INS’ estimate is premised on FLETC’s ability to accommodate all of INS’ training needs, which are dependent on INS’ future hiring requirements and its ability to meet those requirements.

Lori A. Weiss
Barbara A. Guffy
Jennifer Y. Kim
Marianne C. Cantwell
David P. Alexander
Michelle A. Sager

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided information on Border Patrol hiring, focusing on: (1) the Immigration and Naturalization Service's (INS) ability to meet its 5-year goal to increase the Border Patrol's onboard strength by 1,000 agents each year from fiscal years (FY) 1997 through 2001; (2) INS' efforts to improve its recruiting and hiring process; (3) changes in the years of experience and level of supervision of Border Patrol agents during INS' increased hiring; and (4) the ability of INS' basic training program to support the pace at which Border Patrol agents have been hired, including whether the Border Patrol Academy anticipates having the capacity to meet future growth. GAO noted that: (1) INS' recruitment program yielded a net increase of 1,002 Border Patrol agents in FY 1997 and a net increase of 1,035 agents in FY 1998 after accounting for attrition; (2) although INS succeeded in increasing the Border Patrol's onboard strength by 1,000 agents in each of those years, it saw a net increase of only 369 agents in FY 1999 because it was unable to recruit enough qualified applicants and retain them through the hiring process; (3) for the 3-year period ending September 30, 1999, INS experienced a net hiring shortfall of 594 agents; (4) INS has had difficulties attracting and retaining qualified applicants; (5) few individuals who apply to the Border Patrol successfully complete the application process; (6) some fail to pass the rigorous entry examination, medical examination, or background investigation, while others withdraw from the process; (7) in FY 1999, failure and drop-out rates were higher than in the past; (8) to address its hiring problems, INS has redirected $2.2 million to enhance its recruitment program, which includes: (a) initiatives to increase Border Patrol agents' involvement in recruitment and fine-tuning INS' hiring process; (b) surveying applicants who register for the written examination but do not report for
testing to find out their reasons for not reporting, as well as those who do report for testing for their views on the initial part of the hiring process; and (c) asking applicants their reasons for declining Border Patrol job offers; (9) however, INS does not have plans to survey applicants who voluntarily withdraw at other stages later in the process; (10) as hiring has increased, the average experience level of Border Patrol agents has declined agencywide, as well as along the southwest border; (11) the percentage of agents along the southwest border with 2 years of experience or less almost tripled--from 14 percent to 39 percent--between FY 1994 and FY 1998; (12) during the same period, 7 southwest border sectors experienced some increase in the average number of nonsupervisory agents assigned to each supervisory agent; (13) the Tucson sector experienced the greatest increase, with its ratio of nonsupervisory agents to one supervisory agent rising from 8 to 1 in FY 1994 to about 11 to 1 in FY 1998; and (14) by relying on a temporary training facility in Charleston, South Carolina, since 1996, the Border Patrol Academy has been able to provide newly hired agents with required training and, according to a Border Patrol official, is prepared to meet the training needs associated with future growth.
Under the act that created it, the Corporation has a diverse set of responsibilities. These responsibilities include administering national service programs authorized under previous legislation, funding training and service clearinghouses, and undertaking activities related to disaster relief. In addition, the Corporation administers the national service trust, which pays for national service education awards under the statute. For fiscal year 1994, the Congress appropriated $370 million for the Corporation plus $207 million for programs under the former ACTION agency that the Corporation now administers. AmeriCorps*USA allows participants to earn education awards to help pay for postsecondary education in exchange for performing community service that matches priorities established by the Corporation. Participants earn an education award of $4,725 for full-time service or half of that amount for part-time service. A minimum of 1,700 hours of service within a year is required to earn the full $4,725 award. The Corporation requires that programs devote some portion, but no more than 20 percent, of participants’ service hours to nondirect service activities, such as training or studying for the equivalent of a high school diploma. To earn a part-time award, a participant must perform 900 hours of community service within 2 years (or within 3 years in the case of participants who are full-time college students). Individuals can serve more than two terms; however, they can only receive two education awards. The awards, which are held in trust by the U.S. Treasury, are paid directly to qualified postsecondary institutions or student loan lenders and must be used within 7 years after service is completed. 
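The education-award rules above can be restated as a short calculation. The dollar amounts and hour minimums come from the text; the function itself is only an illustrative encoding, not the Corporation's actual accounting.

```python
# Illustrative sketch (not the Corporation's accounting) of the
# education-award rules described in the text.
FULL_TIME_AWARD = 4725.00              # earned with at least 1,700 hours of service in a year
PART_TIME_AWARD = FULL_TIME_AWARD / 2  # earned with 900 hours within 2 years (3 for students)

def education_award(hours_served: int, full_time: bool) -> float:
    """Return the education award earned, or 0.0 if the service minimum is unmet."""
    if full_time:
        return FULL_TIME_AWARD if hours_served >= 1700 else 0.0
    return PART_TIME_AWARD if hours_served >= 900 else 0.0
```

Under these rules, a full-time participant serving 1,700 hours earns $4,725 and a part-time participant serving 900 hours earns $2,362.50; either award must then be used within 7 years.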
In addition to the education award, AmeriCorps*USA participants receive a living allowance stipend that is at least equivalent to, but no more than double, the average annual living allowance received by VISTA volunteers—about $7,640 for full-time participants in fiscal year 1994. Additional participant benefits include health insurance and child care assistance for participants who need them. Individuals can join a national service program before, during, or after postsecondary education. A participant must be 16 or older and be a citizen, a national, or a lawful permanent resident of the United States. A participant must also be a high school graduate, agree to earn the equivalent of a high school diploma before receiving an education award, or be granted a waiver by the program. Selection of participants is not based on financial need. In its fiscal year 1994 appropriations, the Corporation anticipated fielding about 18,350 full- and part-time AmeriCorps*USA participants. The Corporation awarded about $149 million of its fiscal year 1994 appropriations to make about 300 grants to nonprofit organizations and federal, state, and local government agencies to operate AmeriCorps*USA programs. About two-thirds of the grant dollars were awarded through state commissions on national service set up by the 1993 act to provide oversight to state programs. The remaining one-third of the AmeriCorps*USA grant monies was awarded directly by the Corporation to national nonprofit organizations and federal agencies. Grantees were to be selected on the basis of their proposed national service programs’ quality, innovation, and sustainability. Sustainability was evaluated on the basis of community support for a program and a grantee’s ability to raise other funds from multiple sources, including the private sector. 
Grant recipients use grant funds to pay up to 85 percent of the cost of participants’ living allowances and benefits (up to 100 percent of child care expenses) and up to 75 percent of other program costs, including participant training, education, and uniforms; staff salaries, travel, transportation, supplies, and equipment; and program evaluation and administrative costs. Grants are based in part on the number of participants the program estimates it will enroll during the year. If participants leave the program during the year, the Corporation may either allow the program to redirect participant stipend and benefit funds to other program expenses or take back any unused portion of the grant. To ensure that federal Corporation dollars are used to leverage other resources for program support, grantees must also obtain support from non-Corporation sources to help pay for the program. This support, which can be cash or in-kind contributions, may come from other federal sources as well as state and local governments, and private sources. In-kind contributions include personnel to manage AmeriCorps*USA programs as well as to supervise and train participants; office facilities and supplies; and materials and equipment needed in the course of conducting national service projects. Consistent with the legislation, federal agencies can receive grants to support AmeriCorps*USA volunteers who perform work furthering the agencies’ missions. Federal agency grantees are to use their own resources in addition to the Corporation grant to integrate national service more fully into their mission work. Furthermore, as is the case with nonfederal agency programs, Corporation regulations state that federal agencies are ultimately intended to support their service initiatives without Corporation resources. In its first program year, AmeriCorps*USA relied heavily on public support. 
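The cost-sharing caps described above (85 percent of living allowances and benefits, up to 100 percent of child care, and up to 75 percent of other program costs) imply a simple upper bound on the Corporation's share of any program budget. The sketch below is illustrative only, and the budget figures in it are hypothetical.

```python
def max_corporation_share(living_allowances_and_benefits: float,
                          child_care: float,
                          other_program_costs: float) -> float:
    """Upper bound on the portion of a program budget that grant funds may
    cover, using the cost-sharing caps described in the text."""
    return (0.85 * living_allowances_and_benefits   # up to 85% of stipends/benefits
            + 1.00 * child_care                     # up to 100% of child care
            + 0.75 * other_program_costs)           # up to 75% of other costs

# A hypothetical $160,000 program budget:
cap = max_corporation_share(100_000, 10_000, 50_000)
```

The grantee must cover at least the remainder with cash or in-kind support from non-Corporation sources.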
The Corporation’s appropriations accounted for slightly less than two-thirds of resources available for AmeriCorps*USA grantees. When Corporation appropriations were combined with resources from other federal agencies and state and local governments, the public sector provided about 88 percent of the $351 million in total program resources available. Federal resources accounted for 74 percent (about $260 million), while state and local government contributions made up 14 percent ($50 million). Private cash and in-kind contributions constituted the smallest share of resources, amounting to about 12 percent (or about $41 million). Most of the Corporation’s funding for AmeriCorps*USA projects went to providing operating grants and education awards. Of the Corporation’s funding, 61 percent financed operating grants. Slightly over one-quarter supported participants’ education awards, while the remainder went toward Corporation program management and administration. Most of the matching contributions AmeriCorps*USA programs have received came from public as opposed to private sources. About 69 percent of all matching resources came from either a federal or a state or local government source, with the split between cash and in-kind contributions being about 43 percent (about $57 million) and 26 percent (about $34 million), respectively. The remaining 31 percent of matching resources were from private sources, with cash and in-kind contributions accounting for 17 percent (about $23 million) and 14 percent (about $18 million), respectively. In calculating resources available on a per-participant and per-service-hour basis (see table 1), we found that average resources available from all sources per AmeriCorps*USA participant amounted to about $26,654 (excluding in-kind contributions from private sources). This amounted to about $16 per service hour or about $20 per direct service hour, assuming 20 percent of the 1,700 hours of total service was nondirect service time. 
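The per-hour figures above follow directly from the per-participant average and the statutory service minimum; this sketch simply re-derives them from the numbers in the text.

```python
# Re-deriving the report's per-hour figures from its own numbers.
per_participant = 26654   # average resources per FTE participant (all sources,
                          # excluding private in-kind contributions)
total_hours = 1700        # statutory minimum hours for a full-time term
nondirect_share = 0.20    # maximum share of hours spent on training, etc.

per_service_hour = per_participant / total_hours    # about $16 per service hour
direct_hours = total_hours * (1 - nondirect_share)  # 1,360 direct service hours
per_direct_hour = per_participant / direct_hours    # about $20 per direct hour
```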
Although these figures represent resources available for all program expenses, they are not the equivalent of annual salaries or hourly wages for participants. See appendix II for more detailed results and appendix III for sampling error and sensitivity analysis results. It is important not to equate our funding information with cost data. Because most AmeriCorps*USA programs are still in their first year of operations, actual cost cannot yet be determined. Funding and in-kind contributions from sources other than the Corporation were reported to us in May 1995 as resources already received or those that program directors were certain of receiving by the end of their current operating year. Therefore, actual resource and expenditure levels may prove to be higher or lower than indicated by the estimates reported to us. During the course of our review, Corporation officials expressed several concerns about our calculations. First, they believed our estimate should be adjusted to reflect start-up costs incurred in the first program year. The Corporation’s position was that because AmeriCorps*USA programs incur initial-year start-up costs that will not recur in the future, first-year costs will be overstated unless start-up costs are capitalized over several years. We did not attempt to systematically identify start-up costs since we focused on resource availability and not cost data. Moreover, during our site visits, we saw little evidence of start-up costs so high that, unless capitalized, they would cause a significant distortion. Most start-up costs consisted of intangible items, such as curricula development and program planning, rather than conventional capital acquisitions like buildings and machinery. Second, Corporation officials said it is likely that not all education award money will be used. Thus, they believed our estimate for education award funding may be overstated. 
However, we found no reliable data or basis for making such an adjustment to our per-participant estimates. We have included the full value of the awards in our calculations because the Congress appropriates funds specifically for these awards and the funds are held in trust and available to those who earn them for 7 years. Moreover, our methodology is identical to the way the Corporation calculates its own cost estimates. Third, Corporation officials believed our per-hour calculations were overstated. The Corporation believes participants will complete substantially more than the 1,700 service hours required by law. As support, they provided us information on participants serving in the VISTA program and preliminary figures about some participants who have completed their AmeriCorps*USA service. We used the 1,700 figure because this is the minimum established by law, and program participants are required only to attain, not exceed, it. Also, from discussions with officials at the seven programs visited, we found that while some participants might exceed these hours, many others would have difficulty meeting the requirement. This is particularly true in programs that started later in the year than expected. In addition, we do not consider the VISTA participant data appropriate because VISTA is a different type of program, with participation requirements and target populations that differ from AmeriCorps*USA’s. The data the Corporation provided on AmeriCorps*USA participants were based on about 1,000 of an estimated 18,350 current AmeriCorps*USA participants, or about 5 percent. These participants may not be representative of those still serving. The Corporation also believes participants will spend substantially less than 20 percent of their service hours on nondirect service activities. In calculating resources per direct service hour, we used 80 percent of the total 1,700 hours (or 1,360 hours) as the basis for estimating direct service.
While we recognize that the Corporation’s regulations make 20 percent the maximum amount of service time that may be spent on training, programs we visited appeared likely to use the full 20 percent for training. In addition, because programs have not all completed their first year, the Corporation could not provide us with data on the portion of time spent on nondirect service. We found significant differences in levels of resources available for nonfederal versus federal programs (see table 2). On average, AmeriCorps*USA programs operated by nonprofit, state, and local agencies received about $25,800 in cash and in-kind contributions per participant. In contrast, programs sponsored by federal agencies received about $31,000 in cash and in-kind contributions per participant—about 20 percent more than programs administered by nonfederal grantees. In addition, federal agencies relied far more on non-Corporation federal resources than their counterparts. On average, federal agency grantees had about $15,500 in cash and in-kind contributions available per participant from federal sources other than the Corporation. Non-Corporation federal funds accounted for about 50 percent of total resources available to federal grantees. Nonfederal AmeriCorps*USA grantees received resources of less than $800 per participant from non-Corporation federal sources, or about 3 percent of their total resources. For more detailed resource information and a description of AmeriCorps*USA programs sponsored by federal agencies, see appendix IV. In its mission statement, the Corporation had identified several objectives that spanned a wide range of accomplishments, from very tangible results to those much harder to quantify. During our site visits, we observed local programs helping communities. 
AmeriCorps*USA has also sponsored an evaluation of its own that summarized results at a sample of programs during their first 5 months of operation and identified diverse achievements related to each service area. One of AmeriCorps*USA’s objectives was to help the nation meet its unmet human, educational, environmental, and public safety needs, or as the Corporation states it, “getting things done.” Our visits to programs also identified diverse achievements. We observed participants renovating inner-city housing, assisting teachers in elementary schools, maintaining and reestablishing native vegetation in a flood control area, analyzing neighborhood crime statistics to better target prevention measures, and developing a program in a community food bank for people with special dietary needs. Officials at the sponsoring organizations spoke of being able to accomplish tasks that their limited resources had previously prevented them from accomplishing. AmeriCorps*USA’s legislation identified renewing the spirit of community as an objective, and the program’s mission includes “strengthening the ties that bind us together as a people.” We observed several projects focused on rebuilding communities. For example, a multifamily house being renovated was formerly a congregating spot for drug dealers. Program officials believe that after completion, it will encourage other neighborhood improvements. Another team built a community farm market and renovated a municipal stadium, both of which a town official stated will continue to provide economic and social benefits to the community. Another way to meet this objective was to have participants with diverse backgrounds working together. Participants of several programs we visited spanned a wide age range, from teenagers to retirees. Teams also showed diversity in educational, economic, and ethnic backgrounds. 
Participants said that a valuable aspect of the program was working with others with different backgrounds and benefiting from their strengths. Another of AmeriCorps*USA’s program objectives was to foster civic responsibility. We saw evidence of this at programs such as one where participants devoted half of each Friday to working on community service projects they devised and carried out independently. Participants at another program, who organized meetings to establish relationships between at-risk youth and elderly people, commented that this work had taught them how to organize programs, an experience they believed would be helpful as they took on roles in their communities. Training periods included conflict resolution techniques and team-building skills. Both the AmeriCorps legislation and the Corporation’s mission identified expanding opportunities as an objective. In practice, individuals who participate in national service have their educational opportunities expanded by the education awards, which help them pursue higher education or job training. At the sites we visited, participants indicated that the education award was an important part of their decision to participate in AmeriCorps*USA. Programs also supported participants in obtaining high school degrees or the equivalent. According to Corporation regulations, a full-time participant who does not have a high school diploma or its equivalent generally must agree to earn one or the other before using the education award. In one program, a general equivalency diploma (GED) candidate was receiving classroom instruction and individual tutoring. She had recently passed the preliminary GED test after failing the test five times and, after some extra preparation for the math portion, planned to take the actual GED test again.
A larger program that recruited at-risk youth, most of whom did not have high school degrees, provided classroom instruction related to the service that participants performed, such as a construction-based math curriculum. Program officials said most of the participants were enrolled in high school equivalency courses and that at least five had already passed the GED test. We also saw programs that offered participants the chance to get postsecondary academic credit. One such program, affiliated with a private college, offered participants the option of pursuing an environmental studies curriculum through which they could earn up to six upper-level credits at a reduced tuition. Half of the participants had chosen to do so. A second program allowed participants to earn 36 credit hours toward an associate’s degree in the natural sciences through their service, which could lead to state certification as an environmental restoration technician. In addition to formal education opportunities, some participants said they were attracted to AmeriCorps*USA programs because the programs provided service in specific fields. We spoke with several participants who wanted experience in those fields to improve their skills and expand their opportunities. For example, a community policing program attracted 15 participants who were pursuing law enforcement careers. Similarly, in a youth conservation corps program in which most participants had environmental science degrees, many participants sought practical experience to complement their formal education. For more detailed results from our site visits, see appendix V. In commenting on a draft of this report, the Corporation agreed with the amount we reported as federal (Corporation and non-Corporation) cash resources made available to AmeriCorps*USA programs. However, the Corporation took exception to our including anything other than federal cash resources in determining total resources available to these programs.
Other program resources we included were in-kind resources provided by federal agencies and cash and in-kind resources provided by state and local governments as well as by private contributors. The Corporation also disagreed with the methodology we employed to develop and report on total available resources per participant and per service hour. The Corporation believed we should have excluded resources other than federal cash from our calculations because these resources were not a burden to the federal taxpayer, AmeriCorps*USA programs were legally required to obtain these resources, and these resources should have been considered benefits rather than costs. As we have clearly noted in our report, our objective was not to determine whether AmeriCorps*USA was cost-effective. We drew no conclusions about the cost of the program, the value of program benefits, or whether the program was meeting its objectives. Contrary to the Corporation’s view, we believed that ignoring significant amounts of AmeriCorps*USA program resources would obscure the important role that these resources play in fielding AmeriCorps*USA participants. In our view, an accounting of the total resources available to support an AmeriCorps*USA participant provides a useful perspective on the program. This report presents the only information available to date on total resources available to AmeriCorps*USA programs nationwide and captures this information by resource stream—that is, by federal, state and local, and private sources. Knowing the total resources available to the program is critical information for decisionmakers. In addition, such information can demonstrate the degree of partnership between the public (federal, state, and local government) and private sectors. The Corporation’s comments, and our assessment of these comments, appear in appendix VI.
In addition to these comments, the Corporation provided us with technical comments, which we have incorporated into the report where appropriate. We are providing copies of this report to the appropriate House and Senate committees and other interested parties. Please call me at (202) 512-7014 or Wayne B. Upshaw, Assistant Director, at (202) 512-7006 if you or your staff have any questions. Other GAO contacts and contributors to this report are listed in appendix VII. To obtain resource and participant information, we surveyed a random sample of nonfederal AmeriCorps*USA grantees and gathered data on all federal grantees. We asked these grantees to detail their sources and amounts of available program resources and number of participants. To estimate program totals, we projected data from the nonfederal grantee sample to the universe of nonfederal grantees and combined them with federal grantee data. Because nonfederal programs were so numerous, we collected information from a sample of 80 nonfederal programs. Our sample was randomly selected from the 284 nonfederal programs identified by reviewing Corporation files. We received responses from 75 of the 80 programs that we surveyed. We obtained data from them on the sources of their available resources, asking them to include all resources “devoted specifically to your AmeriCorps program” received “for use during your program’s initial funding period.” We also asked them for information on the number of (1) full-time and part-time AmeriCorps*USA participants who were currently enrolled or had successfully completed service requirements and (2) additional full- and part-time participants expected to enroll before the end of the initial funding period. We summed these to reflect the number of full-time-equivalent (FTE) participants who are likely to eventually meet service requirements this year. 
We also gathered the same data—resources available and numbers of participants—from the 13 federal agencies administering AmeriCorps*USA programs. One agency, the Department of Health and Human Services, operated 3 separate programs, so our information covered 15 programs. Since programs are only in their first year, we could not obtain final spending or participant totals. We gathered information on resources that were available to date and those the programs told us they were certain to receive by the end of the initial funding period. We cannot say whether all resources will be used over the course of the funding period. Similarly, our FTE participant total will not reflect either attrition that occurred after we conducted our field work or instances in which slots expected to be filled are ultimately unfilled. The data obtained were self-reported by program officials and were not independently verified with other sources. We provided the grantees with a form identifying the information needed, explained the questions on the form to them, answered their questions about our data needs, reviewed the responses, and followed up with further questions whenever responses were incomplete or inconsistent. Several assumptions underlie our estimates of available resources per participant. First, we assumed that a program will return a pro rata share of its Corporation grant if it has fewer participants than anticipated at the time of its grant application. The Corporation may require a program to return the grant portion that would have gone toward participant living expenses and benefits. We did not make a similar adjustment to a program’s non-Corporation resources because we obtained this information in May 1995, late enough in programs’ operating year for them to predict available resources and participant levels. Second, we considered contributions from public universities as public resources, and those from private universities, private. 
We made this assumption realizing that both public and private universities receive a mixture of public and private support, but given the reliance of public postsecondary institutions on public support, we believe such an assumption is appropriate. Third, we made no adjustment to in-kind contributions reported to us although we recognize that these resources are sometimes very difficult to value. Fourth, we calculated available resources per participant on a full-time-equivalent basis, counting a part-time participant as 50 percent of an FTE participant. In calculating available resources, we excluded private in-kind contributions from our per-participant and per-service-hour calculations. To determine per-participant resources associated with the Corporation’s administrative responsibilities, we combined the following three components. First, we allocated fiscal year 1995 National and Community Service Trust Act appropriated funds for this purpose across the estimated number of AmeriCorps*USA participants and other programs covered under this appropriation. Second, we divided fiscal year 1994 AmeriCorps*USA program planning grants by the number of estimated AmeriCorps*USA participants. Third, we divided fiscal year 1995 grants covering state commission operating expenses by the number of estimated AmeriCorps*USA participants. In calculating total Corporation resources per participant, we added $4,725 per FTE for the education award because the Corporation incurs this liability for each full-time participant. To the extent that participants do not actually take advantage of their awards, funds expended would be lower than our estimate. We produced estimates for nonfederal programs from our sample and added the data on all federal programs to obtain estimated totals for all AmeriCorps*USA programs. We used a ratio estimation methodology to estimate available resources and participation for all nonfederal AmeriCorps*USA programs. 
This method incorporated information on anticipated matching resources and numbers of participants from the programs’ grant applications. To estimate resources, we computed the ratio of actual to anticipated matching resources for our sample programs, and we applied this ratio to total anticipated matching resources for all nonfederal programs. Similarly, to estimate participants, we applied the ratio of actual to anticipated participants for the sample programs to the number of anticipated participants in all programs. These estimates of resources and participants were used to calculate the available resources per participant for nonfederal programs. To estimate total resources and participants for all AmeriCorps*USA programs, we combined estimated totals for nonfederal programs with federal program totals. Our federal program information was not projected from a sample because we had information from every federal program. We used the combined results to produce an overall estimate for AmeriCorps*USA programs.

[Per-FTE resource tables (n = 2,054 and n = 12,519) are not reproduced here; items may not sum to subtotals or totals because of rounding.]

We tested the stability of our per-participant resource estimates. First, we calculated a 95-percent confidence interval around our results to examine the extent of possible sampling error. Second, we analyzed the effects of changing some of the assumptions we made about AmeriCorps*USA grantees’ program operations. Because our estimates incorporated results from a sample of nonfederal programs, a sampling error is associated with these estimates. We estimated both total resources and the number of participants using a ratio methodology, and the calculation of resources per participant is a ratio of these ratios and has its own sampling error. At the 95-percent confidence level, the sampling error for our estimated resources for nonfederal programs of $25,797 per participant is plus or minus $810 (see table III.1).
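The estimation steps just described, scaling anticipated totals by the sample's ratio of actual to anticipated values and weighting part-time participants at 50 percent of a full-time equivalent, can be sketched as follows. All input figures in the sketch are hypothetical; only the method mirrors the text.

```python
# Sketch of the ratio estimation methodology described in the text.
# All dollar and head-count inputs below are hypothetical.

def ratio_estimate(actual_sample, anticipated_sample, anticipated_universe):
    """Scale the universe's anticipated total by the sample's ratio of
    actual to anticipated values."""
    ratio = sum(actual_sample) / sum(anticipated_sample)
    return ratio * anticipated_universe

def fte_participants(full_time: int, part_time: int) -> float:
    """Full-time-equivalent count; a part-time participant counts as half an FTE."""
    return full_time + 0.5 * part_time

# If sample programs reported $1.8 million against $2.0 million anticipated,
# an anticipated universe total of $30 million would be estimated at $27 million:
estimated_resources = ratio_estimate([800_000, 1_000_000],
                                     [900_000, 1_100_000],
                                     30_000_000)
```

Dividing such a resource estimate by the analogous FTE estimate yields the per-participant figures reported above.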
The sampling error for our total estimate of $26,654 per AmeriCorps*USA member is about $120 lower than for the nonfederal estimate. If a program had fewer participants than anticipated when it applied for its AmeriCorps*USA grant, grant money that would have paid for living expenses and benefits for the participants who did not appear could be returned to the Corporation. Our baseline estimate assumed that all of the per-participant grant funds from the Corporation would be returned for each participant the program was short of its anticipated number. We recalculated the resources per participant using the alternative assumption that no Corporation funds were returned (see table III.2). This assumption did not affect our estimates of other federal, state and local government, and private resources. The changes shown affected only a small portion of the estimate of Corporation funding per participant. Many components of the Corporation funding estimate—for example, the education award of $4,725 per participant—were based on the actual number of participants. If a program reported a contribution from a public university, we included it as a state or local government contribution, as appropriate. Because such universities receive private as well as public support, and because the programs could not separate these resources by private or public source, we cannot be certain these were indeed all public resources. We estimated resources per participant by source again, counting public university contributions as private contributions (see table III.3). This assumption affected both the allocation of resources between public and private sources and the total level of resources per participant. Changing the assumption decreased the level of state and local government resources by about $420.
Of this amount, about $60 was cash and was added to private resources. The remainder, about $360, was in-kind; because we did not include private in-kind contributions, the total resources per participant using the alternative assumption decreased by about $360. We were not certain that respondents valued in-kind contributions consistently. To see how sensitive our estimate was to the level of in-kind contributions reported, we estimated totals again, valuing in-kind contributions as 50 percent and then 150 percent of the figures reported to us (see table III.4). This assumption affected matching resource estimates for federal, state, and local government sources, but it did not affect the estimate for Corporation resources. Of the approximately $9,000 match total, about $2,700, or nearly one-third, consisted of in-kind contributions, and about $6,300 consisted of cash contributions. Thus, the total in each alternative differed from the baseline by about $1,350, or one-half the approximately $2,700 in-kind total. AmeriCorps*USA programs operated by federal agencies, on average, entailed a larger commitment of federal resources than nonfederal AmeriCorps*USA programs. The federal agencies largely used their own resources, through either cash or in-kind contributions, to supplement their Corporation grants. To learn more about the funding and operation of these programs, we spoke with officials at all 13 federal agencies that received AmeriCorps*USA grants for the 1994-95 program year. Thirteen federal agencies were awarded AmeriCorps*USA grants in 1994 to fund 15 programs. Of the nearly $149 million awarded in Corporation operating grants, federal agencies received about $14.6 million, or about 10 percent of the total. About 2,400 participants, or 16 percent of all participants, served in programs sponsored by federal agencies.
Total resources available for federal agency-sponsored programs ranged from about $22,200 per participant for the National Institute for Literacy’s Literacy*AmeriCorps program to $66,700 per participant for the Department of the Navy’s Seaborne Conservation Corps (see table IV.1). We did not analyze the reasons for the differences in per-participant resource availability because it was beyond the scope of this study. AmeriCorps*USA programs sponsored by federal agencies varied in size from 22 full- and part-time AmeriCorps*USA participants for the Department of Veterans Affairs’ program to about 1,200 for the Department of Agriculture’s (USDA) program. The number of operating sites per program varied as well. For example, the Navy’s program had one site in Galveston, Texas, while USDA’s program operated at 326 sites across the country. Many of the federal agencies did not directly administer their AmeriCorps*USA programs. These agencies subgranted their Corporation awards to partner organizations, usually nonprofits that had responsibility for day-to-day operations of the programs and oversight of AmeriCorps*USA participants. In general, the federal agencies served as grant administrators and liaisons between the Corporation and the nonprofit program partners. In addition, some agencies provided technical assistance and training. The 15 agency programs varied widely in their scope and missions. All information presented here, including descriptions of the agency mission fulfilled by the AmeriCorps*USA program, participant tasks, and funding streams, was provided by agency officials. USDA had the largest of the federal agency AmeriCorps*USA programs. The program operated at 326 sites in 38 states, with some sites hosting only one AmeriCorps*USA participant. These sites, located nationwide, emphasized one of three areas: fighting hunger, protecting the environment, and rebuilding rural America. 
Nutrition education is a major element of the Anti-Hunger, Nutrition, and Empowerment Team’s work. Members provide nutrition assistance to the poor, senior citizens, and schools. Members of the Public Lands and Environment Team help repair and upgrade community facilities, protect watersheds, and preserve and restore national forests. The Rural Development Team helps protect water quality, improve housing, respond to disasters, and generally promote economic development. Most AmeriCorps/USDA participants joined the program in September 1994. The Seaborne Conservation Corps (SCC) is a military-style residential environmental and drug awareness education project sponsored by the Department of Defense/Department of the Navy, the Texas National Guard, and Texas A&M University at Galveston, which has primary responsibility for day-to-day program operations. SCC is a 9-month residential program, and participants are considered part time. SCC provides high school dropouts with an opportunity to earn their general equivalency diploma (GED) and acquire a basic seaman’s license while performing environmental services for the community. SCC evolved out of the Junior Leadership Corps, a program of the Navy’s Drug Demand Reduction Task Force, which was designed to support military dependents. SCC supports the task force’s mission to develop and support programs that decrease the demand for illegal drugs within and, when directed, outside of the Navy. SCC members joined the program in September 1994. The Department of Energy’s Salmon Corps aims at restoring the salmon habitat along the Columbia River Basin in Oregon, Washington, and Idaho. Through the coordination of its operating partner, the Washington, D.C.-based nonprofit Earth Conservation Corps, the Salmon Corps brings together five Native American tribes located in these three states. At the five tribal sites, Salmon Corps participants restore salmon habitats damaged by hydroelectricity production. 
Tasks include removing trash and debris, building fences to restrict livestock access to salmon habitats, and renovating historically significant properties. As is the case with other AmeriCorps*USA programs sponsored by federal agencies, participants do work that the Department is already mandated by law to carry out. The program meets at least two departmental missions: (1) reducing the impact of energy production and use and (2) helping to develop a technically trained, diverse workforce and enhance scientific and technical literacy. Salmon Corps participants joined in September 1994. The Environmental Protection Agency (EPA) sponsors six AmeriCorps*USA programs operated at nine sites. Each program has a different focus that helps fulfill EPA’s missions. Participants of the Drinking Water Contamination program in El Paso, Texas, which is run by the University of Texas at El Paso and the Texas Natural Resources Conservation Commission, identify sources of contamination in public drinking water wells and educate the community on methods of managing and preventing water pollution. Another AmeriCorps*USA team, overseen by EPA local staff, works with residents of 15 native Alaskan villages to reduce, reuse, and recycle their waste. Participants in Oregon and Washington states help public schools adopt energy-conserving technology available through the EPA’s Green Lights program. That AmeriCorps*USA program is a partnership with the Bonneville Power Administration and Oregon and Washington State Energy Offices. Revitalization of inner-city neighborhoods in Boston, Massachusetts, and Providence, Rhode Island, is the focus of the fourth program, which utilizes AmeriCorps*USA participants from City Year, the nonprofit partner. 
In the fifth program, which is operated by nonprofit organizations or state or local government agencies at four sites (Oakland, California; Newark, New Jersey; Atlanta, Georgia; and Tacoma, Washington), AmeriCorps*USA participants restore urban streams and educate residents about the dangers of lead and radon contamination. EPA’s sixth program, the Anacostia River Restoration, is run by the Metropolitan Council of Governments in Washington, D.C. AmeriCorps*USA participants joined EPA’s programs in September 1994. The Department of Health and Human Services’ FamilyServe had AmeriCorps*USA participants working in three Head Start programs, two located in migrant communities in Texas and Florida and one run by a tribal college on Indian reservations in Montana, North Dakota, and South Dakota. AmeriCorps*USA participants assist staff in local Head Start child care centers by, among other things, conducting nutrition programs, linking residents with community medical resources, and providing recreation activities for children. These activities support the Department’s mission to enhance the quality of early childhood development. Head Start FamilyServe participants joined in September 1994. The Department of Health and Human Services’ Administration on Developmental Disabilities (ADD) operates the ADD Corps in three states: Georgia, Alabama, and Pennsylvania. Its goal is to improve the independence, productivity, and community integration of people with developmental disabilities. The ADD Corps furthers ADD’s mission to support and encourage the provision of quality services to persons with developmental disabilities. AmeriCorps*USA participants at the ADD Corps sites, which are run by local university-affiliated developmental disability programs, provide support services to people who are disabled. ADD Corps participants, some of whom are themselves disabled, joined the program in October 1994. 
The Health Resources and Services Administration’s (HRSA) Model Health Service Corps is designed to enhance community health resources. Improving access to quality comprehensive primary health care and related services is a main goal of HRSA. Corps participants, who are health profession students and community health workers, work at one of three program sites: Philadelphia and Pittsburgh, Pennsylvania; and Chicago, Illinois. At each site, HRSA has a local organization as a partner that administers the project: the Health Federation of Philadelphia, the Allegheny County Health Department, and the Chicago Health Consortium. HRSA Corps participants provide services that increase access to health care for community residents. These services include home visits, referrals, transportation, and child care. HRSA Corps participants joined in September 1994. The Department of the Interior funds five projects, located throughout the country, designed to support Interior’s environmental conservation mission. Four of the projects are partnerships of one or more of the Department’s agencies (the National Park Service, the Fish and Wildlife Service, the Bureau of Land Management, and the Bureau of Reclamation) and nonprofit organizations that, with local agency officials, manage the projects and provide administrative support to the projects. Participants in two of these projects help restore and protect the environment in the Florida Everglades and along the Rio Grande. The Student Conservation Association is the nonprofit partner for both of these projects. Participants at a project at Fort Ord in Monterey, California, help transform the former military base into public recreational land. The California Conservation Corps is the nonprofit partner. 
At the Southern California Urban Water Conservation project, located in Los Angeles, California, a partnership with the nonprofit Executive Partnerships for Environmental Resources Training (ExPERT), participants distribute water-saving fixtures to low-income residents. The fifth project, run by the Department’s U.S. Geological Survey, has sites in Virginia, Hawaii, California, Nebraska, Wisconsin, and Georgia, where participants help staff update national geological and hydrological information. AmeriCorps*USA participants joined the Interior programs in September 1994. The Department of Justice’s JustServe program operates through recipients of its Weed and Seed program grants. Justice’s Weed and Seed program promotes community-oriented approaches to crime-fighting. Weed and Seed programs are designed at the local level and run by a coalition of local government representatives and community members. Local Weed and Seed grant recipients were required to incorporate a plan for using AmeriCorps*USA participants into their Weed and Seed grant applications. AmeriCorps*USA participants work at one of seven Weed and Seed sites: Seattle, Washington; Los Angeles, California; Fort Worth and San Antonio, Texas; Trenton, New Jersey; Philadelphia, Pennsylvania; and Madison, Wisconsin. AmeriCorps*USA participants carry out activities in three priority areas determined by Justice that enhance local Weed and Seed efforts: (1) enhancing community police efforts by working with local police staff, (2) assisting schools by conducting conflict mediation and drug prevention programs, and (3) supporting social services by helping community members obtain services. JustServe participants joined the program in September 1994. Youth Fair Chance (YFC) is a Department of Labor pilot program designed to concentrate resources and services in high-poverty areas to benefit the community. 
The Department awards YFC grants to local organizations, such as private industry councils, mayor’s offices, or local nonprofit organizations, that run the YFC projects. AmeriCorps*USA participants augment existing YFC projects, increasing resources available at the eight sites: Seattle, Washington; Fresno and Los Angeles, California; Fort Worth, Texas; Memphis, Tennessee; New York City, New York; rural Kentucky; and Baltimore, Maryland. Participants’ activities focus on literacy, public safety, and assisting expectant and new teen mothers and children. YFC participants joined in January 1995. The National Endowment for the Arts is a federal agency that supports the visual, literary, and performing arts. Writers Corps participants provide programs and activities that promote accessibility to the arts for young and old people residing in inner cities, particularly in those neighborhoods with very limited access to art resources. These are mostly neighborhoods with high crime rates and drug activity. The Writers Corps operates at three sites: the Bronx, New York; San Francisco, California; and Washington, D.C. Writers Corps participants joined in September 1994. The National Institute for Literacy, created in 1991, is an independent agency managed by the Departments of Education, Labor, and Health and Human Services. The Institute does not deliver services directly but instead helps to establish collaborations among literacy groups at the state and local level and provides technical assistance to these collaborative programs. The goal of the Institute’s Literacy*AmeriCorps program is to establish models of one-on-one literacy tutoring programs at the local level that can be replicated nationwide. Literacy*AmeriCorps has four program sites: Seattle, Washington; Houston, Texas; New Orleans, Louisiana; and Pittsburgh, Pennsylvania. At each site, a local coalition of literacy organizations manages the program. 
In addition to one-on-one tutoring, AmeriCorps*USA participants help strengthen local literacy coalitions and recruit students from homeless shelters and public housing developments to participate in literacy programs. Literacy*AmeriCorps participants joined in September 1994. The Neighborhood Reinvestment Corporation is a federally chartered, public nonprofit corporation that provides technical assistance to support nonprofit organizations that make up its NeighborWorks network. NeighborWorks is a partnership of 173 nonprofit groups that work with residents and government and business leaders to revitalize urban and rural neighborhoods and make affordable housing available. NeighborWorks Community Corps programs operate in 16 cities: Baltimore, Maryland; Chicago, Illinois; Clearwater, Florida; Las Cruces, New Mexico; Los Angeles and Pasadena, California; New York City, New York; Savannah, Georgia; St. Paul, Minnesota; Allentown, Pennsylvania; Chattanooga, Tennessee; New Orleans, Louisiana; and Hartford, New Britain, New Haven, and Stamford, Connecticut. AmeriCorps*USA participants help provide affordable housing and increase neighborhood volunteer activity to revitalize communities. NeighborWorks Community Corps participants joined in July 1994. The Department of Transportation funds three AmeriCorps*USA programs. Two, located in Baltimore, Maryland, and Vancouver, Washington, are sponsored by the Federal Highway Administration. The third program, located in Washington, D.C., is sponsored by the Department’s Federal Transit Administration. Each project is administered by a nonprofit or state or local government organization. Community Building in Partnership administers the Baltimore program; the Washington Service Corps and City of Vancouver administer the Vancouver program; and the D.C. Service Corps administers the Washington, D.C., program. 
The three programs contribute to the Department’s goal of bringing together transportation and community service by implementing programs that improve the safety and accessibility of transportation systems. AmeriCorps*USA participants at the Baltimore site help with activities that include street maintenance, demolition, and landscaping; at the Vancouver site, they clean up and rehabilitate local walking trails; at the Washington, D.C., site, they assist elderly residents in using public transportation. AmeriCorps*USA participants joined in September 1994. The Department of Veterans Affairs’ Collaboration for Homeless Veterans program operates at two sites: Los Angeles, California, and Houston, Texas. AmeriCorps*USA participants address the needs of homeless veterans. Some of the participants themselves are veterans. The Los Angeles site is managed by local nonprofit community service organizations that provide veterans’ services and the local Veterans Affairs medical center. There, AmeriCorps*USA participants help to renovate a building formerly used as a corporate training site into transitional housing for homeless veterans. The Houston site is jointly managed by the Department’s regional office and Stand Down Homes, a nonprofit organization. There, AmeriCorps*USA participants renovate foreclosed properties, turning them into housing for homeless veterans. AmeriCorps*USA participants joined in September 1994. We visited seven AmeriCorps*USA grantees’ programs to obtain more detailed information on amounts and sources of available resources, to verify program accomplishments, and to gain insight into local program operations. These programs were judgmentally selected to provide examples of a wide range of characteristics, such as level of Corporation funding, type of grant, program size, and mission. The following summaries provide information on participant characteristics, funding sources and levels, and operations. 
We have calculated resources per participant, per service hour, and per direct service hour. We adjusted the Corporation’s grants proportionally based on the number of participants enrolled at the time of our visit as compared with the number of participants originally expected. In calculating resources per participant, we included dollars for two components that local programs did not control—$2,062 for the Corporation’s overhead and $4,725 for the participant’s education award. Although some participants may not use the full education award, the funds are held in trust and remain available for 7 years to those who earn them. In calculating resources per service hour, we used 1,700 hours as the required number of hours for a full-time-equivalent participant completing the program. In calculating resources per direct service hour, we used 80 percent of the required hours because participants must spend at least 80 percent of their time in direct service rather than in education, training, or similar activities. The information provided represents a “snapshot” of the programs at the time of our visits, but some programs may not use all available funding or may be given additional resources, and participant levels may change. The Montgomery County (Maryland) Police Department began operating the Community Assisting Police program in January 1995. The program’s mission is to engage in community education and outreach projects that address the needs for crime control, prevention, and the reduction of fear in underserved or at-risk communities. The Department applied for AmeriCorps*USA funding to implement a community policing initiative that had faltered in the face of growing cultural and language barriers between it and the community. At the time of our visit, the program had the equivalent of 23.5 full-time participants. Participant ages range from 17 to 64. Most are in college or are college graduates. A third of the original participants were bilingual, speaking a total of six languages. Half were pursuing law enforcement careers. 
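The per-participant and per-hour calculations described at the start of this appendix can be sketched in a few lines. The constants come from the text; the total-resource figure used in the example is hypothetical, chosen only to show the arithmetic:

```python
# Sketch of the per-hour resource calculations described in the text.
# The constants are from the report; the example total is hypothetical.
CORPORATION_OVERHEAD = 2_062    # per-participant Corporation overhead
EDUCATION_AWARD = 4_725         # per-participant education award (held in trust)
FTE_HOURS = 1_700               # required hours per full-time-equivalent participant
DIRECT_SERVICE_SHARE = 0.80     # minimum share of hours spent in direct service

def adjusted_grant(original_grant, enrolled, expected):
    """Adjust a Corporation grant proportionally to actual enrollment."""
    return original_grant * enrolled / expected

def per_hour(total_per_participant):
    """Return resources per service hour and per direct service hour."""
    per_service_hour = total_per_participant / FTE_HOURS
    per_direct_hour = total_per_participant / (FTE_HOURS * DIRECT_SERVICE_SHARE)
    return per_service_hour, per_direct_hour

# A hypothetical program with $25,500 in total resources per participant:
print(per_hour(25_500))  # (15.0, 18.75)
```

Because direct service accounts for at most 80 percent of the required hours, the per-direct-service-hour figure is always the per-service-hour figure divided by 0.8.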
Program officials projected that 90 percent would use the education award. The program budgeted for 20 full-time and 10 part-time participants; at the time of our visit, the program had 18 full-time and 11 part-time participants. More recently, the program has lost six of its original participants for reasons including insufficient financial support, employment or education opportunities, and the inability to function independently in an unstructured environment. New participants have been trained and have replaced them. Participants complete 1 week of training at the outset, which focuses on technical skills and team building. They work either as community mobilizers or victim assistance advocates at headquarters, six satellite facilities, five district stations, one mobile facility, the State Attorney’s Office, and the County Division of Victim Services. Each week, the participants meet for 2 hours of additional training and reflection. Participants record their hours and activities on weekly time sheets signed by their supervisors. In addition, supervisors evaluate their efforts and work using a detailed performance evaluation form after the first 2 months, after 6 months, and at the end of participants’ service. Full-time participants received biweekly living allowance stipends of $400, while part-timers received $140. During site visits at two satellite police stations, we observed participants performing outreach services at schools, businesses, and residential groups. Participant projects included distributing flyers to alert residents of area car thefts, coordinating a school Crime Awareness Day, organizing a date-rape presentation at a high school, analyzing neighborhood crime statistics to identify developing problems, providing bilingual referrals and assistance to crime victims, and educating senior citizens about how to protect themselves from crime. 
The Boston University School of Public Health is the primary administrator of the AmeriCorps Health and Housing Fellows program. Boston University collaborates with three partner institutions to operate the program: University of Alabama at Birmingham, University of Texas at El Paso School of Public Health, and Johns Hopkins University School of Nursing at Baltimore, Maryland. This integrated work-service-education program uses the skills of returned overseas Peace Corps volunteers to serve targeted communities needing improvements in health and living conditions. The four universities operate the program in separate urban sites located at housing authorities, transitional housing and homeless shelters, community health and social service agencies, and urban health centers. Corporation program funds are matched by other resources from federal, state, and county sources; service agencies; and private and university sources. At the time of our visit, the program had the equivalent of 17 full-time participants. Program participants are recruited from a pool of returned Peace Corps volunteers who average 30 years of age and hold at least a bachelor’s degree. AmeriCorps*USA participants undertake full-time or part-time community service while pursuing graduate education programs at one of the four universities. Participants receive a work stipend (ranging from $3,397 to $6,500 per year), financial assistance for matriculation in a master’s in public health or a graduate nursing program, an AmeriCorps*USA education award, and, at Boston University, program housing assistance. The program is funded for 36 participants: 26 full-time and 10 part-time slots. Program officials stated that primarily because of late funding decisions, the program was unable to meet its participation goal; the program had only 12 full-time participants at the time of our visit. 
Boston University had 5 of 15 planned full-time participants, Alabama had 5 of 6, and Texas had 2 of 5 planned full-time participants; Johns Hopkins had 10 part-time participants as planned. Part-time participants must complete 900 hours of community service for their 2-year program commitment while enrolled in a university nursing program. Full-time participants must complete 1,700 hours of public health community service for each year of program participation while enrolled in a university public health program. Boston University participants are required to live either in a housing authority project or within the same community where the authority is located; other university sites have no such requirement. Participants are required to keep daily logs of their time and the functions completed and to provide monthly progress reports to supervisors for forwarding to program administrators. Boston University, as program administrator, has total program oversight. After conducting an evaluation, Boston University decided to close the Texas site because it was not meeting program participant goals. In Massachusetts, participants implement tenant health and economic self-sufficiency programs specified by the U.S. Department of Housing and Urban Development Family Self-Sufficiency Program. Participants live among and provide services directly to tenants in housing authority projects. A housing authority official commented that these services are provided at her complex at approximately half the cost of an employed social worker; she also said that the direct, accepted relationship between participants and tenants was a benefit. We spoke with one participant who stated that the program provided a real-world application of public health education principles and made obtaining a graduate degree possible through reduced tuition costs, a living allowance and housing, and an education award. 
MAGIC ME America, a nonprofit organization founded in 1980 in Baltimore, Maryland, conducts intergenerational service-learning programs. The organization’s mission is to motivate and educate adolescents by involving them in long-term service with elderly and other needy groups and their communities. MAGIC ME is designed to recruit these groups to volunteer in their communities. In addition to engaging youth and elders in service, MAGIC ME involves school and nursing home staff, high school and college interns, business leaders, and community volunteers. The MAGIC ME organization operates three AmeriCorps*USA programs in San Joaquin County, California; Boston, Massachusetts; and Baltimore, Maryland. AmeriCorps*USA participants establish and conduct MAGIC ME groups and recruit and train volunteers and interns. Program officials reported that in the 1994-1995 period, 25 AmeriCorps*USA participants developed 95 MAGIC ME groups involving 2,660 youth and elderly. In addition, they recruited and trained 500 interns and volunteers. Participants ranged from 20 to 40 years old and came from a wide variety of racial, cultural, socioeconomic, and educational backgrounds. Program officials said that all participants plan to use their education awards to begin or continue their postsecondary education. Many plan to enter fields such as teaching and social work. The program was budgeted for 29 full-time participants. These slots were initially filled, but participation had dropped to 25 at the time of our visit because one participant successfully completed the program, one converted to part-time, and the others left for unstated reasons. Members spend approximately 20 percent of their service time training to develop core competencies needed for establishing and conducting MAGIC ME programs, including topics related to gerontology, youth, intergenerational issues, and community problem-solving. 
Participants report weekly work entries and hours to program monitors; weekly and cumulative service and training hours are recorded separately. Program monitors compile monthly participant progress reports that summarize their entries. Progress reports are compared with earlier monthly work goals developed jointly by the participant and team leader or program administrator. Participants receive a stipend of $364 every 2 weeks for 21 biweekly periods, totaling $7,644. MAGIC ME aims to improve the program’s effectiveness and expand the program to reach youth and elderly nationwide. AmeriCorps*USA helps meet these goals by providing matching program funds and a national presence. Program officials stated that these combined resources have made it possible to expand the number of people served by over 800 percent in the three AmeriCorps*USA program sites. Each of these sites uses a different operating model, which provides MAGIC ME with the opportunity to test new program approaches designed to meet the needs of various communities. The sites range from inner city to agricultural, and participating youth include gang members, gifted and talented students, special education students, and pregnant teens from diverse cultural and ethnic backgrounds. AmeriCorps*USA participants interviewed at the Baltimore site stated that the program has helped them to build their self-confidence and self-esteem and to think of bigger things to do with their lives. All three participants interviewed planned to use their education awards to start or return to college. In addition, we interviewed three staff members at Baltimore area facilities for the elderly who spoke highly of the AmeriCorps*USA participants and their efforts in meeting student, elder, teacher, and care facility personnel needs. Their presence was said to be a key ingredient to the program. 
The Vermont Office of Economic Opportunity (OEO) began operating the Vermont Anti-Hunger, Nutrition, and Empowerment Corps in December 1994. OEO was awarded its AmeriCorps*USA grant in July 1994; however, because this was a new effort, it used the fall of 1994 to plan. As one of several programs funded by a national direct federal agency AmeriCorps*USA grant to USDA, it operates in five sites in Vermont. OEO signed on as an applicant for funding from the USDA’s AmeriCorps*USA grant to initiate a statewide approach to hunger that would increase participation of low-income and rural residents in federal food assistance programs and teach them about nutrition and how to buy and plant food. Participants’ ages range from 19 to 67. Most have either attended or graduated from college. Program officials expect all participants to use their education awards. The program was budgeted for 40 full-time slots. At the time of our visit, there were 33 full-time participants on board. Five original participants had left; since our visit, six additional participants have left. Reasons included financial difficulties, extreme personal needs and issues, inability to work independently without explicit instructions, and the stress caused by living in group houses. Participants complete a minimum of 100 hours of training in areas that include nutrition education, federal and state food assistance policy, and community service. They are divided into five crews of up to eight participants that are stationed in one of five regional sites. Four of the five crews live together in group houses. Participants report weekly work entries and hours to team leaders; weekly and cumulative service and training hours are recorded separately. Team leaders compile monthly team progress reports that summarize the individual entries. Participants receive a stipend of $7,660 for the program year; those beginning the program after its start receive an adjusted amount. 
They receive biweekly checks from which 25 percent of the stipend is automatically deducted to pay their rent. During site visits to a community food bank and a garden tenant program, we saw participants providing special foods and instruction for clients with special dietary needs, conducting outreach efforts to establish a summer food program for the elderly and children as an extension of the school-year program, and teaching low-income and rural residents to plant and buy food to get more and better food for their resources. One member who attended basic education classes and was individually tutored by team members recently passed the preliminary GED test. The Washington (State) Conservation Corps originated in 1983. It is a crew-based youth employment, education, and training program that provides resources, conservation services, and training to meet ecological needs. Through 1993 state legislation, the Corps was mandated to address watershed restoration projects and create jobs in 20 designated counties hardest hit by timber industry reductions. The State Department of Ecology operated the program and conducted projects on a fee-for-service basis for federal and state agencies and private landowners. In 1994, the Corps became affiliated with the AmeriCorps*USA program, thereby providing additional education benefits to program participants. At the time of our visit, the program had the equivalent of 91 full-time participants. Participants’ ages ranged from 18 to 28. Most were high school graduates and local residents. This is a 1-year program combining field work on the job and classroom instruction. Participants complete 160 hours of program training in land restoration, which can be combined with a 36-hour college credit certificate program leading to an environmental restoration technician rating and further college education. Participants work in 1 of 16 crews providing land restoration services. The AmeriCorps*USA program funds only the cost of the participants’ education awards. 
State and work site projects cover the cost of operating the program by paying fees for the Corps’ services. Participants report hourly entries every 2 weeks to their site leader, who in turn reports to the team leader. Both leaders sign off on each participant’s cumulative service and training hours, which are recorded and sent to the Department of Ecology. Participants get a wide variety of field experiences in watershed restoration, reforestation, stream and salmon habitat rehabilitation, forest fire and oil spill response, plus other conservation projects encompassing classroom and field concepts. At the time of our site visit, the program was active at 16 sites; 2 additional sites were temporarily inactive because of inclement conditions. During the site visit, we observed a team conducting flood control area clean-up and an ecology survey for land restoration. A county senior ecologist present said that the team’s work was a valuable resource for land restoration. A program official stated that because the program is highly rated, a large private corporation contracted for a program crew to work on its land. Also, in conjunction with a state community college, the program has developed a college credit certificate program for participants to earn a state-certified environmental restoration technician rating. All 62 AmeriCorps*USA participants who registered for credits with the college received some credit while in the program. Thirty-six AmeriCorps*USA participants have completed the required 36 college credits and are state-certified environmental restoration technicians. The program encourages and supports crew members who are working on their high school equivalency. Between 75 and 85 percent of those working toward the GED have thus far obtained it. The Washington (State) Service Corps originated in 1983 as a state citizen service corps to address the needs of unemployed and needy residents. 
The present AmeriCorps*USA program began through a competitive bid to the Washington Commission on National and Community Service for funding. The program is a team-based youth employment, education, and training program providing literacy, parenting skills, gang and substance abuse prevention, and family and community social services. [Table of resources per FTE (n=276) omitted.] The resources available to the program are 74 percent federal funds and 26 percent state and local funds. Its Corporation grant is the program’s only federal funding. Participants’ ages range from 17 to over 60. Most participants, particularly those in the 18- to 21-year-old group, are from local communities affected by the decline of the timber industry. To a lesser degree, some older citizens and nonstate residents participate in the program. Just over half of the participants have had some college or have college degrees. Although the majority of the youth are high school graduates, some participants are attempting to complete their diplomas or obtain a GED. Participants work in teams at various community sites across the state, including one Indian reservation. Each participant spends approximately 20 percent of the program time in training in areas such as team-building, cardiopulmonary resuscitation (CPR), conflict resolution, and self-esteem. Participants are also trained in specific topics related to project needs, including the natural environment, literacy and tutoring skills, tool and construction skills, and health care outreach. Participant training and work site activity are separately recorded for each 40-hour week during the 11-month program period. Participation is monitored daily, and team leaders sign off on time sheets every 2 weeks. Each participant has a mid-program review with his or her supervisor of events completed and scheduled activities required to complete the program commitment. 
Participants interviewed stated that they plan to apply the learning skills and self-development training they receive in pursuing a college education. Participant program experiences include construction trade and community development work, tutoring students at risk of dropping out of school, and conducting outside school recreational and support services for youth. Many participants felt they were making a visible difference in their communities’ quality of life, such as by building and renovating public facilities and helping other youth. We observed participants working on renovations of abandoned housing, on community facilities such as a farm market and a stadium, and in school classrooms. A town official commented that both the farm market and stadium renovation projects have improved the community and are providing economic and social benefits to citizens. A teacher at another location, whose classroom helper was an AmeriCorps*USA participant, stated that the participant’s work allowed students to get individual help with their particular study problems. YouthBuild Boston is the first replication of the YouthBuild Program that originated in Harlem in 1990. Its AmeriCorps*USA program began operating in October 1994 in Boston. A large, urban program, it received a Corporation grant through the Massachusetts National and Community Service Commission to renovate buildings to provide low-income housing, reduce community environmental hazards, and conduct violence and dropout prevention programs in schools. Its mission is to engage disenfranchised youth in rebuilding their communities and to provide them with the education and skills to become self-reliant and responsible. 
It applied for AmeriCorps*USA funding to double its size, initiate a part-time YouthBuild Teens program, in which YouthBuild graduates serve as mentors and role models to young teens, and expand its services from housing renovation to include environmental, public safety, and education projects. [Table of resources per FTE (n=81) omitted.] Participants’ ages range from 18 to 24. About 75 percent have not completed high school; all are from Boston. Nearly half of the men and 80 percent of the women are young parents, and approximately one-third receive some form of public assistance. While historically one-third of participants have gone on to college, a program official projects that 40 to 50 percent of the participants will use the education award for college or advanced skills training; the two participants with whom we spoke both expected to do so. When the program began in October 1994, it budgeted for 84 full-time and 10 part-time participants; at the time of our visit, 76 full-time and 10 part-time participants were on board. Since that time, a number of participants have left for various reasons, including full-time employment, returning to school full-time, and deciding that the program was not for them. Full-time participants complete a 2-week orientation known as Mental Toughness Training, and 1 week of Department of Labor Occupational Safety and Health Administration safety training before beginning field work at 2 housing projects and 10 vacant lots where they conduct environmental testing. During each 2-week period, participants spend 6 full days in service and 4 days in classes in the morning and in service in the afternoon. While on the job, they learn carpentry skills and how to read blueprints under the supervision of union carpenters. Part-time participants receive training in conflict resolution, mediation skills, and violence prevention. 
They design and conduct workshops on violence, substance abuse, early pregnancy, and dropout prevention programs for Boston middle school students. Full-time participants receive living allowance stipends of $112.50 per week initially and are eligible to earn bonuses and raises to bring their stipends up to $147 per week. Part-time participants receive stipends calculated at $7 per hour for a 20-hour week. Approximately 75 percent of participants are enrolled in high school equivalency classes. The program combines work and academics through a service learning curriculum in which many classes are project related. For example, math classes are construction or architecture related. Each participant has a formal learning plan, and program staff meet weekly on case management. During our site visits to two housing projects, participants were working in teams under the supervision of union carpenters on the renovation of a two-family home that will become affordable housing and on an abandoned five-story building that will become a transitional dormitory for homeless youth. The Vermont Youth Conservation Corps began operating the Youth Corps - National Service Academy program in January 1995. One of several programs funded by a direct federal agency grant to USDA, it is based at Green Mountain College in Poultney, Vermont. Its mission is to restore, maintain, and manage the Vermont National Forest and community resources, while providing participants with opportunities to apply their formal training and develop leadership skills. The Vermont Youth Conservation Corps applied to USDA for assistance to create a program that combines conservation work, community work, and education to reduce backlogged USDA Forest Service work requests that the Service has been unable to address. [Table of resources per FTE (n=17) omitted.] The resources available to the program are 88 percent federal funds and in-kind contributions, and 12 percent private funds. 
Of its federal funds, less than 1 percent comes from its Corporation grant; the remainder comes from the USDA Forest Service, of which $405,000 is a grant to the Corps under terms of the National and Community Service Trust Act. The program also receives private, in-kind contributions in the form of volunteer efforts, supplies, and administration from Green Mountain College. Participants’ ages range from 18 to about 50. Three-quarters have college degrees in the environmental sciences field and most are seeking applied/practical experience to complement their education and training. A program official projects that all participants will use the education award. The program is budgeted for 20 full-time participants; at the time of our visit, there were 17 full-time participants. Participants are not replaced mid-cycle because training and team formation occur at the program’s outset. Participants complete 2 weeks of classroom training in wilderness characteristics and survival, Red Cross first aid, team-building techniques, the natural sciences, community policy and needs, personal responsibility and self-sufficiency, and an orientation to field work on forest projects. Participants work in two crews that are sent to the northern and southern halves of the state to complete field projects. Projects are typically generated from backlogged USDA Forest Service work requests on recreational facilities and trails; wilderness management; watershed, timber stand, and fisheries improvements; and environmental education and interpretive programs. In addition to field work, participants conduct weekly environmental literature searches; participate in a reading, writing, and discussion curriculum; and undertake environmental education projects for the community. Participants live in a dormitory at Green Mountain College. They receive monthly living allowance stipends totaling $7,625 for the program’s duration. Funds for participants’ meals are deducted from the stipend. 
Work and training hours are broken down into four components—service, education, training, and other, which includes community environmental projects—and recorded daily. Time sheets are submitted to crew leaders each week and recorded for participant tracking and program cost accounting. During site visits we discussed participants’ efforts with a Forest Service employee and met crew members before their deployment to work sites in Green Mountain National Forest. Participants have the option of pursuing a curriculum worth six upper-level, environmental studies credits at Green Mountain College with a reduced tuition; half have chosen to do so. Below are our responses to specific points raised by the Corporation in its comments. 1. The Corporation raised concerns about our including private cash contributions, in-kind resources, and any state or local government resources in our calculations because in its view these resources represent benefits, not an additional cost to the federal government. Our objective, however, as clearly stated in the report, was not to identify what is or is not a cost or benefit but rather to identify the various resources available to AmeriCorps*USA programs, including all nonfederal funds and other support. In the report, we categorized all resources by their sources. For example, those resources provided by nongovernment sources were labeled as private cash and in-kind contributions. Similarly, in-kind contributions from all levels of government were identified separately from cash contributions. These resources, which are permitted to be counted toward the programs’ required match, included such things as salaries of federal agency personnel who monitored AmeriCorps*USA programs and administered grants, state natural resource managers’ time to supervise participants, and uniforms provided by a local police department. 
By including these contributions, we were recognizing that they represent resources available to the programs—resources on which those programs depend. 2. The Corporation noted that in our calculations we assumed all participants will use their full education awards. We included the full education award amount as resources available because the Congress appropriates funds specifically for this purpose and funds are held in trust and available to those who earn them for 7 years. Interestingly, in calculating its own cost estimates, the Corporation used the full education award amount, too. Even if we had wanted to predict actual education award usage, there was insufficient experience with the program to date to do so, as the Corporation noted in its response. 3. The Corporation raised three concerns with the methodology we used to develop a calculation of resources per direct service hour: (1) we used the minimum required service hours—1,700 hours; (2) we reduced the 1,700-service-hour figure by 20 percent in calculating direct service hours; and (3) we did not include service hours worked by related but uncompensated volunteers. We used the 1,700-hour figure because it was the minimum established by law and participants were only required to attain it, not exceed it. Although at the time of our site visits comprehensive data were not available on completed service hours, the data we were able to collect on participants indicated that while some would exceed the required 1,700 hours, many others would need to put in extra hours to meet the requirement. Regarding the specific information the Corporation provided on the Washington Service Corps in an enclosure to its letter, the average of more than 1,800 hours worked may represent the experience only of early completers. As we stated in the report (see p. 9), early information on those completing the program may not be indicative of results of programs that are still under way. 
Information from our site visits indicated that when considering all participants, it is likely the average will be closer to 1,700 hours. We reduced total service hours by 20 percent in calculating direct service hours because the Corporation allows programs to allocate that amount of time to education, training, or similar activities. Several programs we visited were on track to spend 20 percent of their service time on such activities. Recognizing that the Corporation does not yet have actual data on the portion of time spent on nondirect service, we used the allowed amount in our calculations. In calculating resources per direct service hour, we included all resources, including those used on nondirect activities such as training, because those activities are an integral part of the program and required by legislation. The Corporation, in its comments, also recognized that nondirect service is a required part of the program. It follows that resources spent on nondirect service need to be included when analyzing resources per direct service hour. At the specific program the Corporation mentioned as devoting less than 20 percent of participants’ time to nondirect service (the Washington Conservation Corps), formal training was expected to be about 10 percent of the required hours. However, that figure does not include additional informal training at worksites. In our calculations of resources per service hour, we did not include, as the Corporation suggested we should, hours worked by uncompensated volunteers who were not AmeriCorps*USA participants. Because our objective was to identify the resources needed to field an AmeriCorps*USA participant, including hours of service generated by other volunteers was not relevant. It should be noted that the Corporation in its own estimates of cost per participant did not include such volunteers. 4. 
The Corporation expressed concern that our estimate of $31,000 in available resources per participant at federal agency grantee programs was higher than its estimates. The Corporation suggested that our calculations did not discount resources for participant attrition. It provided data to us that showed an estimate of $27,600 per participant calculated by using programs’ budgeted resources and expected numbers of participants—data it obtained from grant files. Unlike the Corporation’s estimate, ours was based on resources programs were certain to receive and actual numbers of participants that programs reported about halfway through the program year; we did adjust our estimate for participant attrition. Because the Corporation used expected participants, a higher figure than the actual number, its resources-per-participant figure was lower than ours. The example the Corporation presented (which it identified as the Vermont Anti-Hunger AmeriCorps program but whose figures related to the Vermont Youth Corps - National Service Academy program) indicated that the grantee, the U.S. Department of Agriculture (USDA), will reduce the resources it contributes to the program to reflect attrition. However, Youth Corps officials reported to us in June 1995 and confirmed in July 1995 that they expected to receive the full $450,000 in cash from USDA despite attrition. 5. The Corporation expressed several methodological concerns: (1) that many of the data we used were self-reported, (2) that programs may have overstated the resources they hoped to raise, and (3) that we “expensed” start-up costs and capital costs in the first year. As to the first point, we used the only data available—those which we obtained from the programs themselves—because the Corporation did not have reliable data available. As to the second point, the data programs reported do not appear to be overestimated. 
In fact, the amount of non-Corporation resources the programs we sampled told us they expected to actually receive for the program year totaled only about 93 percent of the resources these programs’ grant applications indicated that they would raise. As to the third point, in our report, we clearly stated why we chose to treat start-up costs as we did (see p. 8). Interestingly, the Corporation in its cost estimates included all expenses programs were expected to incur in the first year—including start-up costs—without allocating them over future periods. 6. The Corporation also was concerned about our inclusion of all resources available to programs when some may not be used and suggested that we should adjust the non-Corporation resources for participant attrition. We disagreed. We obtained our information in May 1995, late enough in the programs’ operating year for them to predict available resources and participant levels. (In contrast, the Corporation’s funding was committed to the programs at the start of the operating year before any participation rates were known; therefore, we chose to adjust the amount of these funds.) 7. The Corporation stated that it is not certain that all of the resources in the “state and local” category are from public sources. Evidence we obtained showed that about 90 percent of the resources were clearly from public sources; the remaining 10 percent were from public postsecondary institutions. Given the reliance of public postsecondary institutions on public support, it was therefore appropriate in our view to categorize the resources they contributed to AmeriCorps*USA programs as public. In any event, our sensitivity analysis showed that categorizing resources from public postsecondary institutions as private resources had very little impact on our estimate (see app. III). In addition to those named above, the following individuals made important contributions to this report: C. Jeff Appel, Nancy K. Kintner-Meyer, and James W. 
Spaulding assisted in collecting and analyzing the data and writing the report; Jill W. Schamberger helped administer the data collection instrument; Susan C. Donna verified site visit information; Edmund L. Kelley helped conduct site visits; Lena G. Bartoli wrote computer programs to aid in our analysis; Steven R. Machlin performed statistical analyses and provided methodological advice on sampling errors; and Leslie D. Albin did the editing.
Pursuant to a congressional request, GAO provided information on the public and private resources being used to support the Corporation for National and Community Service's AmeriCorps*USA program, focusing on: (1) the amount of funds and in-kind contributions used to support program participants; (2) the per-participant and per-service-hour allocation of program resources; and (3) program objectives, anticipated benefits, and achievements to date. GAO found that: (1) for program year 1994 to 1995, Corporation resources available per program participant totalled $17,600, while total resources per participant averaged about $26,654; (2) over one-third of the program's financial resources came from sources outside of the Corporation, mostly from other federal agencies and state and local governments; (3) private sector contributions accounted for about 12 percent of the program's total available resources; (4) most of the Corporation's funding for program projects went to providing operating grants and education awards; (5) per-participant resources were lower for programs run by nonfederal organizations than those funded by federal agencies; (6) total available resources per-service-hour amounted to about $16; (7) Congress intended for the program to help communities address their unmet human, educational, environmental, and public safety needs; (8) the program has achieved a variety of results that support its goals; and (9) the programs reviewed were designed to strengthen community ties and spirit, develop civic responsibility, and expand educational opportunities for program participants and others.
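The per-participant and per-service-hour figures summarized above can be cross-checked with a short arithmetic sketch. This is illustrative only: it uses the reported figures ($26,654 in total resources per participant, the 1,700-hour statutory minimum, and the 20 percent of time allowed for training and other nondirect service), and the per-direct-service-hour value is derived here for illustration rather than stated in the report.

```python
# Illustrative cross-check of the report's resource arithmetic.
# All input figures are taken from the text; the per-direct-hour
# result is a derived value, not a number the report states.
TOTAL_PER_PARTICIPANT = 26_654   # total resources per participant (USD)
MIN_SERVICE_HOURS = 1_700        # statutory minimum service hours
NONDIRECT_SHARE = 0.20           # maximum share allowed for training, etc.

# Resources per total service hour (the report rounds this to about $16).
per_service_hour = TOTAL_PER_PARTICIPANT / MIN_SERVICE_HOURS

# Direct service hours after deducting the allowed nondirect share.
direct_hours = MIN_SERVICE_HOURS * (1 - NONDIRECT_SHARE)
per_direct_hour = TOTAL_PER_PARTICIPANT / direct_hours

print(f"Resources per service hour:        ${per_service_hour:,.2f}")
print(f"Direct service hours (80% of min): {direct_hours:,.0f}")
print(f"Resources per direct service hour: ${per_direct_hour:,.2f}")
```

Dividing by the 1,700-hour minimum yields roughly $15.68, consistent with the "about $16" per-service-hour figure reported above.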
The Results Act is designed to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. Specifically, the Act requires executive agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports. The strategic plan serves as the starting point and basic underpinning of the performance-based management system and includes the agency’s mission statement and its long-term goals and objectives for implementing the mission. Treasury submitted its first strategic plan under the Results Act to Congress and the Director of OMB, as required, by September 30, 1997. The annual performance plan links the agency’s day-to-day activities to its long-term strategic goals. The first plans, covering fiscal year 1999, were submitted to OMB in the fall of 1997 and to Congress after the President’s budget in February 1998. Finally, the first annual performance reports for fiscal year 1999 are due to Congress and the President no later than March 31, 2000. Performance reports are to include, among other things, an evaluation of the agencies’ progress toward achieving the goals in their annual plans. These reports are to provide feedback to federal managers, policymakers, and the public on the results achieved each year. The Treasury Department has responsibilities in key governmental roles, including tax administrator, revenue collector, law enforcer, and financial manager. Treasury also formulates and recommends economic, financial, tax, and fiscal policies and manufactures coins and currency. To carry out its diverse responsibilities, Treasury houses more than a dozen bureaus and offices. For its fiscal year 1999 budget, Treasury requested about $12.301 billion and about 147,900 full-time equivalent (FTE) staff years. Public sector organizations, like Treasury, are faced with demands to be more effective and accountable for the results of their programs. 
To meet such demands, Treasury began moving toward a performance-based approach to management before the Results Act requirements became mandatory. This is the third year that Treasury has included performance goals derived from its strategic plan in its budget request. Treasury’s fiscal year 1999 performance plan under Results Act requirements is combined with its budget request and includes reports on performance goals for the past 2 fiscal years. As the Results Act requires, the annual performance plan is to provide a basis for an agency to compare actual results with performance goals. To do this, the agency needs to set goals and develop appropriate performance measures and show how it will use them to assess performance across the agency. By showing the relationship between the annual performance goals and the agency’s mission and strategic goals, an agency’s performance plan can demonstrate how the agency intends to make progress toward the achievement of its strategic goals. An agency’s performance plan should also reflect and discuss the crosscutting nature of its programs and how they will contribute to achieving performance related to crosscutting functions. Treasury’s performance plan does not provide a succinct and concrete statement of expected performance for subsequent comparison with actual performance for several reasons. First, many of the annual performance goals in Treasury’s plan are necessarily abstract and not directly measurable. IRS, for example, has established three performance goals—improve customer service, increase compliance, and increase productivity—for defining its intended performance. Each of these broad goals is complemented with program-level measures to assess progress toward achieving the three goals. IRS’ performance goal relating to improving customer service is particularly difficult to quantify because achieving it implies that IRS can measure and reduce taxpayer burden. 
IRS currently does not know how to realistically measure taxpayer burden. Reliable data for measuring burden do not exist because taxpayers normally do not track the time they spend complying with their tax and filing obligations. IRS recognizes the limitations of these goals on defining its performance and is looking for alternatives. Because reducing taxpayer burden affects IRS’ ability to achieve its performance goals and IRS’ measure of taxpayer burden is not based on reliable data, its performance measures based on burden may not be very useful. However, devising ways to measure the burden that IRS influences and developing reliable measures of taxpayer burden and the impact of IRS’ programs on burden will be challenging. IRS is not alone; Treasury as a whole faces similar challenges. Second, the quality of some measures in Treasury’s plan could be improved so that they directly relate to the performance goals. The relationship between some measures and goals is not clear, making it difficult to define the level of expected performance. Also, the measures do not always appear to cover key aspects of performance. Examples from OCC’s plan illustrate this. One of OCC’s strategic goals is to “improve the efficiency of bank supervision and reduce burden by streamlining supervisory processes.” This strategic goal has three performance goals, and each has a single indicator or measure. One such performance goal is to “continue with the regulatory reinvention process to improve efficiency and reduce unnecessary burden.” The single measure in the plan for this goal is “percentage time meeting the application processing time frames,” with a performance target of 95 percent in calendar year 1998. This measure only addresses application processing time frames and does not clearly relate to the goal of continuing with the regulatory reinvention process. 
OCC has two measures for its performance goal to “support efforts to foster a national bank charter that will effectively compete with other financial service providers and continue to meet the financial service needs of all types of customers.” However, these measures—“rating on customer satisfaction in connection with the licensing process” and “average processing time for analysis of customer complaints”—do not clearly relate to the performance goal. Third, Treasury’s plan is also incomplete in that some of the performance measures for its bureaus and offices are still being developed and defined. For example, many IRS measures are coded “TBD,” or to be determined. For these proposed measures, IRS does not have complete information, such as definitions, data sources, level of detail, and data reliability. During fiscal year 1998, IRS is working with OMB, a contractor, and others to develop a balanced scorecard measurement system that is to evaluate IRS on customer satisfaction, employee satisfaction, and business results. Finally, many of the measures in Treasury’s plan are output measures. While output measures are expected to be in the plan, Treasury could better convey its expected results and show how goals are to be achieved by developing additional outcome measures and better explaining how the outputs that are measured relate to the goals. For the most part, the performance goals of the Department’s bureaus and offices are connected to their missions, strategic goals, and program activities in the budget request. Specifically, the plan contains tables that align the Departmentwide strategic goals, bureau missions and strategic goals, and performance goals and measures. However, the linkages between the program-level measures and performance goals are not consistently clear. 
IRS, for example, has tables that show the linkage between its strategic goals or objectives and the Departmentwide strategic plan and its performance goals and annual performance measures. However, the plan does not discuss how the intended results of its many performance measures will be assessed to indicate IRS’ success in achieving its performance goals. For example, the number of individual refunds issued, paper processing accuracy rate, and number of calls answered are 3 of the 19 performance measures under the goal to improve customer service. The plan does not explain how any of these measures should be rolled up to indicate progress toward achieving the customer service goal. We recognize the difficulty IRS faces in explaining this, especially since its performance goals are necessarily abstract and not directly measurable. However, some discussion of how IRS plans to evaluate progress toward achieving its performance goals would help explain how the results of its performance measures affect the attainment of its performance goals. Although Customs’ plan provides information to align its strategic goals and performance goals, the information is not consistent. Customs’ plan has a table that shows the linkage between its strategic goals and performance goals. Later in the plan, other tables show the exact same strategic goals as performance goals; and what are shown in the earlier table as performance goals are now called performance measures. The Results Act requires that annual performance plans identify annual performance goals that cover all the program activities in the agency’s budget. Treasury’s plan complies with this requirement, as each component and major office generally has one or more performance goals for each of the budget activities in the budget request. 
For one new IRS budget activity relating to the earned-income tax credit compliance initiative, the plan listed one performance goal—overclaim rate—but the definition and targets for the goal have not yet been determined. Also, IRS’ budget activity, “Modernization Investments,” did not list any performance goals. However, the plan noted that the performance measures are discussed in a separate document relating to modernization proposals. Treasury’s performance plan could be improved if it better addressed the crosscutting nature of its programs and how they will contribute to achieving performance related to crosscutting functions. Specifically, we found that Treasury’s annual performance plan generally did not identify performance goals that reflect activities being undertaken to support crosscutting programs, and the plan does not consistently address the crosscutting nature of its programs. Treasury has responsibilities for functions and issues that involve other agencies. As such, its plan should indicate how Treasury will coordinate those programs with other federal programs having related strategic or performance goals. In crosscutting program areas, Treasury should present output goals and intermediate outcome goals that would clarify its contribution to the intended outcomes of the crosscutting program. This information would be helpful to Congress and other stakeholders in identifying areas in which agencies should be coordinating efforts to efficiently and effectively meet national concerns. A focus on results, as envisioned by the Results Act, implies that federal programs contributing to the same or similar outcomes should be coordinated to ensure that goals are consistent and that program efforts are mutually reinforcing. Customs, for example, is involved in several crosscutting activities—drug interdiction, counterterrorism, and investigations of money laundering. 
These activities are recognized in Customs’ plan as crosscutting activities, but there is no clear evidence in the plan that its fiscal year 1999 performance goals have been coordinated with other agencies. The plan does mention some past coordination efforts—such as between Customs and the Office of National Drug Control Policy to develop measures for a strategy to reduce the supply of narcotics. It does not clearly discuss the results of those efforts or indicate whether Customs’ fiscal year 1999 performance goals were based on them. However, Customs’ plan does mention coordination efforts with the Immigration and Naturalization Service and the Department of Agriculture in establishing performance goals to improve customer service when processing passengers through ports of entry. ATF’s plan recognizes the role of other law enforcement agencies in achieving the goals of contributing to a safer America, and the plan mentions partnerships with various law enforcement agencies to achieve its goals. However, the plan does not clearly indicate that ATF coordinated with the other agencies in setting its fiscal year 1999 annual goals or targets. FMS states that one part of its mission focuses on efforts to increase the collection of delinquent debts owed the federal government and that its success is achieved through such activities as providing debt collection and management services to all federal agencies and developing and implementing governmentwide debt management policies. The debt collection program activity in FMS’ plan, for example, has a measure on the percentage of market share of federal agencies with debt servicing requirements that have referred their debts to FMS as required by the Debt Collection Improvement Act of 1996 and another measure to increase governmentwide delinquent nontax debt collections over the fiscal year 1995 baseline. 
However, FMS does not provide any information to show how it plans to coordinate with other agencies to achieve these goals. IRS plays a role in administering tax code provisions pertaining to several billions of dollars in tax expenditures, such as the earned-income tax credit, the low-income housing credit, and the research credit, yet the plan contains no discussion of these crosscutting programs. IRS, too, shares responsibilities with other agencies, such as the Social Security Administration (SSA), in processing and reconciling information on employee wages and social security benefits, but the plan does not explicitly discuss or describe whether any performance goals were coordinated with SSA or other agencies. Conversely, IRS’ plan does state that its narcotics conviction rate is dependent upon prosecutions within the Department of Justice and that national priorities for criminal investigations are determined, in part, by Justice. The Results Act requires that annual performance plans briefly describe the strategies and resources the agency intends to use to achieve its performance goals. We found that Treasury’s performance plan adequately discusses, with some exceptions, the resources to support the achievement of its performance goals. The usefulness of the plan, which includes the budget justification, would be enhanced with a fuller description of how its strategies relate to achieving the goals. Strategies to facilitate achieving performance goals include activities such as administrative processes, training, and the application of technology and efforts to improve efficiency and effectiveness through approaches such as reengineering work processes. We found that, in connecting strategies to results, Treasury’s plan did not always list strategies and, in other cases, did not adequately describe them. The plan also does not consistently discuss how the strategies will help the Department achieve its goals. 
IRS provides an example where strategies relating to its goal to “improve customer service” were clear and complete. The plan lists nine strategies to enhance customer service and eight customer service standards for related products and services. The strategies and standards include improving the clarity of notices, forms, and tax publications; increasing the hours for its telephone service; opening district offices on Saturdays during the filing season; providing additional telephone assistance to small businesses; and creating citizen advocacy panels. The descriptions of the strategies are succinct, and they outline methods that, if followed, should enhance customer service. In contrast, Customs’ plan provides only a partial description of the strategies it expects to use in fiscal year 1999 to achieve its projected results. For example, Customs indicates that it plans to improve drug interdiction results by focusing attention on areas of increased vulnerability, exploiting intelligence leads, and improving technology. However, Customs offered no strategies for its goals in the revenue-producing and anti-money-laundering areas. In addition, OCC’s plan does not fully describe strategies to achieve its performance goals. Those goals included general references to an approach, such as streamlining, but OCC did not provide detailed strategies for achieving the goals. In some cases, regulatory requirements were mentioned as a means for achieving goals. Although the Act does not require agencies’ annual performance plans to disclose how external factors might affect performance and results, including this information in the plans would enhance their overall usefulness as it would more fully describe Treasury’s potential to achieve the expected performance. Treasury’s strategic plan did mention some of the external factors that may affect its ability to achieve its strategic goals. 
In our opinion, Treasury’s performance plan could be improved by more explicitly addressing how external factors may affect strategies and intended results and discussing how it will mitigate or use the identified conditions to achieve its performance goals. With some exceptions, the Treasury plan adequately discusses the resources the Department will use to achieve its performance goals. In addition to information on dollar amounts and staffing levels, the plan frequently explains how the resources that Treasury is requesting specifically contribute to one or more performance goals. For example: The IRS plan notes that its goals for improving the accuracy and timeliness of tax return processing depend largely on the agency’s ability to use or acquire four specific information systems. The IRS plan also notes that the accomplishment of its performance goal of “$290 million in increased collections” is contingent upon completing the rollout of the Integrated Collection System to its district and international offices and obtaining an additional 57 FTEs to expand office hours and conduct problem-solving days. The Customs plan explains that continuing the acquisition and installation of the Land Border automation equipment is needed to allow inspectors to perform more careful screening and questioning of vehicle occupants, which should help to achieve Customs’ goal of improving its efficiency at targeting arriving vehicles for enforcement purposes. 
The ATF plan explains that expanding its youth crime gun interdiction initiative, including providing additional agents for the program, would (1) “provide comprehensive crime gun tracing by State and local law enforcement”; (2) “provide rapid, high volume crime gun tracing and crime gun market analysis by the National Tracing Center (NTC)”; and (3) “train ATF, State, and local law enforcement personnel.” As described, the requested dollars and staffing would seem to contribute to achieving ATF’s performance targets for the number of persons trained, the number of traces, and the average trace response time. Treasury’s plan could be improved in some areas, however, with a more thorough discussion of the resources required to achieve its performance goals. For example, in the FMS plan, the resources needed for accomplishing the performance goals are not always evident. One of the measures in the “Payments” program activity, for example, relates to increasing the number of states in which the direct federal electronic benefits transfer system is available. However, the FMS plan does not indicate the resources FMS intends to use to accomplish this measure. Treasury’s performance plan does not consistently address the use of information technology (IT) resources to achieve performance goals across its bureaus and offices. The Departmental Offices’ performance plan includes a goal to “pursue and maintain fully integrated financial systems Departmentwide by standardizing core financial information into a Departmental data warehouse.” However, the plan does not include any strategy or approach to achieve this goal. 
Similarly, one of Customs’ goals is to “maximize trade compliance through a balanced program of informed compliance, targeted enforcement actions, and the facilitation of complying cargo.” However, in its description of its strategy to meet this goal, Customs does not mention its major initiative to automate its commercial operations, known as the Automated Commercial Environment, or describe how this system will help achieve the goal. Treasury’s performance plan does not provide sufficient confidence that its performance information will be credible because it does not adequately describe procedures for verifying and validating performance data or sufficiently discuss the ramifications of known data limitations. The Results Act requires performance plans to describe the procedures an agency will use to verify and validate its performance measures. The descriptions of the procedures should also identify any significant data limitations and discuss the impact they may have on the credibility of the performance information. Treasury’s performance plan does not adequately discuss procedures for verifying and validating performance information that will ensure that it is sufficiently complete, accurate, and consistent. Several of Treasury’s bureaus propose to use data from various information systems to measure performance; but the plan does not adequately discuss system controls or procedures for ensuring the reliability, integrity, and security of the data. Specifically, IRS often uses short descriptions, such as “excellent,” “good,” and “low,” to describe the reliability of data for its performance measures. These descriptions and other information on IRS’ measures do not adequately explain what general procedures are to be used to control data quality and ensure accuracy. 
For example, IRS describes the reliability of data it plans to use from its Criminal Investigation Management Information System to determine its narcotics and fraud conviction rates as “excellent.” However, IRS’ performance plan does not describe procedures for verifying the accuracy and completeness of the data. IRS indicates that the data needed to determine the narcotics and fraud conviction rates come from the Department of Justice, an external source, but it does not comment on the credibility of Justice’s data or its own data even though it is aware that the credibility of the IRS data has been questioned by a private research group. In the past, we have identified obstacles IRS and Customs face as they attempt to measure the performance of their programs. One area of concern has been IRS’ inability to adequately measure the performance of some of its programs because of the lack of reliable data to measure such key indicators as taxpayer compliance and burden. We have raised concerns that some of IRS’ program-level performance indicators need to be balanced with indicators designed to measure whether taxpayers are treated properly. Concerning Customs, we have pointed out that the agency has traditionally measured the success of its drug interdiction efforts by the resulting number of seizures, arrests, indictments, and convictions. These measures do not sufficiently cover key aspects of performance. In addition, it is not clear whether an increase in seizures indicates that Customs has become more effective or that the amount of smuggling has increased and Customs is still seizing the same percentage of drugs. Data limitations can affect the credibility of performance information. Treasury’s performance plan falls short in identifying data limitations and their implications for the reliability of the performance information. The Departmental Offices propose to use the dollar value of U.S. 
exports of goods and services to measure progress toward a goal to “facilitate legitimate trade, enhance access to foreign markets, and enforce trade agreements,” but the plan does not acknowledge any limitations in the data from the Department of Commerce. Customs’ plan does not discuss additional efforts that are needed to ensure the credibility of the data by which Customs’ performance is to be judged. This is important in several of Customs’ programs because one of its performance measures is the accuracy of key trade statistics, and we have noted Customs’ inability to generate reliable trade data. Customs has also expressed concerns about its ability to generate reliable trade data. Its fiscal year 1997 trade compliance measurement report states that “Concerns remained for the improper classification of goods by importers potentially hindering enforcement activity and skewing trade statistics.” Because some of Customs’ measures depend on narrative assessments based on input from informant or intelligence operations (e.g., money-laundering systems disrupted and changes in drug-smuggling organizations’ behavior), the plan could be improved by briefly describing efforts to ensure that the data are credible. Further, Customs’ plan does not specifically mention weaknesses related to ensuring that sensitive data maintained in its automated systems are adequately protected from unauthorized access and modification and that its core financial systems capture all activities that occurred during the year and provide reliable information for management to use in controlling operations. These weaknesses could affect the reliability of Customs’ performance data. The FMS plan does not adequately identify weaknesses in computer controls that could affect the reliability of data used to measure performance. 
For example, based on our ongoing work on the central banking function of FMS, which includes the payment and collection activities, we identified weaknesses in the general controls over some of FMS’ computerized information systems that process receipts and disbursement information for the government. These controls did not provide adequate assurance that data files and computer programs were fully protected from unauthorized access and modification. When we commented on Treasury’s strategic plan, we said that it could be improved by explicitly addressing the Department’s capacity to measure progress toward achieving its goals. We also said that developing measures and collecting reliable data for some important areas of Treasury’s performance, such as taxpayer burden, are very difficult to do. These issues remain concerns because Treasury’s performance plan does not adequately discuss the strategies the Department plans to use to ensure that its measures of program performance are reliable and that they will improve accountability and support decisionmaking. These are challenges that Treasury faces as it strives to better meet the criteria set forth in the Results Act and related guidance. We realize that these challenges are difficult and that some measures and data, such as those pertaining to burden and compliance, will take more time than others to develop. However, in such instances, Treasury may need to devise and communicate the interim plans it will use to measure performance in these critical areas. We believe that Treasury’s plan could be enhanced by explicitly discussing the Department’s strategy to improve its performance measurement systems and data. Treasury’s plan could also be improved by including annual performance goals to address the significant management challenges and high-risk areas the Department faces. 
We found that the Treasury plan does not have performance goals that adequately address the eight high-risk areas we previously identified that affect Treasury operations. For example, one governmentwide high-risk area for Treasury is ensuring that its computer systems will function properly after the century date change, yet only two bureaus—OCC and OTS—include specific performance goals related to the year 2000 computer date-change issue. The Departmental Offices’ plan has a year 2000 goal for Treasury’s systems in general, and IRS’ and FMS’ plans acknowledge that the computer date change is a management issue. Some of the other major management challenges that Treasury faces are briefly acknowledged in the bureaus’ and offices’ plans. Treasury’s plan mentions the need to implement the Clinger-Cohen Act requirements. To fulfill these requirements, the Departmental Offices’ plan has a Treasurywide goal that calls for establishing IT investment controls. The plan has one related “measure” for the goal, which is “establishing IT investment controls and ensuring Treasury and all bureaus have established investment review boards with defined, repeatable processes for project selection.” However, the plan does not include any discussion of strategies for achieving this goal or how performance data will be used to demonstrate improvements to agency programs. Further, none of the bureaus’ plans we reviewed in depth had related performance goals for establishing IT investment controls. Such controls are a very important element of any investment strategy and the reason for establishing the goal at the Department level. To ensure that all ongoing and new IT projects are considered by the investment review boards, each of Treasury’s bureaus and offices should have performance goals that address IT investment controls in their respective plans. Treasury’s plan also mentions the requirement in the Government Management Reform Act of 1994 (P.L. 
103-356) that the Secretary of Treasury is to prepare audited consolidated financial statements (CFS) of the federal government beginning in the spring of 1998. FMS, which is responsible for preparing the audited CFS, revised the program activities in its fiscal year 1999 budget, creating one on governmentwide accounting and reporting that covers the CFS requirement. For this activity, FMS has one goal—to make the federal government a model for financial management—and four related measures, such as the percent of agency reports for the CFS processed by FMS within the established range for accuracy. However, there is no discussion of how the 1999 proposed targets for the four performance measures relate to being a model for financial management. The Results Act seeks to improve the management of federal programs by shifting the focus of decisionmaking from staffing and activity levels to the results of federal programs. Annual performance plans, as required by the Act, should establish linkages between the long-term strategic goals outlined in agencies’ strategic plans and their day-to-day program activities. Treasury’s annual performance plan appropriately links its annual performance goals and measures to its strategic goals. Although the plan provides useful information for congressional decisionmakers and other stakeholders, it did not fully present information that reflects the intended performance across the Department, describe how strategies relate to the attainment of goals, or assure readers that performance results and data are credible. The plan we reviewed was Treasury’s first one under the Results Act. Developing a plan that fully meets all the criteria of the Act and related guidance will be a challenge because developing measures and collecting reliable data for some important areas of Treasury’s performance, such as taxpayer burden, are very difficult to do. 
Treasury’s plan could be enhanced by explicitly discussing the Department’s strategy to improve its performance measurement systems and data and by describing Treasury’s interim plans to measure performance in critical areas. On May 28, 1998, we obtained oral comments from Treasury’s Director of the Office of Strategic Planning and his staff on a draft of this report. They said that Treasury generally agreed with our analysis and provided comments to clarify its position. The officials said that Treasury’s fiscal year 1999 performance plan—the first such plan required by the Results Act—is not its first plan. According to the officials, Treasury has published performance plans in the past and has publicly reported its performance results against the plans for fiscal years 1996 and 1997, ahead of the Act’s requirements. Treasury agreed with our concerns about the validity of its performance data, noting that data validity and its capacity to regularly and accurately report on performance are key challenges it needs to address. To this end, the officials said Treasury’s Office of Inspector General is planning to identify critical information systems for inclusion in its annual evaluation work plans; Treasury’s bureaus are continuing to identify and report where data are of questionable reliability; and the Department is developing a performance reporting system to routinely report the results of performance. In the draft of this report that Treasury reviewed, we said that a fuller description of strategies to achieve goals would be beneficial. Treasury said that to keep the plan focused and useful, a balance is needed on the amount of detailed information provided in the plan. Further, Treasury said that since its plan is incorporated in its budget request, congressional stakeholders can explore specific strategies of interest during hearings and follow-up questions. 
We agree that balance is needed in the amount of detailed information provided in the plan. At the meeting, we clarified that the plan did not always list strategies or adequately describe them. We revised our report to reflect this, and we also made other technical changes on the basis of Treasury’s comments where appropriate. We will send copies of this report to the Chairman and Ranking Minority Members of interested congressional committees; the Director, Office of Management and Budget; and other interested parties. Copies will also be made available to others on request. This report was prepared under the direction of Charlie W. Daniel, Assistant Director. Please contact me or Mr. Daniel on (202) 512-9110 if you or your staff have any questions concerning this report. Tax Administration: IRS Faces Challenges in Measuring Customer Service (GAO/GGD-98-59, Feb. 23, 1998). Managing for Results: Agencies’ Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998). The Results Act: Observations on the Department of the Treasury’s July 1997 Draft Strategic Plan (GAO/GGD-97-162R, July 31, 1997). 
GAO reviewed the Department of the Treasury's fiscal year (FY) 1999 annual performance plan, focusing on: (1) the extent to which Treasury's performance plan provides a clear picture of intended performance across the agency; (2) how well Treasury's performance plan discusses the strategies and resources it will use to achieve its performance goals; and (3) the extent to which the Treasury's performance plan provides confidence that its performance information will be credible. GAO noted that: (1) Treasury's FY 1999 annual performance plan partially meets the criteria set forth in the Government Performance and Results Act and related guidance; (2) one of the strengths of the plan is that the annual performance goals and measures are linked to the strategic goals in the bureaus' and offices' strategic plans; (3) moreover, the plan generally provides a clear connection between its performance goals and the program activities in Treasury's budget request; (4) with a few exceptions, the plan covers each of Treasury's program activities as required by the Results Act; (5) the plan could be improved to better meet the criteria set forth in the Results Act and related guidance by presenting information on performance goals and measures in a manner that would better reflect intended or expected performance and achievements; (6) while GAO recognizes that some output measures are necessary, it also believes the plan could define Treasury's expected performance better if it had more outcome goals and measures; (7) also, the plan does not consistently include information across Treasury's bureaus and offices on how the Department plans to coordinate its activities that share a common purpose with activities in other agencies; (8) the plan, which includes the budget justification, describes the resources for carrying out the strategies to meet the criteria set forth in the Results Act and related guidance; (9) however, the information in the plan on how the strategies relate 
to achieving the goals did not always list strategies or adequately describe them; (10) additional details on how Treasury plans to verify and validate performance data, along with some discussion of how the effects of data limitations are to be handled, would better assure Congress and other stakeholders that the intended performance or results, if achieved, are credible; (11) GAO realizes that developing measures and collecting reliable data for some important areas of Treasury's performance are very difficult to do; (12) however, Treasury's plan could be enhanced by explicitly discussing the Department's strategy to improve its performance measurement systems and data and by describing Treasury's interim plans to measure performance in critical areas; (13) Treasury's plan would be more useful to Congress and other stakeholders if it included performance goals to address the significant management challenges and high-risk areas it faces; and (14) the plan briefly acknowledges some of the major management challenges and high-risk areas, but it does not have performance goals that adequately address all of them.
Strategic workforce planning—an integral part of human capital management—is an iterative, systematic process that helps organizations determine if they have staff with the necessary skills and competencies to accomplish their strategic goals. We have previously reported that having the right workforce mix with the right skill sets is critical to achieving DOD’s mission, and that it is important for DOD, as part of its workforce planning, to conduct gap analyses of its critical skills and competencies. Since 2001, GAO has included strategic human capital management as a government high-risk area. In 2002, we reported that DOD recognized that human capital strategic planning is fundamental to effective overall management. Further, we reported that DOD was working to identify and address problems that have been hampering this effort, which included a lack of accurate, accessible, and current workforce data; mature models to forecast future workforce requirements; a link between DOD’s planning and budgeting processes; and specific planning guidance. In 2015, we reported that DOD has demonstrated sustained leadership commitment to address its acquisition workforce challenges, underscored by the department’s emphasis on growing and training the acquisition workforce through its Better Buying Power initiatives. 
As part of our larger body of work examining human capital management issues, we and the Office of Personnel Management have identified six key principles of strategic workforce planning that organizations should incorporate in their processes, including: aligning workforce planning with strategic planning and budget formulation; involving managers, employees, and other stakeholders in planning; identifying critical skills and competencies and analyzing workforce gaps; developing workforce strategies to address gaps in numbers, skills, and competencies; building the capabilities needed to support workforce strategies through steps that ensure the effective use of human capital flexibilities; and monitoring and evaluating progress toward achieving workforce planning and strategic goals. Several offices within DOD play key roles in strategic planning activities, such as determining the size and makeup of the acquisition workforce, budgeting for the workforce, assessing workforce competencies, and addressing skill gaps (see table 1). Pursuant to the Defense Acquisition Workforce Improvement Act, DOD identified 13 career fields and designated acquisition-related positions held by military or civilian personnel. DOD also established certification requirements that included education, training, and experience elements for each acquisition position. Certification is the procedure through which DOD components determine that an employee meets these requirements for one of three levels in each acquisition career field. DOD components fund their acquisition workforce personnel through a variety of accounts, including (1) operation and maintenance; (2) research, development, test and evaluation; and (3) working capital. Additionally, Congress established the Defense Acquisition Workforce Development Fund in 2008 to provide funds dedicated to recruiting, training, and retaining the acquisition workforce. 
There are two boards that oversee acquisition workforce programs, including the Fund and related initiatives. The Senior Steering Board is expected to meet quarterly and provide strategic oversight, while the Workforce Management Group is expected to meet bimonthly and oversee the fund’s operations and management (see table 2). DOD has increased the size of its workforce since September 2008, exceeding its 20,000-personnel growth target by over 7,000 as of March 2015. The growth varied within individual components. For example, the Air Force and Navy, as well as the other DOD agencies collectively, have more acquisition personnel now than in fiscal year 2008. Conversely, the Army has experienced an 8 percent decrease in the size of its acquisition workforce since fiscal year 2008 due to Army-wide cost savings measures that have affected the size of the civilian workforce. The reported increase in DOD’s acquisition workforce was accomplished through hiring of additional personnel, converting functions previously performed by DOD contractors to performance by DOD civilian personnel (referred to as insourcing), adding military personnel to the acquisition workforce, and administratively recoding existing personnel, the last of which does not result in an increase in DOD’s workforce capacity. Shortfalls, however, exist in certain career fields. For example, 6 of DOD’s 13 acquisition career fields, including 3 priority career fields—contracting, engineering, and business—did not meet growth goals. DOD increased the size of its military and civilian acquisition workforce by 21 percent, from about 126,000 to about 153,000, between September 2008 and March 2015. This equates to an increase of nearly 27,000 personnel, about 7,000 more than identified in DOD’s April 2010 acquisition workforce plan. Figure 1 shows that most of this growth occurred in fiscal years 2009 and 2010. Acquisition workforce growth outpaced losses between fiscal years 2008 and 2012. 
However, in fiscal years 2013 and 2014, the department experienced a small decrease. During this time, DOD components were faced with sequestration and other cost-cutting measures. In preparation for these cuts, the Under Secretary of Defense for Acquisition, Technology and Logistics issued a memorandum in September 2012 that specified that DOD components were to take a strategic view in workforce decisions and protect the rebuilding investments, especially in light of ongoing contractor support reductions. For the most part, components were able to sustain the acquisition growth levels they had already achieved, with the exception of the Army, which lost several thousand personnel. In fiscal year 2015, each component again experienced an increase, with the exception of the Army. According to Army Defense Acquisition Career Management officials, the Army’s cost-cutting efforts have affected all aspects of Army operations, including acquisition. Since 2008, the number of overall Army military and civilian personnel has decreased by 91,000, or 12 percent, from 790,000 to 699,000. The acquisition portion of this reduction was almost 3,400 personnel. The department grew the acquisition workforce through a combination of hiring and insourcing actions, as originally planned, as well as by adding military personnel and administratively recoding existing positions. Figure 2 shows how DOD increased the acquisition workforce through fiscal year 2014, the most recent year for which DOD could provide complete data. About 72 percent of the workforce growth has been achieved through hiring new civilian employees, with more than half of this increase attributable to funds provided through the Defense Acquisition Workforce Development Fund. Since 2009, DOD has spent about $1.8 billion from the fund to recruit and hire about 10,400 new civilian employees. 
DOD used the majority of the funding to pay these new hires’ salaries for a 2- to 3-year period, after which the components fund the personnel through their own budget accounts. The hiring actions generally were in the career fields that DOD identified as priorities in the 2010 acquisition workforce plan. For example, about three-quarters of all Defense Acquisition Workforce Development Fund hiring was targeted toward five priority career fields—contracting, business, engineering, program management, and auditing—identified in DOD’s April 2010 acquisition workforce plan. DOD components used their own funds to hire the balance of the new civilian and military employees identified in the figure above. Overall, insourcing accounted for about 14 percent of the workforce growth. DOD originally planned to insource 10,000 contractor positions for the acquisition workforce by fiscal year 2015. Components insourced about 3,400 positions prior to a March 2011 revision to DOD’s insourcing policy. According to the memorandum revising the policy, which was issued jointly by the Under Secretary of Defense for Acquisition, Technology and Logistics and the Under Secretary of Defense (Comptroller)/Chief Financial Officer, a case-by-case approach would be used for additional insourcing of acquisition functions based on critical need, whether a function is inherently governmental, and the benefit demonstrated by a cost-benefit analysis. DOD officials stated that the revised policy effectively curtailed any additional insourcing efforts. Based on data provided by DOD’s Human Capital Initiatives Directorate, we estimate that recoding accounted for at least an 11 percent increase in DOD’s overall acquisition workforce. Acquisition officials stated that administrative recoding efforts, which resulted in both increases and decreases to the acquisition workforce, were made to ensure that all acquisition personnel were properly accounted for within each component and career field. 
According to acquisition officials, recoding was necessary because some personnel were performing acquisition functions the majority of the time but were not counted as part of the acquisition workforce. These personnel generally continued to perform the same duties, so recoding them did not increase the capacity of the organization. The recoded personnel are required to meet acquisition professional certification standards, including training, education, and experience requirements. According to Human Capital Initiatives and DOD component officials, recoding primarily occurred in three career fields—facilities engineering, life cycle logistics, and science and technology manager. For example, Air Force Materiel Command estimated that it recoded approximately 3,600 personnel at its maintenance depots as acquisition personnel, some of whom were recoded to the life cycle logistics career field. Increases in the number of military positions accounted for 3 percent of the workforce growth. According to Army and Air Force acquisition officials, one of the primary reasons DOD increased the number of military personnel serving in the acquisition workforce was to provide increased capacity. Much of this growth was in the contracting career field during contingency operations in Iraq and Afghanistan. Overall, Human Capital Initiatives statistics show that growth efforts have helped DOD reshape the civilian workforce. In fiscal year 2008, DOD found that about half of its civilian acquisition workforce had 10 years or less before they were eligible for retirement, with far fewer mid-career individuals ready to take their place or provide mentoring and supervision to those workforce members who were early in their careers. Defense Acquisition Workforce Development Fund hiring has helped strategically reshape the workforce by bolstering critical functions and building early- and mid-career workforce size. 
Although no specific goals were set, figure 3 shows the progress DOD has made increasing the number of early-career staff (those eligible to retire in 20 years or more) and mid-career staff (those eligible to retire in 11 to 20 years). There have also been improvements in the training and qualifications of the workforce. For example, the percentage of the workforce that met certification requirements increased from 58 percent to 79 percent between fiscal years 2008 and 2014. In addition, the percentage of acquisition personnel with a bachelor’s degree or higher increased from 77 to 83 percent. However, DOD still faces the challenge of an aging workforce, as statistics also show that the average age of the workforce has held steady since 2008 at about 45 years, and the percentage of retirement-eligible personnel has remained at 17 percent. Defense acquisition officials recognize the risks associated with the loss of very experienced members of the acquisition workforce. These officials noted they are concerned about retaining an adequate number of personnel in the senior career group to provide leadership and continuity for the workforce between 2020 and 2030. While DOD met the overall acquisition growth goal, it did not accomplish the goals set for some career fields. The 2010 acquisition workforce plan identified growth goals, expressed in terms of a percent increase from fiscal year 2008 to fiscal year 2015, for each of the 13 acquisition career fields. The plan indicated that targeted growth in 5 priority career fields—auditing, business, contracting, engineering, and program management—would help DOD strategically reshape its acquisition workforce. As of March 2015, our analysis shows that DOD exceeded its planned growth for 7 career fields by about 11,300 personnel, including the priority career fields of auditing and program management. 
The department did not, however, reach the targets in its growth plan for the other 6 career fields by about 4,400 personnel, including the priority career fields of contracting, business, and engineering (see figure 4). According to military department acquisition officials, shortfalls in contracting and engineering career fields are largely the result of high attrition rates and difficulty in hiring qualified personnel. Despite these challenges, the engineering career field was within 1 percent of its hiring goal. The business career field did not meet its overall growth goal in part because of recoding actions that resulted in a loss to the career field, greater than expected attrition, and Army cost-cutting efforts. Leaders in some of these areas are trying to identify ways to complete activities more efficiently to reduce the impact of shortfalls. For example, the Cost Assessment and Program Evaluation office, which is responsible for completing independent cost estimates for acquisition programs, has started an initiative to develop a centralized database and virtual library of cost and acquisition data so that cost analysts spend less time gathering data and more time analyzing it. Increasing the number of people performing acquisition work is only part of DOD’s strategy to improve the capability of its workforce; another part is ensuring that the workforce has the requisite skills and tools to perform their tasks. DOD developed a five-phased process that included surveys of its employees to assess the skills of its workforce and to identify and close skill gaps. Efforts to complete the process were hindered by low survey response rates and the absence of proficiency standards. Further, DOD has not established time frames for when career fields should conduct another round of competency assessments to assess progress towards addressing previously identified gaps and to identify emerging needs. 
In October 2009, section 1108 of the National Defense Authorization Act for Fiscal Year 2010 required DOD to include certain information as a part of its acquisition workforce plan, including an assessment of (1) the critical skills and competencies needed by the future DOD workforce for the 7-year period following the submission of the report, (2) the critical competencies of the existing workforce and projected trends in that workforce based on expected losses due to retirement and other attrition, and (3) gaps in DOD’s existing or projected workforce that should be addressed to ensure that DOD has continued access to critical skills and competencies. Subsequently, the April 2010 acquisition workforce plan outlined a five-phased process that each of the 13 career fields was to use to assess the skills of its workforce and to identify and close skill gaps (see figure 5). DOD functional leaders generally relied on input from senior experts to identify the baseline competencies for phase 1, used subject matter experts to identify work situations and competencies contributing to successful performance for phase 2, and solicited feedback on the models through limited testing with the workforce for phase 3. To validate models and assess workforce proficiency in phase 4, career field leaders relied on surveys that were sent to all or a sample of personnel to solicit their assessment of (1) the criticality of each competency, (2) how frequently they demonstrated each competency, and (3) how proficient they were at each competency. For phase 5, among other things, DOD planned to report on the progress made to identify and close skill gaps and ensure that the competencies remained current. As of October 2015, 12 of the 13 career fields had completed at least an initial competency assessment. 
The production, quality, and manufacturing career field is the only career field that had not completed all of the phases at least once, due primarily to turnover in leadership, according to an Office of the Secretary of Defense official. According to DOD’s Human Capital Initiatives officials, this career field intends to complete its initial competency assessment by the end of 2017. In the interim, these officials noted that the functional leader and the functional integrated product team will continue to actively assess workforce gaps and needs and will use other resources, such as the Defense Acquisition University, to address known skill gaps. Until then, however, this career field may not have the necessary information to fully identify and assess skill gaps, as statutorily required. Assessing workforce proficiency, according to the Office of Personnel Management’s human capital assessment guidance, allows agencies to target their recruitment, retention, and development efforts. DOD planned to collect data on workforce proficiency as part of the competency assessment process by pairing supervisor and employee responses to questions included in the surveys, but this effort was hindered by low response rates. In a separate but related effort, DOD has not yet completed its work to develop proficiency standards for the acquisition career fields, which would ultimately allow leaders to measure employee proficiency against standards specific to each career field. Contracting and auditing were the two career fields that were able to pair supervisor and employee responses collected in competency assessments to make observations about workforce proficiency. These results helped senior leaders identify areas where the workforce did not possess the level of proficiency that supervisors expected. 
For example, senior contracting leaders determined that, among other things, fundamental contracting skills were needed across entry and mid-career levels of the contracting workforce, and currency, breadth, and depth of knowledge were needed across mid-career and senior levels. Leaders emphasized the importance of not only mastering the “what,” but also being able to use critical thinking and sound judgment to apply the knowledge, thus mastering the “how.” In response, the contracting senior leaders worked with the Defense Acquisition University to develop a 4-week research-intensive fundamentals course that gives new hires practical experience using the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. The auditing career field overhauled its training curriculum for new auditors to tie it closely to government auditing standards, which career field officials stated would play a large role in addressing gaps in the auditing competencies. The new-hire curriculum consists of a 2-week onboarding session, followed by a 2-week class in basic contract audit skills, plus another 2-week class focused on applying the skills in specific types of audits. The other 10 career fields that completed a competency assessment relied on staff self-assessments to make observations about workforce proficiency because of low survey response rates, particularly among supervisors. As a result, the Center for Naval Analysis, which conducted surveys for these 10 career fields, generally stated in its reports that the results were less verifiable because they were not validated against supervisor responses and that leaders should exercise caution when extrapolating the results. Overall response rates for these career fields ranged from 13 to 37 percent. 
Five of the career field leaders we met with stated that they used other reviews and input from functional integrated product teams to complement the survey results and then worked with the Defense Acquisition University to develop or update training classes. The program management career field, for example, conducted two studies, one in 2009 and another in 2014. The studies included interviews of program managers and program executive officers to identify opportunities to improve the proficiency of program managers through additional training or experience requirements for program management candidates. The studies identified, among other things, the need to improve program managers’ business acumen and their awareness of earned value management, a project management technique for measuring performance and progress. In response, the Defense Acquisition University developed a class called “Understanding Industry,” which covers such issues as how contractors align their business strategies, finances, and operations to meet corporate goals. The Center for Naval Analysis reported that it did not explicitly identify proficiency gaps as part of conducting the competency assessment surveys for most career fields because no proficiency standards exist. The Center for Naval Analysis strongly encouraged leadership to set standards based on baseline data gathered in the surveys. DOD began efforts to establish department-wide proficiency standards in 2012 under the Acquisition Workforce Qualification Initiative. Overall, DOD estimated that the effort would require establishing up to 2,000 standards across the acquisition career fields. However, the project leader stated that it proved difficult to develop a set of standards whose applicability would be common across all personnel, including those with the same position title, because employees perform different acquisition activities across, or even within, the DOD components. 
Further, the project leader stated that it became apparent that developing a single database to collect and track experiences of the acquisition workforce would take considerable time and expense and would contribute to the proliferation of systems that an organization would have to support and maintain. The goal of this initiative is now to map competencies for each career field to on-the-job outcomes, with a focus on assessing the quality versus the quantity of the experiences, according to the project leader. Initiative officials are working with the Defense Contract Management Agency to leverage a database that the agency uses to track employee experiences. For now, initiative officials are creating a computer-based tool using Excel software that employees can use to track their individual acquisition experience. The tool is designed to be used by employees to facilitate career development conversations with supervisors. The project leader expects that the tool should be available for use in 2016. According to the Office of Personnel Management, as part of their workforce planning activities, agencies should monitor and evaluate their efforts to address competency gaps on a continuous basis. DOD has not determined how often competency assessments should be conducted; however, five career field leaders we met with stated that assessments should be completed every 3 to 5 years. They stated that this would allow leadership time to gauge the success of efforts to address previous skill gaps, identify current skill gaps, and identify emerging needs. In that regard, the business and contracting career fields recently completed a second round of workforce assessments and are in the process of analyzing results or identifying actions to address gaps. 
The other 10 career fields that completed an initial competency assessment did so between 2008 and 2012, but have not completed another round of workforce assessments to determine if their workforce improvement efforts were successful and what more needs to be done. Half of these 10 career field leaders indicated that they plan to complete another assessment between 2016 and 2019, or about 5 to 8 years after the initial assessment was conducted for most career fields. Without establishing appropriate time frames to conduct follow-up assessments and completing those assessments, acquisition workforce leaders will not have the data needed to track improvement in the capability of the workforce and focus future training efforts, as called for by Office of Personnel Management standards. DOD generally plans to maintain the current level and composition of the acquisition workforce. DOD has not, however, verified that the current composition of the workforce will meet its future workforce needs. Officials at the Air Force Materiel Command, Army Materiel Command, and Naval Sea Systems Command indicated that they are having difficulties meeting program office needs, especially in the contracting and engineering career fields. These two priority career fields will remain under the levels targeted by DOD’s April 2010 workforce plan, while several other career fields will continue to exceed their targeted level. Further, Human Capital Initiatives has not issued an updated workforce strategy that includes revised career field goals or issued guidance on the use of the Defense Acquisition Workforce Development Fund to guide future hiring decisions. In an April 2015 memorandum, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that it is imperative for the components to sustain and build on the investment made to increase the capacity and capability of the acquisition workforce. 
Specifically, the components were told to responsibly sustain the acquisition workforce size and make adjustments based on workload demand and requirements. According to January 2015 workforce projections included in budget exhibits that components developed for the fiscal year 2016 president’s budget, the components generally plan to maintain the current level and composition of the civilian and military acquisition workforce through fiscal year 2020, though individual components project slight shifts. For example, these projections indicate that the Army plans to increase the size of its acquisition workforce by almost 2.5 percent by predominantly adding military personnel, while the Air Force projects a less than 1 percent decrease and the Navy projects a decrease of almost 2 percent. The other DOD components are planning to decrease the size of their current workforces collectively by about 5 percent. However, Army officials stated that in September 2015, the Army revised its projection. It now estimates that its acquisition workforce will decrease by about 1,800, or 5 percent, by fiscal year 2020 as a result of recent reductions to the Army’s entire military structure from fiscal year 2016 forward. According to DOD guidance, the component budget exhibits contain estimates of the number of authorized and funded acquisition workforce personnel through the Future Years Defense Program. The estimates do not provide information on components’ workforce projected shortfalls. Our analysis of the budget exhibits shows that to maintain the overall size of the workforce at around 150,000, DOD components collectively plan to spend between $18.5 billion and $19.4 billion annually through fiscal year 2020. This funding will be used to pay acquisition workforce salaries, benefits, training, and related workforce improvement initiatives. 
The total amount includes about $500 million in planned funding annually for the Defense Acquisition Workforce Development Fund, which will be used to hire about 4,900 new employees through fiscal year 2020 to help sustain the current size of the workforce and continue training and development efforts. Budget documents also indicate, however, that many career fields are projected to continue to be significantly over or under their original growth targets identified in DOD’s April 2010 acquisition workforce plan (see figure 6). Our analysis of the PB-23 budget exhibit submissions indicates that eight career fields will lose about 2,500 positions collectively between 2015 and 2020. One of these career fields, life cycle logistics, is projected to lose almost 1,400 of these positions, but will still be significantly over the growth target established in 2010. Five career fields are expected to grow by a total of about 1,000 positions, with the majority of this growth in the contracting and facilities engineering career fields. The growth expected between 2015 and 2020 will allow the purchasing career field to meet its initial growth target. Despite this projected growth, however, four of the priority career fields—contracting, business, engineering, and audit—will remain under the April 2010 growth targets. Further, concerns about the levels of contracting and engineering personnel were expressed by officials at each of the three major acquisition commands we met with—Air Force Materiel Command, Army Materiel Command, and Naval Sea Systems Command. For example, Air Force Materiel Command has not been able to hire enough personnel to offset a 10 percent attrition rate in the contracting career field, according to career field leaders. It also has shortfalls in engineering personnel, which require multiple programs to share engineers with particular technical expertise. 
The command manages personnel shortfalls by assessing risks for its programs and then reallocating staff from programs that are considered to have less risk to new programs or programs that have more risk. Command officials cited several reasons for the shortfalls. First, since fiscal year 2013, the command has initiated 19 new acquisition programs without budgeting for the civilian and military personnel needed to adequately support the entire portfolio of the command’s programs. Second, command officials noted that they have lost some military and civilian acquisition personnel due to DOD-wide cuts in operation and maintenance funding accounts from which these personnel are paid. Finally, officials stated that the command has made cuts to the contractor workforce that more than offset the growth in the military and civilian acquisition workforce to date. Army Materiel Command officials stated that the command is currently experiencing shortfalls in the contracting career field and the shortfall is expected to reach about 800 personnel by fiscal year 2020. The shortfalls are the result of several cost-cutting actions taken by the Army since fiscal year 2011 to implement mandated reductions and caps on future spending. For example, the command’s contracting organization was only able to fund about 3,600 of its authorized 4,000 positions. Officials from that organization stated that they have moved over $100 million in contract actions from overburdened contracting offices to other contracting offices within the command that have additional capacity. Officials estimate that they can only mitigate the impact of about one-third of planned reductions through workload realignments and process changes. 
In addition, the command has taken on several new missions without gaining additional resources, including contingency contract administration services that were previously performed by the Defense Contract Management Agency and lead responsibility for contracting in Afghanistan, which was previously provided through a joint contracting organization assigned to U.S. Central Command. Naval Sea Systems Command officials stated that the command is experiencing shortfalls in the contracting and engineering career fields, and they are projecting additional shortfalls by fiscal year 2020. One metric the command tracks to assess contracting workload shows that approximately 100 new contract awards were delayed into the next fiscal year in fiscal year 2015. These include the contract award for the Navy’s Own Ship Monitoring System, a technology for submarine sonar systems, as well as the Aegis Weapon System Modernization production upgrades. The number of new contract awards delayed into the next fiscal year is expected to grow to about 430 contracts by fiscal year 2020 as a result of workload increases and shortfalls in the contracting career field. Command officials expect current shortfalls in contracting personnel to be exacerbated by DOD-wide cuts to civilian personnel. In addition, Naval Sea Systems Command engineering officials said that one-third of the command’s technical specifications and standards, which serve as the fleet’s instructional manual on routine ship maintenance, have not been reviewed in the past 20 years. They said that the backlog of specification and standard reviews indirectly contributed to two failures of the main reduction gears within 5 years on a Navy destroyer—a warship that provides multi-mission offensive and defensive capabilities. The failures resulted from an industry change in oil composition that was not addressed by corresponding changes to the Navy’s maintenance standards for the ship. 
In general, command officials could not provide validated data on the extent of current workforce shortfalls, but each military department is developing models to help better project acquisition program needs and quantify potential shortfalls. Specifically:

The Air Force Materiel Command pilot-tested an updated Air Force workforce model in 2015 to assist with personnel planning over the life cycle of a program based on factors such as type of program and life cycle phase. The model projects that the command will have a shortfall of over 1,300 military and civilian acquisition positions by fiscal year 2017, and nearly 1,900 positions by fiscal year 2021. Command officials are working closely with acquisition program managers to validate program needs and to identify weaknesses in the model. Additional functionality is expected to be added over the next several years that will allow the command to target hiring to fill specific workforce gaps. For example, instead of soliciting applications for engineers in general, the command could more specifically target materials engineers.

The Assistant Secretary of the Army for Acquisition, Logistics and Technology is in the process of developing personnel planning models that will better allow organizations to forecast their manpower requirements. The program management model was approved by the Army Manpower Analysis Agency and is beginning to be used by program managers. Army Materiel Command officials expect a new contracting model to be available for use later in fiscal year 2016. Additional models for research and development and test and evaluation are also being planned.

Naval Sea Systems Command officials stated that they are in the process of developing career field-specific tools to, among other things, forecast needs, help identify skill gaps, and create demand signals for career development opportunities. No date was provided for when these tools will be available. 
DOD has not issued an updated acquisition workforce strategy to help guide future hiring decisions. According to Human Capital Initiatives officials, budget uncertainties have been the primary reason for the delay. The Director of Human Capital Initiatives noted that senior DOD and military department leadership regularly discuss the state of the acquisition workforce and its capacity to address emerging needs and challenges. The Director also noted that similar discussions occur at various levels within DOD components. As a result, the Director stated that while the size and composition of the workforce differs from what was called for in DOD’s 2010 acquisition workforce plan, DOD’s current and projected workforce largely reflect the decisions made during these discussions. However, according to acquisition officials we met with, DOD components and sub-components make thousands of individual hiring decisions, based not only on the need to obtain critical skills, but to also sustain growth already achieved in a career field, address emerging issues, and meet other priorities. Human Capital Initiatives officials said that they are working to issue an updated acquisition workforce plan in 2016. 10 U.S.C. Section 115b requires that DOD issue a biennial acquisition workforce strategy that, among other things, assesses the appropriate mix of military, civilian, and contractor personnel capabilities and includes a plan of action to meet department goals. Further, as we have previously reported, issuing a workforce strategy and an associated plan of action is crucial for DOD to effectively and efficiently manage its civilian workforce during times of budgetary and fiscal constraint. For example, continuing cuts to operation and maintenance budget accounts and efforts to reduce headquarters spending by 20 percent over the next few years could result in additional reductions to acquisition workforce positions in some career fields. 
In addition, Section 955 of the National Defense Authorization Act for Fiscal Year 2013 requires DOD to plan to achieve civilian and service contractor workforce savings that are not less than the savings in funding for military personnel achieved from reductions in military strength, which could also affect the size of the acquisition workforce. DOD may also be faced with another round of sequestration cuts, which could result in hiring freezes or workforce reductions. Without issuing an updated workforce strategy, as statutorily required, DOD may not be positioned to meet future acquisition needs. Aligning the use of the Defense Acquisition Workforce Development Fund with high-priority workforce needs is also crucial. In the past, some hiring decisions made by DOD components using the Defense Acquisition Workforce Development Fund exceeded initial 2010 career field targets. In addition, over the past 7 years, about 2,700 personnel, or 26 percent of those hired with these funds, were in career fields that were not considered high priority in the 2010 acquisition workforce plan. For example, funds were used to hire about 850 personnel for the life cycle logistics career field, which is significantly over its growth target. To focus use of the funds, the Assistant Secretary of the Army for Acquisition, Logistics and Technology issued guidance in March 2013 that identified critical career field priorities for the future acquisition workforce and emphasized the need to balance critical acquisition skills needed with other personnel requirements during times of constrained budgets and limited personnel resources. In fiscal year 2014, the Assistant Secretary of the Navy for Research, Development and Acquisition directed that 75 percent of Navy’s hiring using these funds should be in priority career fields such as contracting, engineering, and business. The Air Force has not issued similar guidance on how to target hiring efforts to meet critical needs. 
Section 1705 of Title 10, U.S. Code, requires the Human Capital Initiatives Directorate to issue guidance for the administration of the Defense Acquisition Workforce Development Fund. The guidance is to identify areas of need in the acquisition workforce, including changes to the types of skills needed. The Director of Human Capital Initiatives told us, however, that while key stakeholders involved in acquisition workforce planning, such as the Senior Steering Board and defense acquisition career managers, discuss areas of need in the workforce, the office does not issue guidance on how DOD components should prioritize their hiring decisions. Without clearly linking the use of these funds with the strategic goals of the department, components may continue to over-hire in some career fields and not be able to adequately meet critical acquisition program needs in other career fields. DOD has focused much needed attention on rebuilding its acquisition workforce and has used the Defense Acquisition Workforce Development Fund to increase hiring and provide for additional training that supports this effort. This is especially noteworthy given that the department faced sequestration and other cost-cutting pressures over the past several years. Now that the department has surpassed its overall growth goals and has moved into a workforce sustainment mode, the 2010 acquisition workforce plan needs to be updated. Focus should now be placed on reshaping career fields to ensure that the most critical acquisition needs are being met. DOD attempted to strategically reshape its acquisition workforce with the 2010 acquisition workforce plan, but fell short in several priority career fields, including contracting and engineering. An updated plan that includes revised career field goals, coupled with guidance on how to use the Defense Acquisition Workforce Development Fund, could help DOD components focus future hiring efforts on priority career fields. 
Without an integrated approach, the department is at risk of using the funds to hire personnel in career fields that currently exceed their targets or are not considered a priority. DOD has also made progress in identifying career field competencies, but additional steps are needed to complete this effort. For example, the production, quality, and manufacturing career field has yet to complete its initial competency assessment and DOD has not established time frames to conduct follow-up assessments for the other career fields so that it can determine if skill gaps are being addressed. Office of Personnel Management standards state that identifying skill gaps and monitoring progress towards addressing gaps are essential steps for effective human capital management. Without completing all competency assessments and establishing time frames for completing follow-up assessments, acquisition leaders will not have the data needed to track improvement in the capability of the workforce. To improve DOD’s oversight and management of the acquisition workforce, we are making four recommendations. Specifically, to ensure that DOD has the right people with the right skills to meet future needs, we recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics direct the Director, Human Capital Initiatives to:

Issue an updated acquisition workforce plan in fiscal year 2016 that includes revised career field goals;

Issue guidance to focus component hiring efforts using the Defense Acquisition Workforce Development Fund on priority career fields;

Ensure the functional leader for the production, quality, and manufacturing career field completes an initial competency assessment; and

Establish time frames, in collaboration with functional leaders, to complete future career field competency assessments.

We provided a draft of this report to DOD for comment. 
In its written comments, which are reprinted in appendix I, DOD concurred with our recommendations and described the actions it plans to take. DOD also provided technical comments, which we incorporated in the report as appropriate. In response to our recommendation that DOD issue an updated acquisition workforce plan, the department stated that it is currently working on the fiscal year 2016–2021 Defense Acquisition Workforce Strategic Plan, and that it plans to provide the draft plan for review by the end of 2015. The department, however, did not indicate specifically that the updated plan would include revised career field goals. We believe updated career field goals should be included in the plan because they can help inform future hiring decisions and rebalance the size of each career field, if necessary. The department concurred with our recommendation that the Director, Human Capital Initiatives issue guidance to focus hiring efforts using the Defense Acquisition Workforce Development Fund on priority career fields. However, it stated that determining which career fields are a priority is most appropriately determined by the components. The department indicated that the Director, Human Capital Initiatives would work with the components to issue guidance that ensures the Defense Acquisition Workforce Development Fund is used to best meet both enterprise and specific component workforce needs. We believe these actions would meet the intent of our recommendation. In response to our recommendation that the production, quality, and manufacturing career field complete an initial competency assessment, the department stated that it will complete the initial assessment by the end of 2017. 
In response to our recommendation to establish time frames for completing future career field competency assessments, the department agreed and indicated that it will work with acquisition workforce functional leaders to establish time frames to complete future career field competency assessments, as needed. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology and Logistics, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. Should you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff and other major contributors to this report are listed in appendix II. In addition to the contact named above, Cheryl Andrew, Assistant Director; Miranda Berry; Virginia A. Chanley; Teakoe S. Coleman; Maria Durant; Katheryn Hubbell; Heather B. Miller; Jenny Shinn; Robert Swierczek; Ozzy Trevino; and Alyssa Weir made key contributions to this report.

Defense Acquisition Workforce: The Air Force Needs to Evaluate Changes in Funding for Civilians Engaged in Space Acquisition. GAO-13-638. Washington, D.C.: July 8, 2013.

Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012.

Defense Contract Management Agency: Amid Ongoing Efforts to Rebuild Capacity, Several Factors Present Challenges in Meeting Its Missions. GAO-12-83. Washington, D.C.: November 3, 2011.

Defense Acquisition Workforce: Better Identification, Development, and Oversight Needed for Personnel Involved in Acquiring Services. GAO-11-892. Washington, D.C.: September 28, 2011. 
Department of Defense: Additional Actions and Data Are Needed to Effectively Manage and Oversee DOD’s Acquisition Workforce. GAO-09-342. Washington, D.C.: March 25, 2009.

Acquisition Workforce: Department of Defense’s Plans to Address Workforce Size and Structure Challenges. GAO-02-630. Washington, D.C.: April 30, 2002.

Department of Defense Contracted Services

DOD Contract Services: Improved Planning and Implementation of Fiscal Controls Needed. GAO-15-115. Washington, D.C.: December 11, 2014.

Defense Contractors: Additional Actions Needed to Facilitate the Use of DOD’s Inventory of Contracted Services. GAO-15-88. Washington, D.C.: November 19, 2014.

Defense Acquisitions: Update on DOD’s Efforts to Implement a Common Contractor Manpower Data System. GAO-14-491R. Washington, D.C.: May 19, 2014.

Defense Acquisitions: Continued Management Attention Needed to Enhance Use and Review of DOD’s Inventory of Contracted Services. GAO-13-491. Washington, D.C.: May 23, 2013.

Defense Acquisitions: Further Actions Needed to Improve Accountability for DOD’s Inventory of Contracted Services. GAO-12-357. Washington, D.C.: April 6, 2012.

Human Capital: DOD Should Fully Develop Its Civilian Strategic Workforce Plan to Aid Decision Makers. GAO-14-565. Washington, D.C.: July 9, 2014.

Human Capital: Strategies to Help Agencies Meet Their Missions in an Era of Highly Constrained Resources. GAO-14-168. Washington, D.C.: May 7, 2014.

Human Capital: Additional Steps Needed to Help Determine the Right Size and Composition of DOD’s Total Workforce. GAO-13-470. Washington, D.C.: May 29, 2013.

Human Capital: Critical Skills and Competency Assessments Should Help Guide DOD Civilian Workforce Decisions. GAO-13-188. Washington, D.C.: January 17, 2013.

Human Capital: DOD Needs Complete Assessments to Improve Future Civilian Strategic Workforce Plans. GAO-12-1014. Washington, D.C.: September 27, 2012.

Human Capital: Further Actions Needed to Enhance DOD’s Civilian Strategic Workforce Plan. GAO-10-814R. Washington, D.C.: September 27, 2010.

Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-04-546G. Washington, D.C.: March 1, 2004.

Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.

High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.

Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps. GAO-15-223. Washington, D.C.: January 30, 2015.

Defense Headquarters: DOD Needs to Reassess Personnel Requirements for the Office of Secretary of Defense, Joint Staff, and Military Service Secretariats. GAO-15-10. Washington, D.C.: January 21, 2015.

Sequestration: Observations on the Department of Defense’s Approach in Fiscal Year 2013. GAO-14-177R. Washington, D.C.: November 7, 2013.

DOD Civilian Workforce: Observations on DOD’s Efforts to Plan for Civilian Workforce Requirements. GAO-12-962T. Washington, D.C.: July 26, 2012.

Defense Workforce: DOD Needs to Better Oversee In-sourcing Data and Align In-sourcing Efforts with Strategic Workforce Plans. GAO-12-319. Washington, D.C.: February 9, 2012.
GAO and others have found that DOD needs to take steps to ensure that it has an adequately sized and capable acquisition workforce to acquire about $300 billion in goods and services annually. DOD is required by statute to develop an acquisition workforce plan every 2 years. DOD issued a plan in 2010, in which it called for the department to increase the size of the acquisition workforce by 20,000 positions by fiscal year 2015, but has not yet updated the plan. Congress included a provision in statute for GAO to review DOD's acquisition workforce plans. In the absence of an updated plan, this report examines DOD's efforts to (1) increase the size of its acquisition workforce, (2) identify workforce competencies and mitigate any skill gaps, and (3) plan for future workforce needs. GAO analyzed current and projected DOD workforce, budget, and career field data; reviewed completed competency assessments; and obtained insights on workforce challenges from the largest acquisition commands within the Army, Navy, and Air Force. The Department of Defense (DOD) has increased the size of its acquisition workforce from about 126,000 in September 2008 to about 153,000 in March 2015. The growth was accomplished by hiring additional civilian personnel, insourcing work previously performed by contractors, adding more military personnel, and re-categorizing existing positions. However, 6 of the 13 acquisition career fields, including 3 priority career fields—contracting, business, and engineering—did not meet growth goals. DOD has completed workforce competency assessments for 12 of the 13 acquisition career fields and added training classes to address some skill gaps. The extent to which skill gaps remain is unclear, in part because 10 of the career fields have not conducted follow-up competency assessments to gauge progress. DOD has not established time frames for doing so. 
Office of Personnel Management standards state that identifying skill gaps and monitoring progress towards addressing gaps are essential steps for effective human capital management. DOD has not updated its acquisition workforce plan, which would allow it to be better positioned to meet future needs. GAO's analysis of DOD budget information indicates that many career fields will continue to be significantly over or under the growth goals DOD established in 2010, especially in priority career fields such as contracting and engineering. In the past, some hiring decisions made by DOD components using the Defense Acquisition Workforce Development Fund exceeded initial 2010 career field targets. In addition, over the past 7 years, about 2,700 personnel, or 26 percent of those hired with these funds, were in career fields that were not considered high priority in the 2010 acquisition workforce plan. An updated plan that includes revised career field goals, coupled with guidance on how to use the Defense Acquisition Workforce Development Fund, could help DOD components focus future hiring efforts on priority career fields. Without an integrated approach, the department is at risk of using the funds to hire personnel in career fields that currently exceed their targets or are not considered a priority. GAO recommends that DOD complete the remaining competency assessment, establish time frames for conducting follow-up assessments, issue an updated acquisition workforce plan, and issue guidance to prioritize the use of funding. DOD concurred with GAO's recommendations.
In response to GLBA, NAIC has expedited efforts under its Producer Licensing Reciprocity and Uniformity initiative to streamline and simplify the process for allowing producers licensed in one state to become licensed in other states. GLBA required that states enact certain reforms simplifying and bringing more efficiency to the insurance producer licensing process. Traditionally, agents licensed in one state generally had to meet the separate licensing requirements for each state where they wanted to sell insurance. Since licensing requirements differed substantially, this requirement imposed significant burdens on producers in terms of time, effort, and monetary costs. To comply with GLBA, a majority of the states must adopt either uniform licensing requirements or reciprocity by November 2002. With reciprocity, states must accept the decision of another state to approve a license and may not impose any additional licensing requirements. GLBA also gave NAIC responsibility for determining whether a state meets the uniformity or reciprocity provisions. If a majority of regulatory jurisdictions (29 states and territories) do not meet either the uniformity or reciprocity provisions by November 12, 2002, GLBA provides for the establishment of NARAB by the federal government, which would take over producer licensing functions from the states. NAIC developed and promoted the Producer Licensing Model Act (PLMA) to help states comply with GLBA’s reciprocity provisions. To date, many states have passed laws based on PLMA attempting to comply with GLBA’s reciprocity requirements. However, NAIC has not yet officially announced the number of “compliant states” based on its review of the states’ laws and implementation plans. Meanwhile, some states with relatively large insurance markets have expressed concerns that will likely keep these states from implementing fully reciprocal producer licensing practices. 
These states appear reluctant to “lower” their standards on certain antifraud and consumer protection measures, particularly those related to conducting criminal background checks using fingerprint identification and bond requirements for producer applicants. NAIC continues to address these concerns, which were not fully resolved through PLMA, in its efforts to develop more uniform state producer licensing requirements. While preliminary indications suggest that NAIC is close to certifying enough states to meet the GLBA’s legal requirement, other concerns remain that will likely prevent full reciprocity on producer licensing matters in all states. Factors that may prevent full reciprocity include some states’ reluctance to waive certain antifraud and consumer protection measures and state implementation practices that may be considered nonreciprocal. Although a large number of states have passed some form of PLMA, some states did not remove or waive certain licensing requirements that may conflict with GLBA’s reciprocity provisions. Our review of the checklists submitted to NAIC and discussions with industry representatives and regulators showed that a few states do not appear ready to waive certain existing antifraud and consumer protection requirements. Most commonly, these nonresident licensing requirements are related to criminal history checks (using fingerprint identification) and bond requirements for some producers. NAIC officials had anticipated that these requirements would be major areas of disagreement among states. We observed that some states were reluctant to eliminate their existing requirement to conduct a criminal history check on nonresident applicants using fingerprint identification. 
For example, California’s insurance regulators said that while the state supports the goals of streamlining and creating more uniformity in state licensing procedures, California would not eliminate its nonresident fingerprinting requirement (and other key existing requirements) in order to satisfy the reciprocity provisions of GLBA. The regulators believed that eliminating this and several other existing requirements to achieve reciprocity with other states would weaken their current standards and consumer protection measures. In Florida, a recently enacted PLMA expresses the state’s desire to meet the reciprocity and uniformity provisions of GLBA but also incorporates nonresident fingerprinting requirements under its consumer protection provisions. According to industry officials, some states continue to maintain fingerprinting requirements despite the passage of some form of PLMA legislation. Some state officials acknowledged that waiving nonresident producer licensing requirements to satisfy GLBA’s reciprocity provisions could theoretically open a window of opportunity for undesirable individuals to enter the insurance industry. For instance, states where insurance regulators do not have the authority to conduct criminal background checks on producer applicants could provide such access. We have previously expressed concern that many insurance regulators lack the authority to conduct criminal background checks on industry applicants (in contrast to regulators in the banking, securities, and futures industries) and have supported actions to help establish such authority. Bond requirements for nonresident producers, intended to protect consumers and states from financial losses resulting from errors or misconduct, have also surfaced as a problematic issue in many states. According to industry observers, bond requirements have proven difficult to change or remove because they are established in state laws and regulations. 
NAIC commented that such requirements may not be appropriate for a producer seeking to conduct business on a multistate basis, because they do not take into account current commercial realities (e.g., a producer’s annual volume of business is not taken into consideration in determining the amount of a bond). NAIC officials have also voiced concern about the cumulative impact of individual state bonding requirements in the context of facilitating multistate producer licensing. Another issue relates to the postlicensing requirements producers must satisfy after obtaining a license. Licensing requirements waived or removed to satisfy the reciprocity requirements of GLBA could resurface as postlicensing requirements, undermining the benefits of regulatory streamlining. In our review of the checklists submitted to NAIC, we found that many states said they have the authority to waive requirements relating to nonresident licensing. A handful of states also reported having postlicensing requirements that could limit or place conditions on nonresident producer activities. For instance, one state reported that it could waive evidence of company appointments as an application requirement but would ask for this evidence as a postlicensing requirement before the producer could conduct any insurance activity. Overall, we did not identify any significant use of additional postlicensing requirements, but such practices could inhibit the implementation of regulatory reciprocity among states. Although NAIC may be close to certifying enough states to avoid the creation of NARAB, other efforts to achieve greater uniformity must be successful before nationwide reciprocity is realized. Some states, often those with relatively large insurance markets, intend to maintain certain antifraud and consumer protection measures even though such requirements may be inconsistent with GLBA’s reciprocity provisions. 
For instance, the California Department of Insurance did not support the adoption of NAIC’s PLMA, designed to satisfy GLBA’s reciprocity provision, because “the Model Act does not include several important enforcement tools that are contained in California law presently.” Industry representatives have emphasized that the larger states need to reciprocate (accept the licensing decision of other states) before producers can fully benefit from improvements aimed at streamlining the licensing process to conduct business in multiple states. NAIC’s Uniform Producer Licensing Initiatives Working Group is currently addressing a number of issues related to producer licensing to help states achieve more uniformity. The group’s areas of work include those related to background checks, prelicensing education, continuing education, and definitions for limited lines of insurance. These efforts will also have to address the concerns of states that have been unwilling to “lower the bar” on their existing regulatory requirements. Achieving nationwide reciprocity in the area of producer licensing is tied to the success of these uniformity efforts. However, it remains uncertain whether or when more uniform producer licensing practices will be adopted that satisfy the concerns of those states with the largest insurance markets. Through NAIC’s Speed to Market initiative, state insurance regulators are trying to streamline regulatory processes associated with insurance product approvals to make products available to consumers more quickly. A principal aspect of this initiative is to develop a more centralized product filing and approval process for certain types of insurance products that are sold on a multistate or nationwide basis. NAIC established the Coordinated Advertising, Rate, and Form Review Authority (CARFRA) as a vehicle for providing insurers with a single point of filing and approval. 
However, insurers balked at the initial CARFRA trial, saying the process still incorporated too many individual state requirements beyond a common set of review criteria. In response, NAIC is now exploring the use of an interstate compact as a mechanism for overcoming the issue of having to satisfy the product review and approval criteria of each individual state. Another aspect of this initiative encompasses efforts to improve existing, conventional state-based systems. A notable outcome of these efforts is NAIC’s System for Electronic Rate and Form Filing, or SERFF, which is designed to expedite the mechanics of submitting product rate and policy form filings to regulators. Other efforts to streamline product review and approval processes focus on reducing differences among the states’ product filing requirements and identifying best practices. Many insurers, particularly those in the life and health insurance business, claim they have been at a competitive disadvantage in marketing and selling investment-oriented products because banks and securities firms— their primary competitors in these product lines—can seek regulatory approval from a single regulator. In response, insurance regulators have tried to devise a one-stop filing and approval process for products that will be sold in multiple states. CARFRA is the mechanism that regulators devised to offer the industry a single source for product reviews and approvals. NAIC launched a pilot of the CARFRA product approval process in May 2001 with a single point of filing mechanism, national standards, and disclosure of any additional state requirements or deviations. The CARFRA pilot consisted of regulators from 10 states that agreed to review new product filings on three types of life and health insurance products: term life, individual annuities, and individual medical supplements. 
CARFRA’s centralized product review and approval process was based on national standards along with consideration of individual state standards. NAIC’s goals were to be able to process a product filing within 30 days of receipt by CARFRA if the product conformed to national standards and to process any “outlier” filings within 60 days—those product filings that conformed to the national standards but required further review against the variances for the states in which the products were to be sold. After CARFRA’s decision, each state had the option of either accepting or rejecting the product. The CARFRA process also took advantage of technology enhancements utilizing SERFF. Since the launch date, only two filings have been received under the CARFRA process. According to NAIC, industry representatives said that CARFRA was not attractive because too many state deviations to the national standards existed. In general, the larger states participating in the CARFRA pilot program had the most deviations, often requiring the submission of additional forms and documentation beyond that necessary to satisfy the common review criteria. In addition, industry observers said that CARFRA was abandoned because participation in it was voluntary and it had no legitimate enforcement authority as a regulatory entity. After rethinking the CARFRA process, NAIC has considered several alternative methods of streamlining the product approval process. Instead of totally disregarding the CARFRA process, NAIC opted to restructure it as an interstate compact, building on the processes and national standards already developed. NAIC is currently finalizing a proposal for an interstate compact that would establish a commission known as the Interstate Insurance Commission for Annuities, Life Insurance, Disability Income, and Long-Term Care Products to set standards and streamline review and approval processes for such products. 
NAIC is currently soliciting input on a draft interstate compact and intends to finalize a version that state regulators can vote on at the fall national meeting in September 2002. The compact would require states to delegate product review and approval authority on certain products to the new commission. In addition to reviewing and approving certain types of insurance products, this entity would have the authority to set standards. The proposed interstate compact focuses on annuity, life insurance, disability income, and long-term care products. State insurance regulators have recognized that some life and annuity products are fundamentally distinguishable from other types of insurance products (e.g., property and casualty), since many products sold by life insurers have evolved to become investment products. Consequently, these investment-oriented products face direct competition from products offered by depository institutions and securities firms. According to NAIC, competitive pressures have provided the impetus to develop more streamlined product approval processes for certain insurance products. NAIC hopes the commission established through an interstate compact will help the states implement a more streamlined product review and approval process. The new commission would develop and implement national standards for certain life and annuity insurance products that would supersede the standards of member states that enact enabling legislation for the compact (compacting states). These participating states would then consider adherence to the national standards as having the force and effect of statutory law. Up to now, the states have not generally eliminated their individual deviations to a common set of review criteria. Compacting states must enact the compact into law, effectively ceding their authority to review and approve the specified insurance products to the commission. 
As proposed, the compact provides for a 14-member management committee to manage the affairs of the commission. Six permanent committee members would represent the compacting states with the largest premium volume for annuities and life insurance products. Other compacting states would fill the remaining board member positions on a rotating basis. Geographic considerations would also be used in establishing the management committee. Additionally, the commission can establish product standards only after legislative enactment of the compact by 12 states, and can review products and render approvals or disapprovals on products only after legislative enactment of the compact by 26 states. The impetus for exploring the use of interstate compacts appears to be an increased sense of urgency to resolve current product approval issues and a realization among state officials that regulators have gone as far as they can to streamline product approval processes after the CARFRA trial setback. To overcome industry objections to state deviations beyond CARFRA’s review criteria, state lawmakers would have had to change their states’ product review and approval requirements to a common, uniform set of criteria. NAIC concluded that an interstate compact presented the best way to accomplish uniform product review and approval standards along with a single point of filing mechanism. The success of NAIC’s Speed to Market initiative largely hinges on whether or not a significant number of state legislatures agree to cede their regulatory authority to a separate entity on certain insurance product standards and approvals. 
Proponents of interstate compacts believe such an approach could be successful if the compact entity develops fair rules, disclosure and due process requirements, sunshine rules (allowing regulators to revisit and decide whether to continue with an interstate compact approach after a specified date), and other informational filing requirements and processes. In contrast, other industry observers believe states have little motivation to change to a single point of filing process, in part because of considerable differences in approaches toward product approvals and consumer protection measures. It remains uncertain how many states will pass enabling legislation to establish interstate compacts for product approval functions or whether states with large insurance markets will embrace this approach. NAIC’s Speed to Market initiative has also included efforts to improve existing conventional state-based product review and approval processes. Regardless of whether a more centralized process is used for certain types of life and health products, existing state-based review and approval processes will continue to be used for property and casualty products and many other life and health products for the foreseeable future. NAIC’s improvement efforts in this area, better known as Improvements to State-Based Systems, aim to enhance states’ rate, form, and advertising review units by reforming and standardizing their approval processes. One of the most notable advances in improving state-based product review and approval processes has been SERFF, which offers a standard electronic form for new product filings with the states. SERFF enables regulators to receive, comment on, and approve or reject insurance industry rate and form filings electronically. SERFF is becoming increasingly popular, though it is not available for all types of products in each state. 
At its summer national meeting, NAIC reported that 50 states and the District of Columbia were licensed to accept product filings through SERFF and that 474 companies were licensed to use the system. Several industry representatives we spoke with acknowledged the merits of SERFF but explained that it still does not resolve more fundamental issues related to differences in product review and approval processes across states, many of which are based on statutory requirements. Additionally, to the extent that some states do not fully utilize SERFF for all lines of insurance, the cost benefit is diminished for insurers if they have to maintain a second paper product filing system as well. NAIC has also developed the Review Standards Checklist that gives insurers information on state rate and form filing requirements in a common format by product line. Other efforts under NAIC’s Improvements to State-Based Systems focus on reviewing and eliminating “unnecessary” product filing requirements that have accumulated over time. In particular, NAIC and state regulators are trying to identify and reduce those regulations that no longer provide useful oversight value as well as “desk-drawer” rules that have evolved over time but that are not specified by statute, such as a requirement to use a certain type of form. NAIC has also developed a model law aimed at streamlining the product approval process for commercial property and casualty insurance. The Property and Casualty Commercial Rate and Policy Form Model Law, adopted by NAIC in March 2002, would ease some of the current state rate and form submission requirements if adopted by the states. The model recommends a “use and file” regulatory approach for commercial rates and a “file and use” approach for commercial policy forms. Under this model law, notices of commercial rate changes would be filed for informational purposes only and not subject to approval. 
Commercial policy forms would be filed 30 days prior to their use and would be subject to regulatory review and approval. One industry association pointed out that regulators from two states with large insurance markets said the model would not be adopted in their states. Trade representatives we spoke with could not speculate on the model law’s prospects for passage at the state level, but indicated that its chances for approval faced challenges because commercial rates have risen substantially in the past year, exacerbated further by the September 11th attacks. NAIC’s initiative to foster “national treatment of companies” has been revised since its inception and is now focused on making improvements to existing state processes related to insurer licensing. This initiative and others were highlighted in NAIC’s Statement of Intent: The Future of Insurance Regulation, endorsed by NAIC in March 2000 in response to GLBA and changes in the financial services sector. Initially, efforts under the National Treatment of Companies initiative were directed at centralizing oversight for multistate insurers. Now renamed National Treatment and Coordination, the initiative is currently aimed at streamlining state-based review processes and application submissions for company licenses. Many of NAIC’s efforts under this initiative have focused on implementing technology to support a common electronic application form, the Uniform Certificate of Authority Application, or UCAA. Like developments under the Speed to Market initiative, enhancements to the process of submitting forms have outpaced efforts to develop common review and approval criteria. Initially, the National Treatment of Companies initiative encompassed movement toward a single, unified process for supervising multistate insurers. Oversight functions such as licensing reviews, financial solvency monitoring, and market conduct oversight would have been conducted through a more centralized, streamlined process. 
However, as we previously reported in 2001, state regulators largely abandoned the goal of centralizing regulatory oversight for multistate insurers under this initiative and focused their efforts on improving existing company licensing processes. Some efforts to streamline other regulatory processes for large, multistate insurers have been shifted to other NAIC working groups. For instance, NAIC is undertaking an effort to better coordinate and execute financial analysis and examination activities among regulators that oversee affiliated insurers from multiple states under a holding company structure. From its inception, NAIC and state regulators tried to devise an operational concept for a “national treatment” program that would offer insurers a state-based system that could provide the same efficiencies in many areas of oversight as a federal charter for insurance companies. Many of the options considered were based on a centralized regulatory function that often allowed the insurers’ state of domicile to perform regulatory activities on behalf of the other states. State regulators ultimately rejected a national treatment concept covering a broad array of regulatory oversight functions based on deference to insurers’ domiciliary state. Furthermore, a planned test of a national treatment program in 2001 was cancelled. Activity on this initiative is now focused on streamlining existing state-based company licensing processes for the benefit of insurers that wish to conduct business in multiple states. Current efforts under NAIC’s National Treatment and Coordination initiative are focused on developing more streamlined state-based application and review processes for insurer licensing. Much of NAIC’s work on this initiative centers on the implementation of a common electronic application form, the UCAA. According to NAIC, this form is now available for use in all states. 
Closely tied to the development of the UCAA are efforts to develop a more common, uniform set of review criteria for insurer applications. The UCAA offers insurance companies a web-based, electronic application form to obtain a license in any state. Although the application would still be submitted to and reviewed by individual state insurance departments, the format would remain the same and could be submitted electronically. The UCAA provides formats for newly formed companies seeking a Certificate of Authority in their domicile state, for existing companies desiring to expand their business into other states, and for existing insurers that want to amend their existing Certificate of Authority. While the technology supporting a common application form has been developed, regulators have yet to agree on a common set of review criteria related to insurer licensing. In the absence of uniform criteria, insurers must separately submit supplemental applications beyond the UCAA information to individual states, often in paper form. Industry representatives maintain that these separate application requirements negate some of the benefits of using the UCAA form rather than conventional state application forms. NAIC and state regulators continue striving to develop more uniform review criteria for licensing insurers. In April 2002, NAIC provided documentation on 91 additional state-specific requirements beyond those in the UCAA application. Again, as was the case with the other initiatives, a principal issue in developing a common set of licensing review criteria has been the challenge of addressing each state’s individual requirements. Through its Accelerated Licensure Evaluation and Review Techniques (ALERT) program, NAIC and state regulators are trying to reduce these additional state requirements (by 40 percent this year), particularly those not based on state statutes. 
While efforts to implement UCAA have been successful from a technical perspective, its common use in conjunction with a more standardized licensing review process has not yet materialized and remains uncertain. In this statement, we have discussed three of the initiatives outlined in NAIC’s Statement of Intent for regulatory modernization—licensing nonresident producers (Producer Licensing Reciprocity and Uniformity), approving new products (Speed to Market), and coordinating the oversight of companies that operate in multiple states (National Treatment of Companies). While it appears that NAIC is close to certifying enough states to meet GLBA’s reciprocity requirements before November 2002 to avoid the creation of NARAB, several states, including some of the largest, either will not have full reciprocity or will satisfy this requirement only by temporarily waiving—not eliminating—statutory requirements for nonresident producers. Similarly, the states’ effort to streamline the product approval process—CARFRA—failed largely because, even in the 10 states that conducted the pilot, individual states would not give up state-specific requirements that they believed were important. Finally, as we pointed out in our earlier reports, the original objectives of National Treatment—providing regulatory treatment for “national companies” comparable to that under a single federal regulator—were quickly narrowed to focus on the implementation of the UCAA, a single application form that companies can submit to multiple states when applying for a license to sell insurance. Even in the case of the UCAA, which has been adopted by all states, individual states have retained additional state-specific requirements because they believe that the UCAA, by itself, lacked some important features, such as fingerprinting of company principals. 
While the specific details of state regulators’ actions in each of these areas have varied, there have been similarities in the pattern of accomplishment. In each case, improvements, sometimes dramatic, have been made in efficiency by streamlining and applying technology, for example, standardizing forms and using technology to submit applications for licensing or product approval. There has been considerably less success in reaching agreement on the more substantive underlying issues. In each case, some states that consider themselves to be stricter or to have more consumer protections have been reluctant or have refused to lower their standards. If the objective of NAIC’s agenda of regulatory reform and modernization is simply to have all states agree, then what has occurred thus far may be considered a failure. However, if the objective is more uniformity and reciprocity with an overall improvement in regulatory performance, then the holdout states may be the only defense against the weakening of both regulatory oversight and consumer protections. We do not suggest that every individual state deviation or objection is appropriate or desirable. However, had states given up fingerprinting, for example, as a means of conducting in-depth criminal and regulatory history background checks of agents or company owners and management, consumers would likely be more at risk and regulation would be less effective. In that case, neither uniformity nor reciprocity would represent regulatory progress. For its part, we believe NAIC has made a concerted effort in promoting more uniform regulatory processes and requirements. NAIC has also demonstrated successes in implementing technology to improve efficiencies in licensing and product approval processes. 
Now, continuing success on many of the regulatory streamlining efforts desired by industry depends on state legislatures’ willingness to entrust certain regulatory functions and decision-making authority to other regulatory entities, whether other states or bodies such as the commission created by the compact. Many states, often with the largest insurance markets, are not likely to take such a step unless they are convinced that other states and regulatory entities operate under a set of standards comparable to their own.
The National Association of Insurance Commissioners (NAIC), through its Accreditation Program, has made considerable progress in achieving uniformity among state insurance regulators. In addition, competitive pressures from further consolidation in the financial services sector and enactment of the Gramm-Leach-Bliley Act have focused attention on regulatory reforms in the insurance industry. NAIC's Producer Licensing Reciprocity and Uniformity initiative aims to streamline the licensing process for selling insurance in multiple states. State regulators are also trying to streamline regulatory processes to bring new insurance products to market more quickly. NAIC's Speed to Market initiative focuses both on developing a more centralized filing and approval process for life and health insurance products and on improving existing state-based approval processes for other types of products. Finally, NAIC's National Treatment of Companies initiative aims to facilitate the licensing process for conducting business on a multistate basis. However, NAIC and the states face significant challenges in implementing their initiatives.
The nation’s long-term fiscal outlook is daunting under any realistic policy scenarios and assumptions. For over 14 years, GAO has periodically prepared various long-term budget simulations that seek to illustrate the likely fiscal consequences of our coming demographic challenges and rising health care costs. Indeed, the health care area is especially important because the long-term fiscal challenge is largely a health care challenge. While Social Security is important because of its size, health care spending is both large and projected to grow much more rapidly. Our most recent simulation results illustrate the importance of health care in the long-term fiscal outlook as well as the imperative to take action soon. These simulations show that over the long term we face large and growing structural deficits due primarily to known demographic trends and rising health care costs. These trends are compounded by the presence of near-term deficits arising from new discretionary and mandatory spending as well as lower federal revenues as a percentage of the economy. Continuing on this imprudent and unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path will also increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by our children, grandchildren, and future generations of Americans. Figures 1 and 2 present our long-term simulations under two different sets of assumptions. For both simulations, Social Security and Medicare spending is based on the 2006 Trustees’ intermediate projections, and we assume that benefits continue to be paid in full after the trust funds are exhausted, although current law does not provide for such. Medicaid spending is based on the Congressional Budget Office’s (CBO) December 2005 long-term projections under its midrange assumptions. 
In figure 1, we start with CBO’s 10-year baseline, constructed according to the statutory requirements for that baseline. Consistent with these specific yet unrealistic requirements, discretionary spending is assumed to grow with inflation for the first 10 years and tax cuts scheduled to expire are assumed to expire. After 2016, discretionary spending and revenue are held constant as a share of gross domestic product (GDP) at the 2016 level. Under this fiscally restrained scenario, spending for Social Security and health care programs would grow to consume over three-quarters of federal revenues by 2040. In figure 2, two assumptions are changed: (1) discretionary spending is assumed to grow with the economy after 2006 rather than merely with inflation, and (2) all expiring tax provisions are extended. In this less restrained but possibly more realistic scenario, federal revenues will cover little more than interest on the large and growing federal debt by 2040. While many alternative scenarios could be developed incorporating different combinations of possible policy choices and economic assumptions, these two scenarios can be viewed as “bookends” to a range of possible outcomes. Budget flexibility—the ability to respond to unforeseen events—is key to being able to successfully deal with the nation’s and the world’s uncertainties. By their very nature, mandatory spending programs—entitlement programs like Medicare and Social Security—limit budget flexibility. They are governed by eligibility rules and benefit formulas, which means that funds are spent as required to provide benefits to those who are eligible and wish to participate. As figure 3 shows, mandatory spending has grown as a share of the total federal budget. For example, mandatory spending on programs (i.e., mandatory spending excluding interest) has grown from 27 percent in 1965—the year Medicare was created—to 42 percent in 1985 to 53 percent last year. 
(Total spending not subject to annual appropriations—mandatory spending and net interest—has grown from 34 percent in 1965 to 61 percent last year.) Under both the CBO baseline estimates and the President’s Budget, this spending would grow even further. Figure 3 illustrates that while it is important to control discretionary spending, the real challenge is mandatory spending. Accordingly, substantive reform of the major health programs and Social Security is critical to recapturing our future fiscal flexibility. The aging population and rising health care costs will have significant implications not only for the budget but also our economy and competitive posture. Figure 4 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2006 Trustees’ intermediate estimates and CBO’s 2005 midrange and long-term Medicaid estimates, spending for these entitlement programs combined will grow to over 15 percent of GDP in 2030 from today’s 8.9 percent. It is clear that taken together, Social Security, Medicare, and Medicaid represent an unsustainable burden on the federal budget and future generations. Ultimately, the nation will have to decide what level of federal benefits and spending it wants and how it will pay for these benefits. While Social Security, Medicare, and Medicaid are the major drivers of the long-term spending outlook in the aggregate, they are not the only promises the federal government has made for the future. The federal government undertakes a wide range of responsibilities, programs, and activities that may either obligate the government to future spending or create an expectation for such spending. Specific fiscal exposures vary widely as to source, likelihood of occurrence, magnitude, and strength of the government’s legal obligations. 
If we think of fiscal exposures as extending from explicit liabilities (like military and civilian pensions) to specific contingencies (like pension, flood, and other federal insurance programs) to the commitments implicit in current policy and/or public expectations (like the gap between the present value of future promised and funded Social Security and Medicare benefits), the federal government’s fiscal exposures totaled more than $46 trillion at the end of 2005, up from about $20 trillion in 2000. This translates into a burden of about $156,000 per American, or approximately $375,000 per full-time worker—more than double what it was in 2000. These amounts are growing every second of every minute of every day due to continuing deficits, known demographic trends and compounding interest costs. Many are beginning to realize that difficult choices must be made, and soon. A crucial first step in acting to improve our long-term fiscal outlook will be to face facts and identify the many significant commitments already facing the federal government. If citizens and government officials come to better understand our nation’s various fiscal exposures and their implications for the future, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. How do we get started? Today you are focusing on budget process improvements. That’s a good start. While the process itself cannot solve the problem, it is important. It can help policymakers make tough but necessary choices today rather than defer them until tomorrow. Restoration of meaningful budget controls—budgetary caps and a pay-as-you-go (PAYGO) rule on both the tax and spending side of the ledger—is a start toward requiring that necessary trade-offs be made rather than delayed. Although the restoration of caps and a PAYGO rule are important, they are not enough. 
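As a rough check on the per-capita figures above, dividing the roughly $46 trillion in fiscal exposures by the 2005 U.S. population and full-time workforce reproduces the cited burdens. The population and workforce counts below are illustrative assumptions, not figures from this testimony:

```python
# Rough check of the per-capita burden figures cited above.
# The population and workforce counts are assumptions for illustration.
total_exposures = 46e12       # fiscal exposures at end of 2005, dollars
us_population = 295e6         # assumed approximate 2005 U.S. population
full_time_workers = 123e6     # assumed approximate full-time workers

per_american = total_exposures / us_population
per_worker = total_exposures / full_time_workers

print(f"per American: ${per_american:,.0f}")  # close to the cited $156,000
print(f"per worker:   ${per_worker:,.0f}")    # close to the cited $375,000
```

Small changes to the assumed counts move the results only modestly, so the cited figures are consistent with the $46 trillion total.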
Among the characteristics a budget process needs for that to happen are increased transparency; better incentives, signals, triggers, and default mechanisms to address the fiscal exposures/commitments the federal government has already made; and better transparency for, and controls over, the long-term fiscal exposures/commitments that the federal government is considering. Let me elaborate. There is broad consensus among observers and analysts who focus on the budget that the controls contained in the expired Budget Enforcement Act constrained spending for much of the 1990s. In fact, annual discretionary budget authority actually declined in real terms during the mid-1990s. I endorse the restoration of realistic discretionary caps and PAYGO discipline applied to both mandatory spending and revenue legislation. But the caps can only work if they are realistic; while caps may be seen as tighter than some would like, they are not likely to bind if they are seen as totally unreasonable given current conditions. While PAYGO discipline constrained the creation or legislative expansion of mandatory spending and tax cuts, it accepted the existing provisions of law as given. Looking ahead, the budget process will need to go beyond limiting expansions. Cost increases in existing mandatory programs cannot be ignored and the base of existing spending and tax programs must be reviewed and reengineered to address our long-range fiscal gap. Specifically, as I have said before, I would like to see a process that forces examination of “the base” of the federal government—for major entitlements, for other mandatory spending, and for so-called “discretionary” spending (those activities funded through the appropriations process). Reexamining “the base” is something that should be done periodically regardless of fiscal condition—all of us have a stewardship obligation over taxpayer funds. 
As I have said before, we have programs still in existence today that were designed 20 or more years ago—and the world has changed. I would suggest that as constraints on discretionary spending continue to tighten, the need to reexamine existing programs and activities becomes greater. One of the questions this Congress is grappling with—earmarks—can be seen in this context. Whatever the agreed-upon level for discretionary spending, the allocation within that total will be important. How should that allocation be determined? What sort of rules will you want to impose on both the allocation across major areas (defense, education, etc.) and within those areas? By definition, earmarks specify how some funds will be used. How will the process manage them? After all, not all earmarks are bad but many are highly questionable. It is not surprising that in times of tight resources, the tension between earmarks and flexibility will likely rise. Although mandatory spending is not amenable to caps, such spending need not—and should not—be permitted to be on autopilot and grow to an unlimited extent. Since the spending for any given entitlement or other mandatory program is a function of the interaction between eligibility rules and the benefit formula—either or both of which may incorporate exogenous factors such as economic downturns—the way to change the path of spending for any of these programs is to change their rules or formulas. We recently issued a report on “triggers”—some measure that, when reached or exceeded, would prompt a response connected to that program. By identifying significant increases in the spending path of a mandatory program relatively early and acting to constrain it, Congress may avert much larger and potentially disruptive financial challenges and program changes in the future. A trigger is a measure and a signal mechanism—like an alarm clock. 
It could trigger a “soft” response—one that calls attention to the growth rate of the level of spending and prompts special consideration when the threshold or target is breached. The Medicare program already contains a “soft” response trigger: the President is required to submit a proposal for action to Congress if the Medicare Trustees determine in 2 consecutive years that the general revenue share of Medicare spending is projected to exceed 45 percent during a 7-fiscal-year period. The most recent Trustees’ report to Congress for the first time found that the general revenue share of financing is projected to exceed that threshold in 2012. Thus, if next year’s report again concludes that it will exceed the threshold during the 7-fiscal-year period, the trigger will have been tripped and the President will be required to submit his proposal for action. Soft responses can help in alerting decision makers of potential problems, but they do not ensure that action to decrease spending or increase revenue is taken. In contrast, a trigger could lead to “hard” responses under which a predetermined, program-specific action would take place, such as changes in eligibility criteria and benefit formulas, automatic revenue increases, or automatic spending cuts. With hard responses, spending is automatically constrained, revenue is automatically increased, or both, unless Congress takes action to override—the default is the constraining action. For example, this year the President’s Budget proposes to change the Medicare trigger from solely “soft” to providing a “hard” (automatic) response if Congress fails to enact the President’s proposal. Any discussion to create triggered responses and their design must recognize that unlike controls on discretionary spending, there is some tension between the idea of triggers and the nature of entitlement and other mandatory spending programs. 
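The two-consecutive-determination mechanism of the Medicare “soft” trigger described above can be sketched in a few lines. This is an illustrative simplification, not the statutory computation; the function names and the simplified determination rule are my own:

```python
GENERAL_REVENUE_THRESHOLD = 0.45  # 45 percent of Medicare spending

def excess_determination(projected_shares):
    """One annual Trustees' determination (simplified): is the general
    revenue share of Medicare spending projected to exceed the threshold
    in any year of the 7-fiscal-year projection window?"""
    return any(share > GENERAL_REVENUE_THRESHOLD for share in projected_shares)

def trigger_tripped(annual_determinations):
    """The 'soft' trigger trips when two consecutive annual reports
    both make an excess determination."""
    return any(a and b for a, b in
               zip(annual_determinations, annual_determinations[1:]))

# The most recent report made the first excess determination; if next
# year's report does the same, the trigger trips.
print(trigger_tripped([False, True, True]))   # True
print(trigger_tripped([False, True, False]))  # False
```

A “hard” response would attach an automatic policy change to a tripped trigger rather than merely requiring a presidential proposal.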
These programs—as with tax provisions such as tax expenditures—were designed to provide benefits based on eligibility formulas or actions as opposed to an annual decision regarding spending. This tension makes it more challenging to constrain costs and to design both triggers and appropriate responses. At the same time, with less than 40 percent of the budget under the control of the annual appropriations process, considering ways to increase transparency, oversight, and control of mandatory programs must be part of addressing the nation’s long-term fiscal challenges. Besides triggers, transparency of existing commitments would be improved by requiring the Office of Management and Budget (OMB) to report annually on fiscal exposures—the more than $46 trillion figure I mentioned earlier—including a concise list, description, and cost estimates, where possible. OMB should also ensure that agencies focus on improving cost estimates for fiscal exposures. This should complement and support continued and improved reporting of long-range projections and analysis of the budget as a whole to assess fiscal sustainability and flexibility. Others have embraced this idea for better reporting of fiscal exposures. Senator Voinovich has proposed that the President report each January on the fiscal exposures of the federal government and their implications for its long-term financial health, and Senator Lieberman introduced legislation to require better information on liabilities and commitments. This year Representatives Cooper, Chocola, and Kirk have sponsored legislation also aimed at improving the attention paid to our growing federal commitments. And, in his last few budgets the President has proposed that reports be required for any proposals that would worsen the unfunded obligations of major entitlement programs. These proposals provide a good starting point for discussion. 
Reporting is a critical first step—but, as I noted above, it must cover not only new proposals but also existing commitments, and it should be accompanied by some incentives and controls. We need both better information on existing commitments and promises and information on the long-term costs of any significant proposed spending increase or tax cut. Ten-year budget projections have been available to decision makers for many years. We must build on that regime but also incorporate longer-term estimates of net present value (NPV) costs for major spending and tax commitments comprising longer-term exposures for the federal budget beyond the 10-year window. Current budget reporting does not always fully capture or require explicit consideration of some fiscal exposures. For example, when Medicare Part D was being debated, much of the debate focused on the 10-year cost estimate—not on the long-term commitment that was obviously much greater. While the budget was not designed to and does not provide complete information on long-term cost implications stemming from some of the government’s commitments when they are made, progress can and should be made on this front. For example, we should require NPV estimates for major proposals—whether on the tax side or the spending side—whose costs escalate outside the 10-year window. And these estimates should be disclosed and debated before the proposal is voted on. Regarding tax provisions, it is important to recognize that tax policies and programs financing the federal budget can be reviewed not only with an eye toward the overall level of revenue provided to fund federal operations and commitments, but also the mix of taxes and the extent to which the tax code is used to promote overall economic growth and broad-based societal objectives. In practice, some tax expenditures are very similar to mandatory spending programs even though they are not subject to the appropriations process or selected budget control mechanisms. 
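An NPV estimate of the kind called for above discounts each year's projected cost back to the present, so costs that escalate outside the 10-year window are not hidden. A minimal sketch, with a hypothetical proposal and an assumed 3 percent discount rate; the figures and function name are illustrative, not GAO's methodology:

```python
def npv(annual_costs, discount_rate):
    """Present value of a stream of projected annual costs,
    with the first cost occurring one year from now."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

# Hypothetical proposal costing $10 billion a year for 25 years:
# a 10-year window captures only $100 billion in nominal costs,
# while the NPV of the full stream is substantially larger.
within_window = sum([10e9] * 10)            # nominal 10-year cost
full_stream = npv([10e9] * 25, 0.03)        # assumed 3 percent rate

print(f"10-year nominal: ${within_window / 1e9:.0f} billion")
print(f"full-stream NPV: ${full_stream / 1e9:.0f} billion")
```

The gap between the two figures is the kind of long-term commitment that 10-year scoring alone understates.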
Tax expenditures represent a significant commitment, yet they are not typically subject to review or reexamination. This should not be allowed to continue, nor should these provisions remain largely in the dark and on autopilot. Finally, the growing use of emergency supplemental appropriations raises concerns that an increasing portion of federal spending is exempt from the discipline and trade-offs of the regular budget process. Some have expressed concern that these “emergency” supplementals are not always used just to meet the needs of unforeseen emergencies but also include funding for activities that could be covered in regular appropriation acts. According to a recent Congressional Research Service report, after the expiration of discretionary limits and PAYGO requirements at the end of fiscal year 2002, supplemental appropriations net of rescissions increased the budget deficit by almost 25 percent per year. On average, the use of supplemental appropriations for all purposes has grown almost 60 percent each year, increasing from about $17 billion in fiscal year 2000 to about $160 billion in fiscal year 2005. Constraining emergency appropriations to those that are necessary (not merely useful or beneficial), sudden, urgent, unforeseen, and not permanent has been proposed in the past. The issue of what constitutes an emergency needs to be resolved and discipline exerted so that all appropriations for activities that are not true emergencies are considered during regular budget deliberations. We cannot grow our way out of our long-term fiscal challenge. We have to make tough choices, and the sooner the better. A multi-pronged approach is necessary: (1) revise existing budget processes and financial reporting requirements, (2) restructure existing entitlement programs, (3) reexamine the base of discretionary and other spending, and (4) review and revise tax policy and enforcement programs. Everything must be on the table. 
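As a rough consistency check using only the rounded figures cited above, growth from about $17 billion in fiscal year 2000 to about $160 billion in fiscal year 2005 does imply a compound annual growth rate of almost 60 percent:

```python
# Consistency check on the cited growth figure. The dollar amounts are the
# rounded totals quoted in the text, not the underlying appropriations data.

start, end, years = 17.0, 160.0, 5    # billions of dollars, FY2000 -> FY2005
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")
```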
Fundamentally, we need to undertake a top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority. Our report entitled 21st Century Challenges: Reexamining the Base of the Federal Government presents illustrative questions for policymakers to consider as they carry out their responsibilities. These questions look across major areas of the budget and federal operations, including discretionary and mandatory spending and tax policies and programs. We hope that this report, among other things, will be used by various congressional committees as they consider which areas of government need particular attention and reconsideration. The understanding and support of the American people will be critical in providing a foundation for action. The fiscal risks I have discussed, however, are a long-term problem whose full impact will not likely be felt for some time. At the same time, they are very real and time is currently working against us. The difficult but necessary choices we face will be facilitated if the public has the facts and comes to support serious and sustained action to address the nation’s fiscal challenges. That is why if an Entitlement and Tax Reform Commission is created to develop proposals to tackle our long-term fiscal imbalance, its charter may have to include educating the public as to the nature of the problem and the realistic solutions. While public education may be part of a Commission’s charge, we cannot wait for it to begin. As you may know, the Concord Coalition is leading a public education effort on this issue and I have been a regular participant. 
The core group, along with the Concord Coalition, comprises the Heritage Foundation, the Brookings Institution, and the Committee for Economic Development; others are also actively supporting and participating in the effort—the state treasurers, auditors, and comptrollers; the American Institute of Certified Public Accountants; AARP; and the National Academy of Public Administration. I am pleased to take part in this national education and outreach effort to help the public understand the nature and magnitude of the long-term financial challenge facing this nation. This is important because, while process reform can structure choices and help, broad understanding of the problem is also essential. After all, from a practical standpoint, the public needs to understand the nature and extent of our fiscal challenge before their elected representatives are likely to act. Thank you, Mr. Chairman. This concludes my prepared remarks. I would be happy to answer any questions you may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information on this testimony, please contact Susan J. Irving at (202) 512-9142 or [email protected]. Individuals making key contributions to this testimony include Christine Bonham, Assistant Director; Carlos Diz, Assistant General Counsel; and Melissa Wolf, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's long-term fiscal outlook is daunting. While the budget process has not caused the problems we face, the absence of meaningful budget controls and other mechanisms has served to compound our fiscal challenge. Conversely, a process that illuminates the looming fiscal pressures and provides appropriate incentives can at least help decision makers focus on the right questions. Meaningful budget controls and other mechanisms can also help to assure that difficult but necessary choices are made. The budget process needs to provide incentives and signals to address commitments the government has already made and better transparency for and controls on the long-term fiscal exposures being considered. Improvements would include the restoration of realistic discretionary caps; application of pay-as-you-go (PAYGO) discipline to both mandatory spending and revenue legislation; the use of "triggers" for some mandatory programs; and better reporting of fiscal exposures. Over the long term we face a large and growing structural deficit due primarily to known demographic trends and rising health care costs. Continuing on this imprudent and unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path will also increasingly constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by our children, grandchildren, and future generations. The budget process itself cannot solve this problem, but it can help policymakers make tough but necessary choices. If citizens and government officials come to better understand various fiscal exposures and their implications for the future, they are more likely to insist on prudent policy choices today and sensible levels of fiscal risk in the future. We cannot grow our way out of our long-term fiscal challenge. We must make tough choices and the sooner the better. 
A multi-pronged approach is needed: (1) revise existing budget processes and financial reporting requirements, (2) restructure existing entitlement programs, (3) reexamine the base of discretionary and other spending, and (4) review and revise tax policy and enforcement programs--including tax expenditures. Everything must be on the table and a credible and effective Entitlement and Tax Reform Commission may be necessary. Fundamentally we need a top-to-bottom review of government activities to ensure their relevance and fit for the 21st century and their relative priority.
U.S. Attorneys prosecute criminal cases brought forward by the federal government, prosecute and defend civil cases in which the United States is a party, and collect debts owed to the federal government that are administratively uncollectible. EOUSA was established in 1953 as a component of the Department of Justice to, among other things, provide general executive assistance and administrative and operational support to the 93 USAOs located throughout the 50 states, the District of Columbia, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands and to coordinate with other Department of Justice organizational units and other federal agencies on behalf of the U.S. Attorneys. One of EOUSA’s key responsibilities is managing the USAOs’ IT resources, including preparing their annual IT budget submissions and supporting their acquisition and maintenance of IT assets. IT plays an important role in helping the USAOs meet their mission objectives and, according to EOUSA planning documents, the USAOs’ reliance on IT is expected to increase in response to anticipated growth in the number and complexity of their cases. Currently, EOUSA manages an IT environment consisting of central and distributed computing and communication resources in Washington, D.C., and the 93 USAOs, respectively. Connectivity among these offices, Justice headquarters, and Justice’s Data Center in Rockville, MD, is through a virtual private network (VPN) connection on the Justice Consolidated Network (JCN), with such security safeguards as firewalls between USAO local area networks and JCN. The VPN/firewall combination, which provides the foundation for secure communications between EOUSA and the sites mentioned above, is currently being replaced. Figure 1 generally depicts EOUSA’s network topology. 
The USAOs are also supported by application systems such as the Legal Information Office Network System, a case management system that compiles, maintains, and tracks information about defendants, crimes, criminal charges, court events, and witnesses, and the Victim Notification System, which notifies crime victims of the status of their cases and assists with checking compliance with regulations and policies concerning victim notification. Recognizing the importance of IT to achieving the USAOs’ mission, EOUSA appointed a Chief Information Officer (CIO) in May 2001 and assigned the CIO accountability and responsibility for managing central and distributed IT resources and services, including managing the IT budget for the office and all of the USAOs; developing and acquiring new systems, including case management systems, and providing support for existing systems; managing network, telephone, and video communications; and securing IT assets (data, applications, and supporting networks). In fiscal year 2003, EOUSA reports that it plans to spend approximately $125 million on about 20 initiatives. Roughly $110 million of this amount is to be spent on IT infrastructure and office automation projects (e.g., telecommunications programs). The remainder is to be spent on acquiring mission support systems (e.g., Enterprise Case Management System (ECMS), Victim Notification System) and maintaining existing ones. Figure 2 shows the breakdown of estimated expenditures for fiscal year 2003. Research into the IT management practices that are employed by leading public- and private-sector organizations has identified key institutional IT management disciplines that are interrelated and critical to ensuring, among other things, the integrity, security, and efficiency of IT systems. These disciplines are also addressed in legislation and federal guidance.
1. enterprise architecture management, which involves defining, maintaining, and implementing an institutional blueprint that defines both the business and the supporting technology of the organization’s current and target operating environments and a roadmap to achieve the target environment;
2. IT investment management, which involves selecting, controlling, and evaluating a portfolio of investments within the context of an enterprise architecture;
3. IT security management, which involves protecting the integrity, confidentiality, and availability of an organization’s IT assets (e.g., data, application systems, and networks) and reducing the risks of tampering, unauthorized intrusions and disclosures, and disruption of operations; and
4. system acquisition management, which involves managing selected investments (system projects) in a manner that increases the probability of promised system capabilities being delivered on time and within budget.
As we have previously reported, to successfully institutionalize these disciplines, organizations should develop integrated plans to guide their efforts that (1) specify measurable goals, objectives, and milestones; (2) specify needed resources; and (3) assign clear responsibility and accountability for accomplishing well-defined tasks. In addition, these plans should be approved by senior management. In implementing these plans, it is important that organizations allocate adequate resources and measure and report progress against planned commitments and that appropriate corrective actions be taken to address deviations. EOUSA has defined and implemented each of the four IT management disciplines mentioned above to some degree. However, none has been institutionalized: the disciplines are not fully defined in accordance with best practices, and what has been defined has not been fully implemented. 
While these disciplines have been given attention since the recent appointment of the CIO, they have not been treated as priorities, in that action plans needed for successful institutionalization have not been developed or resourced. As a result, EOUSA is currently limited in its ability to meet Justice’s strategic goal of improving its IT systems, and the USAOs will be challenged in their ability to effectively and efficiently meet their mission goals and priorities. An enterprise architecture (EA) is an investment blueprint that defines, both in logical terms (including business functions and applications, work locations, information needs and users, and the interrelationships among these variables) and in technical terms (including hardware, software, data communications, and security), how an organization operates today (“as is”), how it intends to operate tomorrow (“to be”), and a roadmap for transitioning from today to tomorrow. The development, maintenance, and implementation of architectures are recognized hallmarks of successful public and private organizations. According to a guide published by the federal CIO Council, effective architecture management consists of a number of core elements. In February 2002, we published version 1.0 of our EA management maturity framework, which arranges the core elements of the CIO Council’s guide into five hierarchical stages. The framework provides an explicit benchmark for gauging the effectiveness of architecture management and provides a roadmap for making improvements. Table 1 summarizes the framework’s five stages of maturity. EOUSA has satisfied many of the framework’s core elements. Specifically, it has satisfied about 80 percent of the elements associated with building the EA management foundation—stage 2 of our EA management maturity framework—and half of the 12 core elements associated with higher maturity stages. 
At stage 2, it has established a chief architect and has selected a framework (the Federal Enterprise Architecture Framework) and, according to officials, selected a tool (the Enterprise Architecture Management System) to serve as a repository for its EA artifacts. At the higher stages of our framework, the CIO, for example, approved a version of an EA in May 2002 that describes the “as is” and “to be” environments for its core business functions. However, the office has yet to satisfy several of the core elements that are critical to effective EA management. For example, a committee or group representing the enterprise has not yet been established to guide and oversee the development of future versions of the architecture. Instead, the current version of its architecture has been primarily guided and directed by the CIO’s office. Until a committee or group representing the enterprise is established, there is increased risk that the architecture will not represent a corporate decision-making tool and will not be viewed and endorsed officewide as such a tool. Another example is the absence of a written or approved policy for maintaining the EA. Without a documented, approved policy for EA maintenance that, for example, assigns responsibility and accountability for configuration management and version control, EOUSA risks allowing its architecture to become outdated and irrelevant, thus limiting its effectiveness in selecting and guiding IT investments. EOUSA does not have a written plan of action for strengthening EA management and evolving the current version of its EA, because, according to the CIO, developing such a plan is not a priority. Table 2 shows EOUSA’s performance in addressing the core elements of our maturity framework. 
Effective IT investment management provides for evaluating each proposed and ongoing investment based on EA alignment and on measurable risks and returns, and for selecting and controlling these investments as a portfolio of competing investment options. We have developed a framework that defines and measures an organization’s maturity in IT investment management (ITIM) and provides a basis for improving investment management. This framework, which is based on the IT investment management practices of leading private- and public-sector organizations, is structured to permit progression through five maturity stages (shown in table 3). Each maturity stage consists of critical processes and key practices that should be implemented for an organization to become more effective in managing its IT investments. According to the framework, the first key step toward an effective investment management process is to build the investment foundation. An organization with this foundation (stage 2 maturity) has attained repeatable, successful investment control processes and basic selection processes at the project level. Successful management at this level allows an organization to measure the progress of existing IT projects and to identify variances in cost, schedule, and performance expectations by following established, disciplined processes. The organization should also be able to take corrective action, if appropriate, and should possess basic capabilities for selecting new project proposals. To accomplish this level of basic control, an organization should establish an investment board, identify the business needs and opportunities to be addressed by each project, and use this knowledge in the selection of new proposals. The office has satisfied two of the critical processes for stage 2, but it has not satisfied the other three. 
Specifically, it has established an investment governing board, known as the Investment Review Board (IRB) and developed a guide to direct its operations. It is also defining project needs in alignment with the agency’s mission goals. However, the office has not, for example, defined procedures for project oversight. In addition, while an IT project and systems inventory exists as part of its “as is” architecture, a policy specifying how it will be used for investment management purposes has not been defined. Until EOUSA satisfies all critical processes for stage 2, it will not have the foundation it needs to build its investment management capability and it will not have an effective investment process. Table 4 summarizes our assessment of stage 2 capabilities. EOUSA has not demonstrated that maturing its IT investment management process is a priority by developing a plan for doing so and devoting resources to execute the plan. Until the office develops and implements a plan for establishing mature IT investment management processes (including all critical processes for building the investment management foundation), EOUSA will not have the full suite of capabilities it needs to ensure that project selection and control processes are repeatable or that it has the best mix of investments to meet agency priorities. Effective information security management is critical to EOUSA’s ability to ensure the reliability, availability, and confidentiality of its information assets, and thus it is fundamental to its ability to perform its mission. Our research into public- and private-sector organizations with strong information security programs shows that leading organizations’ programs include (1) establishing a central security focal point with appropriate resources, (2) continuously assessing business risks, (3) implementing and maintaining policies and controls, (4) promoting awareness, and (5) monitoring and evaluating the effectiveness of policies and controls. 
Currently, EOUSA is not fully satisfying any of these tenets of effective security. In addition, it has not demonstrated that institutionalizing effective security practices is a priority by developing a plan to guide its efforts to address security weaknesses and committing resources to perform essential security functions. Until such a plan is developed and effectively implemented, data, systems, and networks are at risk of inadvertent or deliberate misuse, fraud, improper disclosure, or destruction—possibly without detection. For example, the reliability and integrity of case information may be compromised, or sensitive crime victim information may be improperly disclosed. According to our framework, central management of key security functions is the foundation of an effective information security program, because it allows knowledge and expertise to be incorporated and applied on an enterprisewide basis. Having a central security focal point supported by appropriate resources is especially important for managing the increased risks associated with a highly connected computing environment, such as JCN, where security weaknesses in one segment of an organization’s network can compromise the security of another segment’s IT assets. In addition, centralizing the security management function provides a focal point for coordinating the activities associated with the other four elements of a strong information security program. In June 2001, EOUSA appointed a security officer with responsibility for centrally managing all aspects of IT security. However, EOUSA has not assigned sufficient staff to adequately carry out these responsibilities. For example, no staff has been assigned to monitor firewall logs or support the development of a centrally managed IT security training program—activities that fall under the security officer’s purview. Each of these activities is discussed further in the following sections. 
Officials said that they recognize the need for additional staff resources to perform these activities. They also stated that they were in the process of hiring two people to support security functions, but they agreed that this would still not allow for performance of key security responsibilities. Without an appropriately resourced security program, security breaches may not be detected or addressed in a timely manner, awareness of security requirements across the organization may be inconsistent, and vulnerabilities in the current IT environment may not be appropriately addressed. According to our framework, identifying and assessing business risks is an essential step in determining what IT security controls are needed and what resources should be invested in these controls. Federal guidance advocates performing risk assessments at least once every 3 years—or when a significant change in a system or the systems environment (e.g., new threats) has occurred. These assessments should address the risks that are introduced through connections to other networks and the impact on an organization’s mission should network security be compromised. In line with this guidance, EOUSA’s certification and accreditation process requires that a risk assessment be completed for each system before any office can use it. According to EOUSA, a major system that recently underwent EOUSA’s certification and accreditation process is the replacement for the existing firewall/VPN system. This system is intended to be the foundation for secure communications between EOUSA, Justice, and the geographically dispersed USAOs. Accordingly, we analyzed this system and found that while the firewall/VPN replacement system has been certified and accredited, the existing firewall/VPN system—which was deployed in 1996 and, as of May 9, 2003, was operating at 75 of the 240 sites—had not had a risk assessment performed and had not been certified and accredited. 
Officials told us that they have not performed such an assessment on this network because (1) it is not cost-effective to use limited resources to perform an assessment on a network that is to be fully replaced by June 30, 2003, and (2) the risks inherent in the network are minimal, given that it resides on Justice’s JCN, for which they said they assume Justice had performed risk assessments. We agree that it does not make sense at this point to perform a risk assessment on the existing firewall/VPN system given that the replacement system is expected to be fully deployed by the end of June 2003. However, this does not change the fact that EOUSA has operated the network for about 7 years without understanding its exposure to risk. This is particularly important, because EOUSA officials could not provide us with evidence to support the assumption that Justice had performed a risk assessment for JCN. Moreover, previous studies have shown that Justice has had long-standing weaknesses in several aspects of its IT security program. According to EOUSA, its recently established certification and accreditation program will not allow this to happen again. According to our framework, risk-based, cost-effective security policies and related technology controls (such as firewalls configured to explicit rules and intrusion detection devices located to monitor key network assets) and procedural controls (such as contingency plans) are needed to protect a system from compromise, subversion, and tampering. Federal and Justice guidance also advocate establishing these policies and controls. While EOUSA is guided by many Justice security policies, it has not yet implemented key security controls that are needed to satisfy them. For example, CIO officials told us that the existing firewall/VPN system, which, as of May 9, 2003, was operating at 75 sites, is not based on explicit firewall rules. 
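To illustrate what "explicit firewall rules" mean in this context, the toy sketch below models the standard approach: an ordered list of specific allow rules with a default-deny fallback, so any traffic not explicitly permitted is blocked. The addresses, ports, and rules are hypothetical, not EOUSA's actual configuration:

```python
# Toy model of explicit firewall rules (hypothetical values throughout):
# traffic is matched against an ordered list of specific allow rules,
# and anything unmatched is denied by default.

RULES = [
    # (source network prefix, destination port, action) -- assumed examples
    ("10.1.", 443, "allow"),   # hypothetical: USAO LAN -> HTTPS to headquarters
    ("10.1.",  22, "allow"),   # hypothetical: USAO LAN -> SSH to managed servers
]

def evaluate(src_ip, dst_port):
    """Return the action for a packet: first matching rule wins."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and dst_port == port:
            return action
    return "deny"              # default-deny: nothing passes implicitly

print(evaluate("10.1.4.7", 443))    # matches an explicit allow rule
print(evaluate("192.168.9.9", 23))  # unmatched, so denied by default
```

Without such explicit rules, a firewall's behavior cannot be audited against a stated policy, which is the gap the report describes.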
Moreover, according to these officials, no intrusion detection devices monitor the wide-area network (WAN) routers, firewalls, and VPN devices. Rather, the intrusion detection devices that are currently implemented are located only within the local area network environment (i.e., within a USAO). Also, the contingency plan developed for the replacement firewall/VPN system was not prepared according to federal guidelines. For example, the contingency plan does not specify procedures for notifying recovery personnel or assessing damage to systems. CIO officials told us that they had not implemented these security controls because, as previously noted, they believe the risks inherent in the network are minimal given that it resides on Justice’s JCN, for which they said they assumed Justice had performed risk assessments. However, as previously stated, EOUSA provided no evidence to support this assumption, and Justice has had longstanding security weaknesses. Until EOUSA implements security controls, it may be unaware of vulnerabilities, increasing the risk that intruders may take control of network devices or that data passing through its firewalls can be read or manipulated. In addition, EOUSA may not be able to respond to security breaches adequately and in a timely manner. This is particularly threatening given the sensitivity of the information used by the USAOs in performing their work. According to our framework, promoting user awareness through education and training is essential to successfully implementing information security policies, achieving user understanding of security policies, and ensuring that security controls are instituted properly. This is because computer users—and others with access to information resources—are not able to comply with policies of which they are unaware or which they do not fully understand. 
Our framework suggests that a central group be tasked with educating users about current information security risks and helping to ensure consistent understanding and administration of policies. As previously mentioned, the security officer is responsible for promoting awareness of computer security. However, the security officer does not carry out this responsibility because provision of the resources to do so has not been viewed as an agency priority. According to the security officer, each district is thus responsible for managing its own IT training program, and the security officer does not know to what extent these programs address awareness of computer security. Without a centralized approach to security education and training, the security officer cannot adequately ensure that users are consistently aware of or fully understand the organizational policies and procedures with which they are expected to comply, thus risking the integrity, reliability, and confidentiality of data and systems. According to EOUSA officials, they plan to hire staff to develop and implement a centralized program by August 2003. Our framework recognizes the need to continuously monitor controls, through tests and evaluations, to ensure that the controls have been appropriately implemented and are operating as intended. Further, Justice’s policy requires annual testing of security controls and requires EOUSA to (1) verify that the policies and procedures in component organizations are consistent with this policy and (2) enforce compliance with component and Justice security policies, including identifying sanctions and penalties for noncompliance. In addition, our framework and related best practices—as well as Justice’s own policy—advocate keeping summary records of security incidents, to allow measurement of the frequency of various types of violations and the damage suffered from these incidents. 
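The summary records described above need not be elaborate. The sketch below, with hypothetical log entries, illustrates the kind of tally by violation type that such records make possible:

```python
# Illustrative only: a minimal summary record of security incidents of the
# kind Justice policy calls for, tallying how often each type of violation
# occurs so management can spot patterns. The log entries are hypothetical.

from collections import Counter

# Hypothetical audit-log entries: (date, incident type)
incidents = [
    ("2003-04-02", "failed login"),
    ("2003-04-02", "port scan"),
    ("2003-04-11", "failed login"),
    ("2003-04-19", "failed login"),
    ("2003-04-23", "unauthorized access attempt"),
]

summary = Counter(kind for _, kind in incidents)
for kind, count in summary.most_common():
    print(f"{kind}: {count}")
```

Such a tally depends entirely on audit logs being activated in the first place, which is the control the following paragraphs find missing.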
This type of oversight is critical because it enables management to identify problems and their causes—and to make the necessary corrections. CIO officials told us that testing has never been conducted to determine whether EOUSA’s policies and procedures are consistent with Justice’s and whether security controls are generally effective. According to these officials, testing has not been a priority because they assumed that Justice was performing tests of the WAN environment. However, Justice officials told us that, although they had evaluated the contractor’s management of the WAN’s circuits, they had not performed any tests to determine the effectiveness of technical and other controls associated with the WAN. The lack of testing heightens the risk that individuals both within and outside Justice could compromise EOUSA’s external and internal security controls to gain extensive unauthorized access to its networks and to networks to which it is connected. EOUSA officials also told us that, contrary to Justice’s policy, they do not maintain summary records of security incidents. Specifically, the production firewall/VPN software and routers at over 240 locations do not have audit logs that are activated, and the replacement routers, firewalls, and VPN devices are being implemented with no audit logs activated. According to these officials, they have not activated the audit logs because resources have not been allocated to provide for this security control. This lack of auditing heightens the risk of undetected intruders using EOUSA’s systems to modify, bypass, or negate its firewalls and routers. Additionally, without these audit logs the office would be unable to reconstruct security-related incidents. Rigorous and disciplined system acquisition processes and practices can reduce the risk of fielding systems that do not perform as intended, are delivered late, or cost more than planned. 
The Software Engineering Institute (SEI), recognized for its expertise in acquiring software-intensive systems, has published models and guides for determining an organization’s acquisition process maturity. One of those models, referred to as the Software Acquisition Capability Maturity Model (SA-CMM), addresses an organization’s acquisition management ability. The SA-CMM defines organizational maturity according to five levels (see table 5). According to SEI, level 2 (the repeatable level) demonstrates that basic management processes, known as key process areas, have been established to track performance, cost, and schedule, and that the organization has the means to repeat earlier successes on similar projects. An organization that has these processes in place is in a much better position to successfully acquire software-intensive systems than an organization that does not.

As a Justice component, EOUSA must comply with all departmental policies and procedures, including Justice’s system development life-cycle management guidance. Since EOUSA officials told us that the Enterprise Case Management System (ECMS), which is intended to be the enterprise solution for managing and tracking case workload within the USAOs, is the first acquisition effort to follow Justice guidance from its inception, we compared this project, and the Justice guidance used to manage it, against SEI’s SA-CMM. We found that the project was being managed in accordance with the majority of the applicable level 2 practices. Table 6 summarizes our findings for this acquisition (see app. I for an expanded analysis). More specifically, the office has performed all of the key practices in the requirements development and management and project management key process areas.
These include (1) establishing a written policy for developing and managing system-related contractual requirements; (2) having bi-directional traceability between the contractual requirements and the contractor’s work products and services; (3) measuring and reporting to management on the status of requirements development and management activities; (4) designating responsibility for project management; (5) keeping plans current during the life of the project as replanning occurs, issues are resolved, requirements are changed, and new risks are discovered; and (6) tracking the risks associated with cost, schedule, resources, and the technical aspects of the project. EOUSA has also performed the majority of the key practices in the remaining four process areas. However, it does not have written policies for either the contract tracking and oversight or the software acquisition planning key process areas. Policies in general are key to establishing well-defined and enduring processes and procedures. In these two areas, policies would ensure that the office’s approach to tracking and overseeing contractors and planning the acquisition is defined in a repeatable and measurable fashion. In addition, during the solicitation process, the office did not document its plans for solicitation activities, which would provide those involved with objectives for the solicitation process and a defined way to manage and control solicitation activities and decisions. In the evaluation key process area, the office has yet to satisfy 9 of the 15 required practices. Officials told us that they intend to satisfy them but that they do not have a plan for addressing those practices or for implementing all of the required practices on future system acquisitions. According to these officials, developing such a plan is currently not a priority.
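The bi-directional traceability practice cited in item (2) above can be illustrated with a brief sketch that checks a requirements-to-work-products matrix in both directions: requirements with no associated product, and products that trace back to no requirement. The requirement identifiers and product names are hypothetical, not taken from the ECMS acquisition.

```python
# Hypothetical traceability matrix: contractual requirements -> work products.
# Names are illustrative, not drawn from the ECMS acquisition itself.
req_to_products = {
    "REQ-001": ["design-spec.doc", "ecms-module-a"],
    "REQ-002": ["ecms-module-b"],
    "REQ-003": [],  # no work product yet: a forward-traceability gap
}
all_products = {"design-spec.doc", "ecms-module-a", "ecms-module-b", "legacy-report.doc"}

def trace_gaps(matrix, products):
    """Return (requirements with no product, products tracing to no requirement)."""
    untraced_reqs = sorted(req for req, prods in matrix.items() if not prods)
    covered = {p for prods in matrix.values() for p in prods}
    orphan_products = sorted(products - covered)
    return untraced_reqs, orphan_products

untraced, orphans = trace_gaps(req_to_products, all_products)
print("Requirements lacking work products:", untraced)
print("Work products lacking requirements:", orphans)
```

Checking both directions is what makes the traceability bi-directional: forward gaps reveal unimplemented requirements, while orphan products reveal contractor work that no requirement justifies.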
By developing and implementing a plan for satisfying all of these key process areas on ECMS and future acquisitions, EOUSA can increase its chances of successfully acquiring needed system capabilities on time and within budget.

EOUSA has taken important steps to define and implement four key IT management disciplines. Nevertheless, key aspects of each discipline have yet to be institutionalized, leaving the office challenged in its ability to achieve the department’s strategic goal of improving the integrity, security, and efficiency of its IT systems. Critical to the office’s success going forward will be treating institutionalization of each of these management disciplines as priority matters by developing integrated plans of action for addressing the weaknesses that we identified in each and effectively implementing these plans—including assignment of appropriate resources and measurement and reporting of progress. Without taking these steps, EOUSA is unlikely to fully establish the IT management capabilities it needs.

To strengthen the office’s IT management capacity and increase its chances of improving the integrity, security, and efficiency of its IT systems, we recommend that the Attorney General direct the EOUSA Director to treat institutionalization of EA management, IT investment management, IT security management, and system acquisition management as priorities by developing and implementing action plans to address the weaknesses in each discipline that are identified in this report. These plans should, at a minimum, provide for accomplishing the following: For EA management, establish a committee or group representing the enterprise that is responsible for directing, overseeing, or approving the EA; ensure that EA products are under configuration management; define, approve, and implement a policy for IT investment compliance with the EA; specify metrics for measuring EA benefits; and define, approve, and implement a policy for maintaining the EA.
For IT investment management, regularly oversee each IT project’s progress toward cost and schedule milestones, using established criteria, and require corrective actions when milestones have not been achieved; define and implement a policy for using the IT project and systems inventory for managerial decision making; and ensure that an established, structured process is used to select new IT proposals. For IT security management, allocate the appropriate resources to enable the responsibilities of the security officer to be fully performed; ensure that risk assessments are performed on all existing and future systems; implement intrusion detection devices to monitor activity at the routers, firewalls, and VPN devices, and implement other network security controls as noted in the report; develop and implement a centralized approach to security education and training; and perform regular tests to determine compliance with policies and procedures and the effectiveness of security controls. For system acquisition management, develop and implement a policy for contract tracking and oversight; develop and implement a policy for system acquisition planning; address the remaining key practices associated with evaluation as ECMS progresses in the life cycle; and ensure that the Software Engineering Institute acquisition practices identified in this report are used in future system acquisitions.

In developing these plans, the Director should ensure that each plan (1) is integrated with the other three plans; (2) defines clear and measurable goals, objectives, and milestones; (3) specifies resource needs; and (4) assigns clear responsibility and accountability for implementing the plan. In implementing each plan, the Director should ensure that the needed resources are provided and that progress is measured and reported periodically to the Attorney General.

In written comments on a draft of this report signed by the EOUSA Director (reprinted in app.
III), the office agreed with our findings relative to enterprise architecture management, IT investment management, and system acquisition management. EOUSA also agreed with our recommendations in these three areas and stated that it intends to implement the recommendations. However, EOUSA stated that it disagreed with our findings and our recommendations regarding information security management, although at the same time it cited certain actions that it intends to take, such as implementing a centralized security training program and monitoring security audit logs, that are consistent with our security findings and associated recommendations. Further, the office disagreed that the state of its efforts to institutionalize best management practices in the four areas is due to it not treating each area as an office priority. It also disagreed with our conclusion that the state of its efforts to institutionalize best practices currently limits its ability to meet Justice’s strategic goal of improving its IT systems, and that the USAOs will be challenged in their ability to effectively and efficiently meet mission goals and priorities. Each of these three areas of disagreement is addressed below. First, with respect to information security management, EOUSA stated that it has one of the strongest security programs in Justice, and perhaps the federal government. To support this statement, the office cited 10 security initiatives it has implemented, such as certification and accreditation of more than eight systems, real-time encryption of all data in laptops and handheld devices, and conduct of vulnerability assessments and penetration testing. It also noted, among other things, that it had added 10 field security positions and 2 headquarters positions, and that its data are monitored 24 hours a day, seven days a week, and have never been compromised. 
We do not question these statements concerning the office’s information security program and associated activities because (1) the purpose and scope of our review was not to compare EOUSA to other Justice component organizations or other federal agencies, and thus EOUSA’s relative standing is not relevant to the findings in our report, and (2) the message of our report is not that EOUSA has not taken steps to improve its information security posture, but rather that the office’s information security management efforts, including ongoing and completed improvement steps, are weak in a number of areas relative to information security management best practices. Accordingly, we make recommendations aimed at addressing identified weaknesses, including a recommendation to implement network intrusion detection devices and other security controls. While EOUSA’s comments cited plans that are consistent with many of our security-related recommendations, it disagreed with the recommendation relative to its wide area network on the grounds that this network is managed, secured, and monitored by Justice and Sprint. We understand that the WAN is not managed by EOUSA, and accordingly our recommendation was aimed at actively monitoring the network routers, firewalls, and VPN devices, which are managed by EOUSA. To avoid any confusion about this recommendation, we have clarified its wording to better reflect our intentions. Similarly, in light of the recent progress that EOUSA has made replacing its VPN system, we have adjusted our finding and recommendation concerning the office’s exposure to risk from its old VPN system.
Second, with respect to our statements that EOUSA has not treated institutionalization of each of the four IT management disciplines— enterprise architecture management, IT investment management, system acquisition management, and information security management—as agency priorities, the office stated that these statements were unfair and that it did not agree with them. To support its position, EOUSA made the following two points: (1) it has made tremendous progress, as evidenced by our report recognizing those best practices that it is satisfying, and (2) it has received the highest level of support from Justice, as evidenced by the establishment of the EOUSA CIO position in 2001, the progress that has been made in the last 2 years compared to other Justice component organizations, and EOUSA’s being viewed by Justice senior management as a leader in IT management. We do not challenge EOUSA’s two points because they are not relevant to our position regarding treating institutionalization of each of the four IT management disciplines as agency priorities. Our position is based on two facts that EOUSA did not dispute: (1) plans for addressing the weaknesses cited in our report do not exist and (2) limitations in resources to address these weaknesses were cited by EOUSA officials as the reason why the weaknesses exist. In our view, if each of these areas were an agency priority, then plans would be in place to address the weaknesses, and resources to execute the plans would be committed. Third, with respect to our conclusion that EOUSA is currently limited in its ability to meet Justice’s strategic goal of improving its IT systems, and that the USAOs are thereby challenged in their ability to effectively and efficiently meet their mission goals and objectives, the office disagreed but did not offer any comments to counter our conclusion beyond those cited above. 
Given that any organization’s ability to effectively leverage technology is determined in large part by its institutionalized capabilities in these four IT disciplines, we have not modified our conclusion. EOUSA provided additional comments that have been incorporated in the report as appropriate. EOUSA’s written comments are reproduced in appendix III, along with our detailed evaluation of each comment. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees. We will also send copies to the Director of the Office of Management and Budget, the Attorney General of the United States, the EOUSA Director, and the EOUSA CIO. We will also send copies to others upon request. In addition, copies will be available at no charge on our Web site at www.gao.gov. Should you or your offices have questions on matters discussed in this report, please contact me at (202) 512-3439. I can also be reached by E-mail at [email protected]. An additional GAO contact and staff acknowledgments are listed in appendix IV. Our objective was to determine the extent to which the Executive Office for United States Attorneys (EOUSA) has institutionalized key information technology (IT) management capabilities to achieve the Department of Justice’s strategic goal of improving the integrity, security, and efficiency of its IT systems. To meet this objective, we focused on whether EOUSA had institutionalized four key IT management disciplines: enterprise architecture management, IT investment management, information security management, and system acquisition management. To evaluate EOUSA’s enterprise architecture (EA) management, we first solicited responses to an EA management questionnaire, reviewed EA plans and products, and interviewed officials to verify their responses. 
Next, we compared the information that we had collected with GAO’s February 2002 EA management maturity framework to determine the extent to which EOUSA was employing effective EA management practices. This framework is based on the Practical Guide to Federal Enterprise Architecture, published by the Chief Information Officers’ (CIO) Council. We did not use the revised framework issued in April 2003 because, by then, we had already completed our work.

To evaluate EOUSA’s IT investment management (ITIM), we used GAO’s ITIM framework and assessed the extent to which EOUSA had satisfied the critical processes associated with stage 2 of the five-stage framework—building the investment foundation. We focused on stage 2 processes because officials told us that they had only recently begun defining and implementing the specific practices that are associated with this stage. To conduct our assessment, we reviewed relevant EOUSA and Justice policies, procedures, guidance, and documentation—including the office’s investment management guide and associated memorandums, project proposals, and budget documents. We also interviewed the CIO and the senior official who is responsible for implementing IT investment management. We then compared this information with our maturity framework to determine the extent to which the office was employing effective IT investment management practices.

To evaluate EOUSA’s information security management, we used our executive guide for information security management, as well as Justice policy and guidance and relevant EOUSA U.S. Attorney Procedures. We reviewed internal Justice and other reports identifying security weaknesses at Justice and EOUSA and information on how these weaknesses will be addressed. We also reviewed the certification and accreditation package and the deployment schedule for the virtual private network that the office is currently deploying, because EOUSA and the USAOs rely on this network to carry out their missions.
We interviewed Justice officials and EOUSA officials within the Office of the CIO about the office’s security management. To evaluate EOUSA’s system acquisition management, we used the Software Engineering Institute’s Software Acquisition Capability Maturity Model, focusing on six of the seven key process areas that are defined for level 2 of the model’s five-level maturity scale. We focused on level 2 processes because they represent the minimum level of maturity needed to effectively manage system acquisition projects. We used the office’s acquisition of the Enterprise Case Management System as a case study because officials stated that it is representative of how they intend to acquire systems. In addition, this system will be critical in providing fundamental support to the U.S. Attorneys as they work to achieve mission goals. We reviewed key project documentation, such as the concept of operations, project plan, and requirements traceability matrix, and we interviewed system acquisition officials. We also reviewed the Justice guidance used to manage the project. We then compared this information to the Software Acquisition Capability Maturity Model to determine the extent to which the office was employing effective system acquisition management practices. We performed our work at EOUSA headquarters in Washington, D.C., from November 2002 to May 2003, in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Justice’s letter dated June 16, 2003. 1. We disagree. Our position that institutionalization has not been a priority is based on two facts that EOUSA did not dispute: (1) plans for addressing the weaknesses cited in our report do not exist and (2) limitations in resources to address these weaknesses were cited by EOUSA officials as the reason why the weaknesses exist. 
If each of these areas were an agency priority, then plans would be in place to address the weaknesses, and resources to execute the plans would be committed. 2. We do not question EOUSA’s statement that it has made “tremendous progress.” Our work focused on determining the extent to which EOUSA currently satisfies key practices in the four IT management disciplines. It did not include developing a baseline from which to measure progress. To EOUSA’s credit, our review showed that the office has satisfied many key practices in each discipline, and we have noted this in our report. 3. We agree and include both of these facts in our report. 4. We disagree. EOUSA’s comments did not include any information to refute our conclusion. Given that it did not have a plan for fully implementing best practices for each discipline, and had not allocated adequate resources to support such a plan, we have not modified our conclusion. 5. We do not question these statements about the position of EOUSA and the USAOs relative to other Justice components. Such a comparison was not part of the scope of our work. 6. We disagree. EOUSA has not gained this maturity level. Rather, according to EOUSA, the contractor that maintains its LIONS application is certified as a level 3 software developer. In contrast, our work focused on EOUSA’s capabilities as a software acquirer, and thus addresses a different organization, discipline, and maturity model. 7. See comment 1. 8. We do not question this statement because the position of EOUSA and the USAOs relative to other Justice components or other law enforcement entities was not part of the scope of our work. 9. As noted in our report, EOUSA satisfied about 80 percent of the elements of just stage 2 of the EA management framework. It has satisfied about 60 percent of the elements (12 out of 19) of the entire framework. 10. We have modified the report to reflect this comment. 11. We agree. 
However, according to GAO’s IT Investment Management Framework, to satisfy the proposal selection critical process, EOUSA would need to demonstrate the use of the criteria it has defined. Because it has not yet done so, it is not satisfying the critical process and thus has met two out of five elements of stage 2 of the framework. 12. We disagree. Our assessment is based on EOUSA’s satisfaction of key practices laid out in our executive guide for information security management. This assessment showed that EOUSA has not fully satisfied any of these key practices. For example, EOUSA does not (1) have a central security focal point with appropriate resources, (2) adequately promote user awareness, and (3) regularly monitor the effectiveness of security controls. Until EOUSA addresses these and other security weaknesses we identify in our report, it will not have implemented effective security practices. 13. See comment 8. 14. We do not question this statement because determining whether the data of the United States Attorneys have never been compromised and are monitored 24 hours a day, 7 days a week was not within the scope of our work and EOUSA did not provide any evidence supporting its assertions. 15. See comment 1. Additionally, our finding is that the institutionalization of information security management has not been an agency priority. 16. We do not question these security initiatives. Additionally, we emphasize that our message is not that EOUSA has not taken steps to improve its information security posture, but rather that the office’s information security management efforts, including ongoing and completed improvement steps, are weak in a number of areas relative to information security management best practices. 17. We agree, but would add that our recommendation could be addressed by actively monitoring activity at the routers, firewalls, and wide area network devices, which we understand are remotely managed by EOUSA. 
To avoid any potential confusion on this point, we have clarified our recommendation. Implementing an intrusion detection system to monitor activity at the routers, firewalls, and other network devices would enable EOUSA to detect hostile attempts to manage those devices. 18. We do not question EOUSA’s statement that it has been working to resolve vulnerabilities identified during a security audit conducted by the Justice Inspector General. The scope of the Inspector General’s audit, however, was narrower than ours in that it focused on EOUSA’s local area network environment. 19. We agree that given EOUSA’s recent progress in deploying the replacement network its exposure to risk is currently limited. We have modified the security risk assessment section of the report and the associated recommendation to reflect this change in circumstances. 20. We agree and thus do not conclude that EOUSA’s risk assessment program is inadequate. Rather, based on the fact that a risk assessment was not performed on the network that EOUSA has operated since 1996 and, until recently, relied exclusively on, we conclude that EOUSA has not always performed risk assessments. Additionally, to recognize the recent change in circumstances we have modified our recommendation concerning risk assessments. 21. We do not question these statements. We support the use of automated tools to review audit logs, particularly because these logs were not being reviewed, and EOUSA attributed this to a lack of resources. We also support EOUSA’s plan to conduct regular tests to determine compliance with policies and procedures. Both of these planned actions are consistent with our recommendations. 22. We do not question this statement. However, as noted in our report, the office did not have a plan to address the issues that are discussed in our report. 23. We do not question these statements because our review did not address contingency plans for all certified and accredited systems. 
As stated in the report, while a contingency plan was developed for the replacement network, it was not prepared in accordance with federal guidelines. For example, the plan did not specify procedures for notifying recovery personnel. To clarify our position, we have added examples to the report of this plan’s noncompliance with federal guidelines. 24. We support EOUSA’s stated commitment to establish a centralized security training program. Establishing such a program is consistent with our recommendations. 25. We have modified the report to reflect that the Enterprise Case Management System is the first acquisition to follow the Justice life-cycle methodology from its inception. 26. See comment 6. 27. We have modified the report to reflect that EOUSA’s acquisitions are processed through the department and must comply with all departmental policies and procedures.

In addition to the individual named above, Nabajyoti Barkakati, Jamey Collins, Joanne Fiorino, Anh Q. Le, Sabine R. Paul, and William F. Wadsworth made key contributions to this report.

The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products.
The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.
The Executive Office for United States Attorneys (EOUSA) of the Department of Justice is responsible for managing information technology (IT) resources for the United States Attorneys' Offices. GAO was asked to determine the extent to which EOUSA has institutionalized key IT management capabilities that are critical to achieving Justice's strategic goal of improving the integrity, security, and efficiency of its IT systems. To varying degrees, EOUSA has partially defined and implemented certain IT management disciplines that are critical to successfully achieving the Justice Department's strategic goal of improving the integrity, security, and efficiency of its IT systems. However, it has yet to institutionalize any of these disciplines, meaning that it has not fully defined policies and procedures in accordance with relevant guidance, and it has yet to fully implement those it has defined. In particular, while EOUSA has developed an enterprise architecture--a blueprint for guiding operational and technological change--the architecture was not developed in accordance with certain best practices. In addition, while the office has implemented certain process controls for selecting, controlling, and evaluating its IT investments, it has not yet implemented others that are necessary in order to develop an effective foundation for investment management. Further, it has not implemented important management practices that are associated with an effective security program. In contrast, it has defined--and is implementing on a major system that we reviewed--most, but not all, of the management practices associated with effective systems acquisition. Institutionalization of these IT management disciplines has not been an agency priority and is not being guided by plans of action or sufficient resources.
Until each discipline is given the priority it deserves, EOUSA will not have the IT management capabilities it needs to effectively achieve the department's strategic goal of improving the integrity, security, and efficiency of its IT systems.